Translated from journal: The World of Intelligence 1, November/December 2005, pages 43-45

Measuring the intelligence of a machine

Dossier: Creating an artificial brain. Marcus Hutter and Shane Legg defend a new method of measuring artificial intelligence, something no other test has yet succeeded in doing. French original by Cyril Fievet; English translation by Philippa Hutter and Patricia Altermatt.

In 1905 the French psychologist Alfred Binet and his colleague Theodore Simon gave birth to what is considered the first intelligence test in history. The "Binet-Simon" test, initially intended to measure the intellectual maturity of children, developed into multiple versions, inspired many other tests, and aroused serious controversy. Stemming from debates over what is innate and what is acquired, and over the influence of culture on the development of an individual's intelligence, IQ tests have been greatly reproached for their limited character. Moreover, these inevitably simplistic scales of measurement shed little light on the principal question: how to define intelligence? The revolution of new technologies poses a further question: can artificial intelligence be compared to that of humans?

Intelligence need not be defined only in the sense of human intelligence

The IBM chess computer would be awarded a very low intelligence measure.

Transposed to the world of machines, intelligence tests have until now either run into stumbling blocks or given rise to new controversies. The best known among these tests, the Turing test, has long been the subject of virulent criticism, but has nevertheless given rise to a fierce yearly competition that is followed with interest (see below). In this context, proposing a new test of intelligence adapted to mechanical and electronic systems is like attempting the impossible. This is, however, what Marcus Hutter and Shane Legg, two researchers at the Dalle Molle Institute for Artificial Intelligence (IDSIA) in Switzerland, have done. "One of the fundamental difficulties in artificial intelligence stems from the fact that nobody really knows what intelligence is, particularly where it concerns systems equipped with senses, environments, motivations, and cognitive capacities different from our own," they write.

Measuring the capacity of a machine to achieve its goals

Marcus Hutter and Shane Legg question overly anthropocentric definitions of artificial intelligence. They outline the various properties that play a role in this type of intelligence: an entity called an "agent" interacts with a situation or an external problem, the "environment", of which it has only partial knowledge. The agent's ability to solve this problem presupposes the existence of a goal, which altogether allows an informal definition of intelligence to be postulated: "Intelligence measures the capacity of an agent to reach its goals in a large class of environments."

The Swiss researchers then define a set of data and criteria designed to formalize these principles. For example, a return signal, called the reward, measures how close the agent is to the goal it has been assigned: the agent simply tries to maximize the level of reward it receives, by learning the structure of the environment and by working out what it must do in order to receive the highest reward.

The principal contribution of this theory is, without doubt, its great flexibility. The model can be applied to all types of systems, in particular software platforms or robots equipped with diverse functions (mobility, visual and voice recognition, etc.). Moreover, as Hutter and Legg note, a very specialized system is penalized by this type of test. "The IBM chess computer Deep Blue [which beat the official world champion in 1997, editor's note] would be ineffective outside of its very specific environment. It would be awarded a very low measure of universal intelligence. This is compatible with our vision of intelligence as a strong ability to adapt," they continue. "It is necessary to understand that we are not trying to define intelligence in the human sense, but rather to consider intelligence as a more global concept, of which human intelligence is a specific case," explains Shane Legg.
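The agent-environment-reward loop described above can be sketched in a few lines of code. The toy bandit environments, the greedy learning agent, and the integer "complexity" weights below are purely illustrative assumptions for this sketch; the formal Legg-Hutter definition weights each environment by its Kolmogorov complexity, which is not computable and is only mimicked here.

```python
import random

class BanditEnvironment:
    """Toy environment: each action pays off with a fixed probability."""
    def __init__(self, payout_probs, complexity):
        self.payout_probs = payout_probs
        self.complexity = complexity  # stand-in for Kolmogorov complexity

    def step(self, action, rng):
        # Reward signal: 1 if the chosen action pays off, else 0.
        return 1.0 if rng.random() < self.payout_probs[action] else 0.0

class GreedyLearner:
    """Agent that learns each action's average reward and picks the best."""
    def __init__(self, n_actions):
        self.totals = [0.0] * n_actions
        self.counts = [1e-9] * n_actions  # avoid division by zero

    def act(self, rng, explore=0.1):
        if rng.random() < explore:          # occasionally explore
            return rng.randrange(len(self.totals))
        means = [t / c for t, c in zip(self.totals, self.counts)]
        return means.index(max(means))      # otherwise exploit

    def learn(self, action, reward):
        self.totals[action] += reward
        self.counts[action] += 1

def expected_reward(env, n_steps=2000, seed=0):
    """Average reward a fresh agent earns over one run in one environment."""
    rng = random.Random(seed)
    agent = GreedyLearner(len(env.payout_probs))
    total = 0.0
    for _ in range(n_steps):
        a = agent.act(rng)
        r = env.step(a, rng)
        agent.learn(a, r)
        total += r
    return total / n_steps

def universal_score(envs):
    """Complexity-weighted average reward: simpler environments count more."""
    weights = [2.0 ** -e.complexity for e in envs]
    scores = [expected_reward(e) for e in envs]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

envs = [
    BanditEnvironment([0.9, 0.1], complexity=1),       # a simple environment
    BanditEnvironment([0.2, 0.8, 0.5], complexity=3),  # a more complex one
]
print(universal_score(envs))  # a score between 0 and 1
```

In this spirit, a specialist (an agent hard-wired for one bandit) would score well in its own environment but poorly in the weighted average over all of them, which is exactly how Deep Blue is penalized in the passage above.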

The success of the Legg-Hutter test would mark a turning point in the world of research

For the moment, the criticism aroused by the principles of the test seems weak. "Some believe one cannot define intelligence with a formula and so conclude that our definition must be wrong. That is a position of principle rather than a specific criticism," states Shane Legg, who adds that in general the response to the test has been "very positive": "Many wish there were a clean definition of non-anthropocentric intelligence. It is surprising, then, that so few people have tried to define what intelligence is for a machine." In any case, a practical implementation of the Legg-Hutter test is awaited. As Blay Whitby emphasized in last August's edition of New Scientist, "There will be objections to this test, but it is a good start." And to conclude: "Arriving at a definition of intelligence that works for artificial intelligence might be one of the key points directing the future of this discipline."

The Turing Test

No machine has ever won the Loebner conversation competition prize.

Described in 1950 by the English mathematician Alan Turing, the Turing test remains the reference for measuring the level of intelligence demonstrated by a machine. The test holds that if a human observer, via an informal conversation, cannot determine with certainty that he is speaking with an artificial entity, that entity must be considered intelligent. To this day, no machine has passed the Turing test, despite the Loebner competition, held each year since 1991, which offers a $100,000 reward for a piece of conversation software capable of passing as human in the eyes of an expert jury. The Loebner prize is based on the same principle as the Turing test, but this approach has been highly criticized by numerous specialists who believe it rests on an overly limited and anthropocentric vision of the notion of intelligence: a machine capable of passing the Turing test would not necessarily be intelligent, and conversely, many say, a machine could be intelligent without being capable of expressing itself like a human being.

For more information see ...
