The intention of this prize
is to provide an incentive for advancing
the field of Artificial Intelligence through the compression of
human knowledge. The better one can compress the encyclopedia Wikipedia, the better one can predict; and being able to predict well is key to being able to act intelligently. The compression needs to be lossless, irrespective
of the fact that the human brain is a lossy compressor.
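The link between compression and prediction can be made concrete: under ideal (arithmetic) coding, a predictive model that assigns probability p to each symbol needs -log2(p) bits for it, so a better predictor yields a shorter encoding. A minimal sketch, using a uniform model versus an empirical-frequency model as a stand-in for "worse" and "better" predictors (a real compressor would estimate its probabilities online):

```python
import math
from collections import Counter

def code_length_bits(text, prob):
    # Ideal arithmetic-coding length: sum of -log2 p(symbol) over the text.
    return sum(-math.log2(prob(c)) for c in text)

text = "abracadabra"

# Model 1: uniform over the 26 lowercase letters (predicts nothing).
uniform = lambda c: 1 / 26

# Model 2: empirical letter frequencies of the text itself
# (a toy stand-in for a good predictor).
counts = Counter(text)
empirical = lambda c: counts[c] / len(text)

uniform_bits = code_length_bits(text, uniform)
empirical_bits = code_length_bits(text, empirical)
assert empirical_bits < uniform_bits  # better prediction => shorter code
```

The same identity runs in the other direction: a program that compresses Wikipedia well implicitly contains a good predictive model of its contents.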
The competition has been announced, discussed, and reviewed in many news magazines, groups, and mailing lists, though the discussion is typically of low quality.
"Marcus Hutter and Shane Legg defend a new method of measuring artificial intelligence.
To this day no other test has succeeded in doing this.
The IBM chess computer would be awarded a very low intelligence measure.
Marcus Hutter and Shane Legg
question overly anthropocentric definitions of artificial intelligence.
"How do you tell just how smart your robot is? Simple: give it a universal IQ test.
Traditional measures of human intelligence often won't be appropriate for systems that have senses, environments and cognitive capacities very different from our own. So Shane Legg and Marcus Hutter at the Swiss Institute for Artificial Intelligence in Manno-Lugano have drafted an alternative test that will allow the intelligence of vision systems, robots, natural language processing programs or trading agents to be compared and contrasted despite their broad and disparate functions.
Although there is no consensus on what exactly human intelligence is, most views appear to cluster around the idea that it hinges on a general ability to achieve goals in a wide range of environments, says Legg.
The same can be applied to an AI system, by measuring its ability to carry out complex tasks within its particular environment, compared with all other environments.
'But there is a problem,' he says. Before putting this theory into practice the AI community will have to thrash out an agreement on just how complex each environment is. And that won't be easy. Under his definition, chess-playing computer Deep Blue would come out worse than a generalist learning algorithm, as it is only designed to carry out a very specific task."
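The "ability to achieve goals in a wide range of environments" that Legg describes is formalized in Legg and Hutter's work as a single expected-value expression. As a sketch of that definition (notation follows their papers; symbols here are stated from memory, not from the article quoted above):

```latex
% Universal intelligence of an agent \pi: its expected value V_\mu^\pi
% across all computable environments \mu in E, each weighted by its
% Kolmogorov complexity K(\mu) -- simpler environments count more.
\Upsilon(\pi) \;:=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

The complexity weight $2^{-K(\mu)}$ is exactly the "how complex is each environment" question the article says the AI community must thrash out: a specialist like Deep Blue scores well only on a few environments, while a generalist learner accumulates value across many.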
ACM Reviews (27 Apr 2005, #CR131175) (cached)
"This strictly mathematical and information-theoretic book
seeks to solve the quest for the optimal and universal algorithm
of intelligent behavior and, to the extent that I have been able to verify and check this bold attempt, succeeds.
So, the case is closed from a mathematician's point of view.
The book provides a clear exposition for nonspecialists interested
in the foundations of AI.
This work is a rare example of a book that really fulfills the goals stated in its blurb."
Artificial Intelligence (2006) (cached)
"Hutter is very careful to both state his assumptions and
point out the limitations of AIXI. Regardless of how outlandish the
main claim of the book may sound (i.e., that AIXI is universally
optimal), we believe that within the context of his assumptions the
claim is justified, ... Just as Solomonoff’s work on universal induction prompted decades of
work by others ..., we expect the same to be true of Hutter’s work
on universal artificial intelligence."
"For anybody interested in universal theories of
artificial intelligence this book is a must ...
This is the only mathematical definition of universal AI I know of,
and maybe the only one possible."
"Research on artificial intelligence is moving toward the creation of intelligent systems able to learn by themselves from experience, as is the case with the human mind. This is the target that the research project 'Universal Artificial Intelligence' has set itself. Under the supervision of Marcus Hutter, this research is carried out at the Dalle Molle Institute for Artificial Intelligence (IDSIA), a joint institute of Università della Svizzera italiana (USI) and Scuola Universitaria Professionale della Svizzera Italiana (SUPSI). The funding is provided by the Swiss National Science Foundation (SNF) ..."
"Evolutionary computing is usually about making everybody better. It turns out that for complicated problems keeping the losers around is the way to go.
Computer algorithms that solve problems by mimicking evolution generally compare several possible answers that are represented as individuals made up of a group of genetic traits. The algorithms choose the best ones, immediately discard the rest, and mix the traits from the winners to produce a new generation to choose from. A researcher from Switzerland has taken a different approach with an evolutionary algorithm that shows promise for solving difficult problems with lots of possible answers. The key difference is the Fitness Uniform Selection Strategy (FUSS) algorithm doesn't immediately discard the losing combinations ..."
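The selection rule described above can be sketched in a few lines. This is a minimal illustration of fitness uniform selection as the article characterizes it, not Hutter's reference implementation: draw a target fitness level uniformly between the current worst and best, then pick the individual closest to that level, so low-fitness "losers" retain a real chance of being selected.

```python
import random

def fuss_select(population, fitness):
    # Fitness Uniform Selection (sketch): sample a fitness level
    # uniformly in [worst, best], then return the individual whose
    # fitness is nearest to that level.
    fits = [fitness(ind) for ind in population]
    lo, hi = min(fits), max(fits)
    target = random.uniform(lo, hi)
    nearest = min(range(len(population)), key=lambda i: abs(fits[i] - target))
    return population[nearest]

# Usage: select a parent from a toy population scored by identity fitness.
pop = [3, 7, 1, 9, 4]
parent = fuss_select(pop, fitness=lambda x: x)
assert parent in pop
```

Because the target level is uniform over the fitness range rather than biased toward the top, selection pressure concentrates wherever individuals are rare, which is what keeps diverse, currently-losing lineages alive.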
"A short excursion on the concept of infinity in theoretical computer science
in general, and in computability theory and discrete computation in particular.
Let us start by saying that the concept of infinity is not at all friendly in the branch of mathematics with which we are concerned, namely theoretical computer science. Such a concept is related to incomputability, the worst word any computer scientist concerned with a concrete problem can hear ..."
Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for an unknown distribution. The AIXI model unifies these two well-known but very different ideas into one parameter-free theory of an optimal reinforcement learning agent embedded in an arbitrary unknown environment. Most if not all AI problems can easily be formulated within this theory, which reduces the conceptual problems to purely computational ones. Example problem classes are sequence prediction, strategic games, function minimization, and reinforcement and supervised learning. Other issues of importance are intelligence order relations, the horizon problem, and relations to other approaches to AI.
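The unification of decision theory with Solomonoff induction is usually written as a single expectimax expression. As a sketch of the AIXI action rule (notation follows Hutter's papers and is reproduced here from memory, so treat the details as an approximation):

```latex
% At cycle k, AIXI picks the action maximizing expected total reward up to
% horizon m, where the unknown environment is replaced by a mixture over all
% programs q for a universal Turing machine U, weighted by length \ell(q).
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl[ r_k + \cdots + r_m \bigr]
  \sum_{q \,:\, U(q,\,a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum over programs plays the role of the true environmental distribution in classical decision theory: since it is unknown, AIXI replaces it with Solomonoff's universal prior, which is what makes the model parameter-free and also what makes it incomputable in practice.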