What is (Artificial) Intelligence?
The science of Artificial
Intelligence (AI) might be defined as the construction
of intelligent systems and their analysis. A natural definition of a system
is anything that has an input and an output stream. Intelligence is more
complicated: it has many faces, like creativity,
solving problems, pattern recognition,
surviving in an environment, language
processing, knowledge, and
many more. A formal definition incorporating every aspect of intelligence,
however, seems difficult.
An important observation is that most, if not all, known facets of intelligence can be formulated as goal-driven
or, more precisely, as maximizing
some utility function. It is therefore sufficient to study
goal-driven AI. For example, the (biological) goal of animals and humans is to
survive and spread. The goal of AI systems should be to be useful to
humans. The problem is that, except for special cases, we know in advance neither the
utility function nor the environment in which the system will operate.
The mathematical theory, coined AIXI (see next item), represents a formal
solution of these problems.
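The framing above, a system as input and output streams whose behavior is scored by a utility function, can be sketched in a few lines of Python. The two-action toy environment and its reward probabilities below are invented purely for illustration:

```python
# Toy illustration: with the environment's reward distribution fully
# known, the goal-driven (utility-maximizing) choice is simply the
# action of maximal expected utility.
REWARD_DIST = {
    "a": [(0.9, 1.0), (0.1, 0.0)],  # action "a": reward 1 with prob. 0.9
    "b": [(0.5, 2.0), (0.5, 0.0)],  # action "b": reward 2 with prob. 0.5
}

def expected_utility(action):
    """Expected reward of an action under the known distribution."""
    return sum(p * r for p, r in REWARD_DIST[action])

def rational_action():
    """The utility-maximizing choice."""
    return max(REWARD_DIST, key=expected_utility)
```

Here `rational_action()` picks `"b"` (expected utility 1.0 versus 0.9). The hard part is precisely that, in general, neither the utility function nor the distribution is known in advance.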
Universal Artificial Intelligence.
Sequential decision theory formally
solves the problem of rational agents
in uncertain worlds if the true environmental
probability distribution is known. Solomonoff's theory of universal
induction formally solves the problem of sequence prediction
for unknown distributions. By a formal solution I mean a rigorous
mathematical definition, uniquely specifying the solution. I unified
both theories and gave strong arguments that the resulting
universal AIXI model behaves optimally
in any computable environment. I also made some progress
towards a computable AI theory.
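In compact form, the AIXI action selection is an expectimax over all environments, i.e. all programs q consistent with the history on a universal Turing machine U, weighted by the universal prior 2^{-ℓ(q)} (a sketch of the full definition; a_i are actions, o_i r_i observation-reward pairs, m the horizon, ℓ(q) the length of program q):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       (r_k + \cdots + r_m)
       \sum_{q \,:\, U(q,\,a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```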
I constructed an algorithm AIXItl,
which is superior to any other time t and space l
bounded agent. The computation time of AIXItl is of
the order t·2^l. The constant 2^l
is still too large to allow a direct implementation. My main focus is on
theoretical studies related to the AIXI model and on further reducing the
time-complexity of AIXItl down to a point where it
runs on today's computers with reasonable computation time. This approach may be
characterized as a mathematical top-down approach.
I am sponsoring a prize of up to 50'000€ for compressing human
knowledge, widely known as the Hutter Prize.
The compression contest is motivated by the fact that
being able to compress well is closely
related to acting intelligently, thus reducing
the slippery concept of intelligence to hard file size numbers.
In order to compress data, one has to find regularities in them,
which is intrinsically difficult (many researchers make a living from
analyzing data and finding compact models).
So compressors beating the current "dumb" compressors need to be
smart(er). Since the prize
wants to stimulate developing "universally" smart compressors,
we need a "universal" corpus of data.
Arguably the online encyclopedia Wikipedia
is a good snapshot of human world knowledge.
So the ultimate compressor of it should "understand"
all human knowledge, i.e. be really smart.
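The compression-prediction link can be made concrete: a model that assigns probability p to the next symbol can code it in -log2(p) bits via arithmetic coding, so better prediction directly means smaller files. Below is a minimal sketch with a Laplace-rule bit predictor; the predictor and the sequences are illustrative, not the contest's actual compressors:

```python
import math

def ideal_code_length(bits):
    """Bits needed to code a 0/1 sequence under a Laplace-rule
    predictor p(next = 1) = (#ones + 1) / (#seen + 2); an arithmetic
    coder realizes this length to within about 2 bits."""
    ones = zeros = 0
    total = 0.0
    for b in bits:
        p_one = (ones + 1) / (ones + zeros + 2)
        p = p_one if b == 1 else 1.0 - p_one
        total += -math.log2(p)  # code length of this symbol
        if b == 1:
            ones += 1
        else:
            zeros += 1
    return total

# A predictor that finds the regularity codes 1000 ones in about
# 10 bits; a "dumb" uniform coder would need the full 1000 bits.
```

For `[1] * 1000` the total is log2(1001), roughly 10 bits, against 1000 bits for uniform coding. This i.i.d. model cannot exploit other regularities such as alternation, which is exactly why smarter models, and hence smarter compressors, keep winning.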
My AI research projects (till 2006).
My AI research is centered around Universal
Artificial Intelligence in general and the optimal AIXI model
in particular. I outlined for a number of problem
classes, including sequence prediction,
strategic games, function
minimization, reinforcement and supervised
learning, how they fit into the general AIXI model and how AIXI
formally solves them.
The fastest algorithm for all well-defined problems
arose from the construction of the time-bounded AIXItl model.
I proved tight error and
loss bounds for Solomonoff's universal sequence prediction
scheme. I also work on robust predictions,
optimization algorithms, including
a novel selection scheme for evolutionary algorithms,
a linear time approximation algorithm for the knapsack problem,
reinforcement planning, and direct policy search.
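The flavor of the knapsack approximation mentioned above can be conveyed by the classic density-greedy scheme below; this is a simplified sketch (it sorts, hence O(n log n), whereas a linear-time version replaces sorting by median-based partitioning). Taking the better of the greedy packing and the single most valuable item guarantees at least half the optimal value:

```python
def knapsack_half_approx(items, capacity):
    """0/1-knapsack 1/2-approximation (sketch; assumes weights > 0).
    items: list of (value, weight) pairs."""
    # Discard items that cannot fit at all.
    items = [(v, w) for v, w in items if w <= capacity]
    if not items:
        return 0.0
    # Greedy by value density (this sort is the O(n log n) part).
    items.sort(key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value = total_weight = 0.0
    for value, weight in items:
        if total_weight + weight <= capacity:
            total_value += value
            total_weight += weight
    # The better of the greedy packing and the best single item
    # is provably at least half the optimum.
    return max(total_value, max(v for v, _ in items))
```

For items `[(60, 10), (100, 20), (120, 30)]` and capacity 50 this returns 160, while the optimum is 220, comfortably within the factor-2 guarantee.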
Early student work
is on classifier systems, reinforcement-learning Hebb nets,
and computer graphics.
I did my PhD in particle physics and
then performed applied research on various topics in a medical software company.
Kolmogorov defined the complexity of a string
as the length of its shortest description on a universal Turing machine.
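Written out, with U a universal Turing machine and ℓ(p) the length of program p, this reads:

```latex
K(x) := \min_{p} \{\, \ell(p) : U(p) = x \,\}
```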
A string is simple if it can be described by a short program,
like "the string of one million ones", and is complex if
there is no such short description, like for a random string whose
shortest description is specifying it bit-by-bit.
Kolmogorov complexity is a key concept in (algorithmic) information theory
and for universal sequence prediction.
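A crude but computable stand-in for the shortest description is an off-the-shelf compressor; the sketch below uses Python's zlib to contrast the two cases from the text:

```python
import os
import zlib

simple = b"1" * 1_000_000      # "the string of one million ones"
random_ = os.urandom(125_000)  # one million random bits

# The simple string has a short description, so it compresses to a
# tiny fraction of its size; the random string is incompressible and
# stays at essentially full length (zlib even adds a little overhead).
simple_size = len(zlib.compress(simple, 9))
random_size = len(zlib.compress(random_, 9))
```

Of course zlib is far weaker than a universal Turing machine, so compressed size only upper-bounds Kolmogorov complexity, but the gap between the two cases is already dramatic.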
The Algorithmic Information Theory page contains references to tutorials, courses,
books, papers, people's homepages, events, and more.
There is a (moderated) mailing list
intended for people in Computer Science, Statistics, Mathematics,
and other disciplines with interests in Kolmogorov Complexity,
Solomonoff Induction, Levin Search, Martin-Löf Randomness, and related areas.
Researchers in these fields are encouraged
to join the list and participate.
Recommended Textbooks and Courses
These are the texts most relevant
for understanding and modelling intelligent behavior in general, and for
developing the AIXI model in particular.
For novices I suggest starting with the
Reinforcement Learning book by Sutton and Barto. It requires no background knowledge,
and describes the key ideas, open problems, and great applications of this field.
The Artificial Intelligence book by Russell and Norvig gives a comprehensive overview of
AI in general. The Kolmogorov Complexity book by Li and Vitanyi
is an excellent introduction to algorithmic information theory.
If you have some background in reinforcement learning or decision theory and in algorithmic information theory, you
may be interested in my Theory of Universal Artificial Intelligence.
I'm always looking for good PhD students
and occasionally have money for PostDocs.
In 2009 I hired Peter Sunehag as a PostDoc
for working on Universal AI&RL.
In 2008 I hired Rakib Ahmed as a PostDoc for working
on a computer vision project.
In 2006 I co-hired Paola Ranco as a PhD student.
In 2005 I hired Daniil Ryabko as a PostDoc, and
in 2003 I hired Shane Legg for a PhD and
Jan Poland as a PostDoc
for investigating the AIXI model.
© 2000 by Marcus Hutter