On the Philosophical, Statistical, and Computational Foundations of Inductive Inference and Intelligent Agents
Author: Marcus Hutter (2007)
Comments: 104 slides
Tutorial at: International Conference on Algorithmic Learning Theory (ALT 2007), Sendai
Motivation: The dream of creating artificial devices that reach or outperform human intelligence is an old one; however, despite considerable effort over the last 50 years, no computationally efficient theory of true intelligence has yet been found. Most research nowadays is more modest, focusing on narrower, specific problems associated with only some aspects of intelligence, such as playing chess or natural-language translation, either as a goal in itself or as a bottom-up approach. The dual, top-down approach is to first find a formal (mathematical, not necessarily computational) solution of the general AI problem, and then to consider computationally feasible approximations. Note that the AI problem remains non-trivial even when computational aspects are ignored.
Inductive inference: A key property of intelligence is to learn from experience, build models of the environment from the acquired knowledge, and use these models for prediction. In philosophy this is called inductive inference, in statistics it is called estimation and prediction, and in computer science it is addressed by machine learning.
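The Bayesian view of this learn-and-predict cycle can be sketched with a toy example: a finite mixture of Bernoulli hypotheses standing in for Solomonoff's (incomputable) universal mixture. All names and numbers below are illustrative, not from the tutorial.

```python
# Toy Bayesian sequence prediction: weight each candidate model by
# prior * likelihood, then predict with the weighted average.

def bayes_mixture_predict(sequence, thetas, prior):
    """Return P(next bit = 1 | sequence) under a Bayes mixture."""
    weights = []
    for theta, p in zip(thetas, prior):
        likelihood = 1.0
        for bit in sequence:
            likelihood *= theta if bit == 1 else (1 - theta)
        weights.append(p * likelihood)
    total = sum(weights)
    # Mixture prediction: posterior-weighted average of each model's prediction
    return sum(w * theta for w, theta in zip(weights, thetas)) / total

# Three candidate coin biases with a uniform prior
thetas = [0.2, 0.5, 0.8]
prior = [1 / 3, 1 / 3, 1 / 3]
print(bayes_mixture_predict([1, 1, 1, 1], thetas, prior))
```

After observing four 1s the posterior concentrates on the 0.8-biased hypothesis, so the mixture's prediction for the next bit rises well above 1/2.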
Intelligent agents: The second key property of intelligence is to exploit the learned predictive model for making intelligent decisions or taking actions. Together with learning, this is called reinforcement learning in computer science, adaptive control in engineering, and sequential decision theory in statistics and other fields.
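The agent-environment interaction cycle behind all of these fields can be sketched in a few lines. The epsilon-greedy bandit agent and the two-armed environment below are illustrative stand-ins, not the agents treated in the tutorial.

```python
import random

def environment(action):
    """Hypothetical two-armed bandit: arm 1 pays off more often than arm 0."""
    payoff = [0.3, 0.7][action]
    return 1.0 if random.random() < payoff else 0.0

def run(steps=5000, epsilon=0.1, seed=0):
    """Agent loop: act, observe reward, update the internal model."""
    random.seed(seed)
    estimates, counts = [0.0, 0.0], [0, 0]
    for _ in range(steps):
        # Exploit the current model most of the time, explore otherwise
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if estimates[0] > estimates[1] else 1
        r = environment(a)
        counts[a] += 1
        estimates[a] += (r - estimates[a]) / counts[a]  # incremental mean
    return estimates

print(run())
```

After enough interactions the agent's value estimates approach the true payoff rates, so it settles on the better arm.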
Contents: The tutorial will introduce the philosophical, statistical, and computational perspectives on inductive inference, and Solomonoff's unifying universal solution. If time permits, the unified view of the intelligent agent framework will also be introduced. Putting everything together, we arrive at an elegant mathematical parameter-free theory of an optimal reinforcement learning agent embedded in an arbitrary unknown environment that possesses essentially all aspects of rational intelligence. We will argue that it represents a conceptual solution to the AI problem, thus reducing it to a pure computational problem.
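For concreteness, the parameter-free agent alluded to here is Hutter's AIXI model (developed in the cited 2005 book); one standard way to write its action selection, sketched here in notation that may differ from the slides, is a single expectimax expression over all programs q for a universal monotone Turing machine U, with horizon m and program length ℓ(q):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[ r_k + \cdots + r_m \bigr]
       \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Informally: the agent chooses the action maximizing expected future reward, where the expectation is taken under the universal (Solomonoff-style) prior that weights each environment program q by 2 to the minus its length.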
Technical content: Despite the grand vision above, most of the tutorial is necessarily devoted to introducing the key ingredients of this theory, which are important subjects in their own right: Occam's razor; Turing machines; Kolmogorov complexity; probability theory; Solomonoff induction; Bayesian sequence prediction; minimum description length principle; intelligent agents; sequential decision theory; adaptive control theory; reinforcement learning; Levin search and extensions.
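Two of these ingredients, Kolmogorov complexity and Occam's razor, can be illustrated empirically. Kolmogorov complexity itself is incomputable, but the compressed length under any real-world compressor is a crude upper bound; the snippet below (an illustration, not from the tutorial) shows that a regular string admits a far shorter description than an incompressible-looking one.

```python
import random
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Compressed length: an upper bound on (a constant plus) K(s)."""
    return len(zlib.compress(s, 9))

# Highly regular: describable by a short program like "repeat 'ab' 500 times"
regular = b"ab" * 500

# Pseudorandom bytes: no obvious structure for the compressor to exploit
random.seed(0)
irregular = bytes(random.randrange(256) for _ in range(1000))

print(complexity_upper_bound(regular), complexity_upper_bound(irregular))
```

Occam's razor, in this reading, prefers the hypothesis with the shorter description, which is exactly what the universal prior 2^{-ℓ(q)} formalizes.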
Literature:
- M. Hutter. On Universal Prediction and Bayesian Confirmation. Theoretical Computer Science, 384:1 (2007) 33-48.
- M. Hutter. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. EATCS Book, Springer, Berlin (2005).