AGI mailing list archive (excerpt on AIXI)


...
I think there *is* a "general problem of intelligence", and it's an unsolvable problem unless one has infinite computational resources.

Suppose we conceive intelligence as "the ability to achieve complex goals in complex environments."

With finite computational resources there are always going to be some complex goals that one can achieve better than others....

Hutter and Schmidhuber's mathematical approach to general intelligence basically verifies this idea, in a more formal & theoretical way...
...


...
I continue to like and follow their work, with some reservations. My main reservation is that certain implicit assumptions are made without qualification, when in fact the appropriate qualifications would change the picture substantially. It's great stuff, just subtly misleading in certain respects, in that it can lead you to ignore things that deserve a more critical look.
...
First of all, to mathematically formalize the AGI problem, one needs to formally define "intelligence."

There are many ways to do this. But, for many purposes, any definition of intelligence that has the general form "Intelligence is the maximization of a certain quantity, by a system interacting with a dynamic environment" can be handled in roughly the same way. It doesn't always matter exactly what the quantity being maximized is (whether it's "complexity of goals achieved", for instance, or something else). My own definition of intelligence as "the ability to achieve complex goals in complex environments" -- which I've also formalized mathematically -- fits in here.

Let's use the term "behavior-based maximization criterion" to characterize the class of definitions of intelligence indicated in the previous paragraph.
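As a purely illustrative sketch (the names and the toy environment below are my own invention, not part of anyone's formalism in this thread), a behavior-based maximization criterion can be rendered as: score a system by the total quantity -- here, cumulative reward -- it racks up while interacting with a dynamic environment.

```python
import random

def run_episode(policy, env_step, horizon=100, seed=0):
    """Score a policy by the total reward it accumulates while
    interacting with a dynamic environment -- one concrete instance
    of a 'behavior-based maximization criterion'."""
    rng = random.Random(seed)
    observation, total_reward = 0, 0.0
    for _ in range(horizon):
        action = policy(observation)
        observation, reward = env_step(observation, action, rng)
        total_reward += reward
    return total_reward

# A toy dynamic environment (hypothetical, for illustration only):
# the observation reveals a target in 0..3; reward 1 when the action
# matches it, 0 otherwise; the target then drifts randomly.
def env_step(obs, action, rng):
    reward = 1.0 if action == obs else 0.0
    next_obs = (obs + rng.choice([0, 1])) % 4
    return next_obs, reward

copy_policy = lambda obs: obs  # "do whatever the observation says"
print(run_episode(copy_policy, env_step))  # -> 100.0
```

Any definition in this class swaps in a different reward signal or a different aggregate, but the agent-interacts-with-environment shape stays the same.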

So, suppose one has some particular behavior-based maximization criterion in mind. Then Marcus Hutter's work on the AIXI system describes a software program that will be able to achieve intelligence according to the given criterion.

Now, there's a catch: this program may require infinite memory and an infinitely fast processor to do what it does. But he also gives a variant of AIXI which avoids this catch, by restricting attention to programs of bounded length L. Loosely speaking, the AIXItl variant will provably be as intelligent as any other computer program of length <= L, satisfying the maximization criterion, within a constant multiplicative factor and a constant additive factor.

Hutter's work draws on a long tradition of research into statistical learning theory and algorithmic information theory, most notably Solomonoff's early work on induction and Levin's work on computational measure theory. At the present time, though, this work is more exciting theoretically than pragmatically. The "constant factor" in his theorem may be very large, so that in practice, AIXItl is not really going to be a good way to create an AGI software program. In essence, what AIXItl does is search the space of all programs of length <= L, evaluate each one, and finally choose the best one and run it. The "constant factors" involved reflect the overhead of trying every other possible program before hitting on the best one!
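To make the "search all programs of length <= L" point concrete, here is a toy sketch (entirely my own construction, with small lookup tables standing in for programs): exhaustively score every candidate policy and keep the best. With genuine binary programs of length <= L the loop would have on the order of 2^L iterations -- that blow-up is where the enormous constant factor comes from.

```python
from itertools import product

def score(policy, horizon=50):
    """Toy scoring: the observation is a target cycling through 0..3;
    reward 1 whenever the action matches it."""
    obs, total = 0, 0
    for _ in range(horizon):
        total += 1 if policy(obs) == obs else 0
        obs = (obs + 1) % 4
    return total

def best_program(n_obs=4, n_act=4):
    """Exhaustive AIXItl-flavoured search: try every lookup-table
    'program', evaluate each one, keep the best. Here that is
    n_act**n_obs = 256 candidates; with binary programs of length
    <= L it would be ~2^L, hence the huge constant overhead."""
    best, best_score = None, -1
    for table in product(range(n_act), repeat=n_obs):
        s = score(lambda o: table[o])
        if s > best_score:
            best, best_score = table, s
    return best, best_score

table, s = best_program()
print(table, s)  # the identity table (0, 1, 2, 3) scores a perfect 50
```

The winning "program" is trivial to find here only because the space is tiny; the provable optimality of AIXItl is bought by paying for this enumeration over an astronomically larger space.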
...


...
I am very aware of these issues. The tractability issue isn't as bad as it seems, though it is implicit in the math. Hutter's analysis strongly implies a really ugly tractability problem, in no small part due to an exponential resource blow-up, but in practice it isn't as bad as it reads. The exponent can be sufficiently small (and much smaller than I think most people believe) that the approach becomes tractable for at least human-level AGI on silicon (my estimate), though it does hit a ramp sooner rather than later.
...
...
Inspired by a recent post, here is my attempt at a list of "serious AGI projects" underway on the planet at this time.

If anyone knows of anything that should be added to this list, please let me know.

...