+-----------------------------------------------------------+
|              Kolmogorov Mailing List Archive              |
|                 09 Sep 1999 - 25 Dec 2001                 |
+-----------------------------------------------------------+
Date: Thu, 09 Sep 1999 ??
From: Olivier Bousquet ??
Subject: definition of Kolmogorov Complexity
In order to encourage you to start the discussion, I would like to submit
a topic, although I am a beginner in the area:
One important thing about the definition of Kolmogorov Complexity is the
fact that it depends on a specific Turing Machine. Although the invariance
theorem shows this dependence is limited to an additive constant, one may
wonder whether there is a way to define an 'objective' complexity
which would not depend on a Universal Turing Machine. Ray Solomonoff
found an interesting way of defining an 'objective' universal probability
distribution:
" The 'Objective' universal distribution is obtained as
follows: for each 3 tape Turing machine with unidirectional I/O
tapes, (either universal and non-universal machines), we can associate an
induced probability distribution. If we take a weighted average of
those distributions (weight being proportional to 2^(-length of
description of the machine's state transition table), we obtain a
universal distribution that doesn't choose any particular machine."
1. Is this an acceptable definition of an 'objective' universal
distribution?
By the way, I would like to stress a point here: it looks like
Kolmogorov's name is most often used when talking about this kind of
complexity measure; however, he may not be considered the unique
originator of the notion, since Chaitin and Solomonoff discovered it
independently. Moreover, there is not only one such notion of
algorithmic complexity, and the ones introduced by Kolmogorov, Chaitin
and Solomonoff were not all the same.
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Date: Fri, 10 Sep 1999 11:49:13 +0200 (MET DST)
From: John.Tromp@cwi.nl
Message-Id:
Subject: definition of Kolmogorov Complexity
Kolmogorov Complexity - http://www.stud.enst.fr/~obousque/kolmogorov.html
>In order to encourage you to start the discussion, I would like to submit
>a topic although I am a beginner in the area:
>One important thing about the definition of Kolmogorov Complexity is the
>fact
>that it depends on a specific Turing Machine.
This is unavoidable IMO.
>Although the invariance
>theorem shows this dependence is limited to an additive constant, one may
>wonder whether there is a way to define an 'objective' complexity
>which would not depend on a Universal Turing Machine. Ray Solomonoff
>found an interesting way of defining an 'objective' universal probability
>distribution :
>" The 'Objective' universal distribution is obtained as
>follows: for each 3 tape Turing machine with unidirectional I/O
>tapes, (either universal and non-universal machines), we can associate an
>induced probability distribution. If we take a weighted average of
>those distributions (weight being proportional to 2^(-length of
>description of the machine's state transition table), we obtain a
>universal distribution that doesn't choose any particular machine."
There is little difference between choosing a particular machine
and choosing a particular distribution. This 'objective' distribution
corresponds to a universal machine which expects a description of
some machine's state transition table and simulates that machine on
the remainder of the input tape.
You still need to make lots of arbitrary choices in how to encode
the transition table...
>1. Is this an acceptable definition of an 'objective' universal
>distribution ?
It may not seem so 'objective' once you spell out the details of
the transition table encoding...
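Tromp's point can be made concrete with a toy sketch. The "machines" below are just made-up output distributions keyed by invented bitstring encodings of their transition tables; this illustrates the weighted average in Solomonoff's construction, not real Turing machines:

```python
from fractions import Fraction

# Toy stand-ins for machines: each "machine" is keyed by an invented
# bitstring encoding of its state transition table, and mapped to the
# output distribution it induces.
machines = {
    "0":   {"00": Fraction(1, 2), "11": Fraction(1, 2)},
    "10":  {"00": Fraction(1, 1)},
    "110": {"11": Fraction(1, 1)},
}

def universal_mixture(machines):
    """Weight each machine by 2^-(description length) and average."""
    weights = {d: Fraction(1, 2 ** len(d)) for d in machines}
    total = sum(weights.values())          # renormalize the weights
    mix = {}
    for d, dist in machines.items():
        for x, p in dist.items():
            mix[x] = mix.get(x, Fraction(0)) + (weights[d] / total) * p
    return mix

mix = universal_mixture(machines)
assert sum(mix.values()) == 1              # a proper distribution
```

All the arbitrary choices Tromp mentions are hidden in how the keys of `machines` are encoded: a different encoding of the transition tables gives different weights, and hence a different "objective" mixture.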
>By the way, I would like to stress a point here : it looks like
>Kolmogorov's
>name is most often used when talking about this kind of complexity
>measure, however he may not be considered as the unique originator of
>the notion since Chaitin and Solomonoff discovered it independently.
>Moreover, there is not only one such notion of algorithmic complexity
>and the ones introduced by Kolmogorov, Chaitin and Solomonoff were not
>all the same.
Uspenski et al. wrote some nice papers unifying the various measures;
they called them KP, KS, KA, KD, and KM.
In practice most people will keep using C and K, sometimes leaving open
the question of whether they mean simple (KS) or prefix (KP) complexity...
regards,
John Tromp (http://www.cwi.nl/~tromp/)
______________________________________________________________________
Date: Sun, 10 Oct 1999 02:15:52 -0400 (EDT)
From: ray j solomonoff
Message-Id: <199910100615.CAA10208@world.std.com>
Subject: Reply to message #1
kolmogorov@listbot.com Oct 9, 1999
Reply to message #1:
I have no disagreement with John Tromp's discussion: I don't
find the idea of a universal a priori probability distribution that
is independent of any specific Turing Machine to be a useful idea
in terms of my own interests -- A.I. and inductive inference.
From a practical point of view, the a priori probability that
one uses in solving an induction problem is a summary of the
probabilistic information that one had before being given the
information in the problem. Ordinarily this distribution will
depend on what problems and data one has encountered in the past.
One might ask the speculative, heuristic (and not necessarily
meaningful) question - what a priori distribution should one use if
one has *no* previous information? - Is there some kind of
"Platonic ideal" a priori distribution? My impression is that we
need not answer this question: that in all real world problems,
one *always* has "previous information". In many cases the
language in which a problem is described will contain much
information about the problem and how it is expected to be solved.
Program length considerations are useful in transforming this
linguistic information into a priori probability distributions.
These a priori distributions are not "objective". The
scientific community will usually not agree on any particular one
as being appropriate to a given problem. I feel that "objectivity"
is not a necessary virtue. Different people will come to the same
problem with different histories of problems and data - they should
use information in their past experience to solve present problems.
The reasons for the non-uniqueness of the "solution" to the
universal a priori distribution that I considered are just as Tromp
says: there is arbitrariness in the algorithm for ordering the
state tables, and one can use ordering schemes that give any
desired a priori distribution. In addition, there are many
formalisms equivalent to universal Turing Machines that are
equivalent to re-ordering the state tables --- e.g., ordinary
computers with the ability to ask for more memory; Post formal systems
of rewrite rules; certain analog dynamical systems that have been shown
to be "universal"; almost all of the computer languages that have
been invented can simulate universal machines (sometimes minor
modifications are needed); any of the adequate systems of formal
logic...
* * *
I am less objective on whether we should label this large set
of program length complexities by the general term "Kolmogorov
Complexity". I have a few objections:
1. As Tromp mentions, it's very ambiguous. Kolmogorov's name
was initially associated with two or three specific definitions of
Complexity. Since 1960, many more kinds of program-length
complexity have been proposed. They are often collectively
referred to as "Kolmogorov Complexity".
2. "Program-length complexity" would be a better way to refer
to these kinds of complexity. The name makes it clear that we are
not talking about any particular version. All versions have
certain common features, and one sometimes wants to refer to them
as a whole.
3. The term Kolmogorov Complexity gives people the idea that
Kolmogorov originated or claimed to originate the idea.
"Kolmogorov-Chaitin-Solomonoff Complexity" is occasionally used,
the with idea that it was independently discovered almost
simultaneously by 3 people. Solomonoff, Kolmogorov, and Chaitin,
first published in 1960, 1965 and 1966 respectively. In the times
of Archimedes and even as late as Newton, a 5 year difference might
have been regarded as "almost simultaneous". At the present time
this is no longer true. Upon learning of my 1960 publication
neither Kolmogorov nor Chaitin has ever claimed priority in this
area.
______________________________________________________________________
Date: Wed, 13 Oct 1999 14:30:08 +1000
From: David L Dowe
Message-Id: <199910130430.OAA02396@dec11.cs.monash.edu.au>
Subject: Kolmogorov complexity special issue in Computer Journal
To: undisclosed-recipients:;
Dear All,
The special issue of the Computer Journal on Kolmogorov complexity,
featuring
Wallace, C.S. and D.L. Dowe (1999). Minimum Message Length and
Kolmogorov Complexity, Computer Journal (special issue on Kolmogorov
complexity), Vol. 42, No. 4, pp. 270-283
and
Wallace, C.S. and D.L. Dowe (1999). Refinements of MDL and MML coding
(reply to J. J. Rissanen Computer Journal article), Computer Journal
(special issue on Kolmogorov complexity), Vol. 42, No. 4, pp. 330-337
and
Wallace, C.S. and D.L. Dowe (1999). Rejoinder (reply to 4 replies to our
Computer Journal article), Computer Journal (special issue on Kolmogorov
complexity), Vol. 42, No. 4, pp. 345-347
and many other articles is on-line at
http://www3.oup.co.uk/computer_journal/hdb/Volume_42/Issue_04/ .
Regards. - David.
______________________________________________________________________
From: "j.p.lewis"
Message-Id: <9911081503.ZM16723@greg.dqimages.com>
Date: Mon, 8 Nov 1999 15:03:36 -0800
X-Mailer: Z-Mail (3.2.3 08feb96 MediaMail)
To: bousquet@cmapx.polytechnique.fr
Subject: online paper announce
Hello, I just came across your excellent Kolmogorov Complexity site.
Here's a link to a forthcoming paper. It's a bit obvious for your audience,
but it addresses a new application, and one that is pretty large
(thousands of consultants who write books on managing the software process...).
---------------
"software crisis" = software rarely meets its promised delivery schedule
and is always buggy
"sofware engineering" = application of engineering principles to the
above problems
An article 'Formally unsolvable problems in software engineering'
that shows that software engineering cannot (ever) fully solve
the software crisis:
http://www.idiom.com/~zilla/Papers/softEstimation28ja.ps.gz
--
signature
______________________________________________________________________
From: "Kolmogorov Complexity"
To: "Kolmogorov Complexity"
Delivered-To: mailing list kolmogorov@listbot.com
Subject: Article in New Scientist
Kolmogorov Complexity - http://www.cmap.polytechnique.fr/~bousquet/kolmogorov.html
>Date: Sat, 13 Nov 1999 17:42:15 +0100 (MET)
>From: Paul.Vitanyi@cwi.nl
>The issue of New Scientist (6 Nov, with "Snowball Earth" on the
>cover) has an article about our work on applications of Kolmogorov
>complexity (the "incompressibility method"). The cover mentions
>this "feature" article as "Taming wild numbers". The article is
>called "On the Roll" (something about dice throws?) and is on
>pp. 44-47 (4 pages).
______________________________________________________________________
From: Francois DENIS
X-Mailer: Mozilla 4.7 [en] (X11; I; FreeBSD 3.3-RELEASE i386)
X-Accept-Language: en
MIME-Version: 1.0
*********************************************************
*********** TAI 2000 ***************
*********************************************************
LILLE, FRANCE, June 8-9, 2000
******************************************************
***** 4th French Workshop on *****
***** Algorithmic Information Theory *****
******************************************************
The 4th French Workshop on Algorithmic Information Theory will be held
in Lille on June 8th and 9th, 2000. The main theme of this workshop is
"Algorithmic Information Theory and Inductive Inference". Invited
speakers are professors Paul Vitanyi and Peter Gacs. Information about
this workshop can be found at:
http://www.grappa.univ-lille3.fr/tai2000
--
François DENIS
L.I.F.L. - GRAPPA
http://www.grappa.univ-lille3.fr/~denis
Address:
1. In Lille:
UFR de Mathématiques, Sciences Economiques et Sociales,
Université de Lille 3
59 653 Villeneuve d'Ascq Cedex
tel 03 20 41 61 75
fax 03 20 41 61 71
2. In Montpellier:
LIRMM
161, rue Ada,
34392 Montpellier cedex 5
tel 04 67 41 86 57
fax 04 67 41 85 00
______________________________________________________________________
Date: Wed, 23 Aug 2000 19:47:58 -0700 (PDT)
From: noisebrain
To: ashen@mccme.ru, bousquet@cmapx.polytechnique.fr,
cristian@cs.auckland.ac.nz, sophie.laplante@lri.fr,
paul.vitanyi@cwi.nl
Subject: statistical bounds on complexity
Message-ID:
Hello,
can anyone comment on the following:
You know that the halting set, the complexity K, etc. are not
computable. But suppose someone claimed that they could
compute these *statistically* and do better than chance,
i.e. they say they have a program E that, for a string s, prints
"s lies within complexity x...x+b" (for some bound b)
and is correct in this statement for more than (e.g.) 80 percent
of the strings you feed it.
We know that the halting set cannot be statistically estimated
in this way because the halting probability is algorithmically random.
Is there a similar statement in regards to a supposed statistical
estimator of algorithmic complexity?
(And if not, please give your comment on the following arguments:
1) because a formal system of a particular size cannot prove
a lower bound on strings much larger than its size,
a supposed statistical estimator will fail on at least the lower
bound part for sufficiently large strings.
2) Consider the statistical estimator
E: P(K(s) \in k..k+b) > 80%
as identifying the set B of strings of complexity k..k+b,
then consider this as part one of a two-part description (the set,
then identify the particular string within the set).
(?)
K(a|b) <= K(a|x) + K(x|b) + O(1)
so
K(K(s)|s) <= K(K(s)|B) + K(B|s)
The part K(K(s)|B) should be O(1) for a fixed size bound b,
more specifically it should be approximately log(b).
Then, since
K(K(s)|s) != O(1)
(complexity of complexity is not recursive, Li & Vitanyi 2ed p.226)
but K(K(s)|B) is O(1)
it must be that K(B|s) is also not recursively computable,
i.e., in the limit of large strings the supposed statistical
estimator cannot exist?
______________________________________________________________________
Date: Thu, 24 Aug 2000 19:34:38 +0400 (MSD)
In-Reply-To: from "Olivier Bousquet" at Aug 24, 2000 04:25:38 PM
From: shen@mccme.ru
Return-Receipt-To: shen@shen.mccme.ru
> To: "Kolmogorov Complexity"
> From: Paul.Vitanyi@cwi.nl
>
> Your assumption about the statistical estimation of the halting set
> isn't correct. It is known that one can compress the initial n-bit
> segment of the characteristic sequence to a log n bit string (Barzdin's
> lemma). So, flipping a fair coin you find the right compressed description
> with probability > 1/n and hence the first n bits of the characteristic
> sequence. So, statistically, one has more than a chance of 1/1000
> of getting *all* halting answers correctly for the first 1000 programs.
>
> This is much better than by chance, where the probability would be
> only 2^{-1000}. I would think that estimating the complexity is
> even easier. If you want to do that statistically, you assume
> for example, that you have a uniformly randomly prepared string
> of length $n$. Then, with probability 1-2^{-c} the complexity will be within
> c bits of n.
>
> Cheers, Paul
this raises the following (maybe easy) question: for plain
complexity, $n$ is a computable estimate of complexity that is
correct (up to O(1)) for 99% of strings of length $n$; can one
do the same for prefix complexity (i.e., find a computable
estimate that is O(1)-correct for 99% of all strings of any
given length $n$)?
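Paul's counting remark can be checked with elementary arithmetic: there are only 2^(n-c) - 1 programs shorter than n-c bits, so whatever the machine, at most that many n-bit strings can have complexity below n-c. A minimal sketch (pure counting, no actual machine involved):

```python
def max_compressible_fraction(n, c):
    """Upper bound on the fraction of n-bit strings with complexity < n - c.

    There are 2^0 + 2^1 + ... + 2^(n-c-1) = 2^(n-c) - 1 candidate
    programs shorter than n - c bits, versus 2^n strings of length n,
    so the fraction is below 2^-c regardless of the machine.
    """
    return (2 ** (n - c) - 1) / 2 ** n

# For any c, fewer than a 2^-c fraction of 32-bit strings can be
# compressed by c or more bits.
for c in (1, 4, 10):
    assert max_compressible_fraction(32, c) < 2 ** -c
```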
______________________________________________________________________
Date: Thu, 24 Aug 2000 17:59:17 +0200 (MET DST)
From: Paul.Vitanyi@cwi.nl
Message-Id:
To: bousquet@cmapx.polytechnique.fr, kolmogorov@listbot.com
Subject: Re: statistical bounds on complexity (From S.Shen)
>From: shen@mccme.ru
>Subject: Re: statistical bounds on complexity (fwd)
>
>> To: "Kolmogorov Complexity"
>> From: Paul.Vitanyi@cwi.nl
>>
>> Your assumption about the statistical estimation of the halting set
>> isn't correct. It is known that one can compress the initial n-bit
>> segment of the characteristic sequence to a log n bit string (Barzdin's
>> lemma). So, flipping a fair coin you find the right compressed description
>> with probability > 1/n and hence the first n bits of the characteristic
>> sequence. So, statistically, one has more than a chance of 1/1000
>> of getting *all* halting answers correctly for the first 1000 programs.
>>
>> This is much better than by chance, where the probability would be
>> only 2^{-1000}. I would think that estimating the complexity is
>> even easier. If you want to do that statistically, you assume
>> for example, that you have a uniformly randomly prepared string
>> of length $n$. Then, with probability 1-2^{-c} the complexity will be within
>> c bits of n.
>>
>> Cheers, Paul
>
>this raises the following (may be easy) question: for plain
>complexity $n$ is a computable estimate of complexity that is
>correct (up to O(1)) for 99% of strings of length $n$; can one
>do the same for prefix complexity (i.e., find a computable
>estimate that is O(1)-correct for 99% of all strings of any
>given length $n$)?
That seems a more difficult question. However,
(a) there is a recursive function that coincides with the maximal
prefix complexity of n-length strings for infinitely many n.
(b) for every n, a positive fraction of all strings of length
n has near-maximal prefix complexity, and 99% of the strings
are within 10 bits of the maximum.
Consequently, there is a computable
estimate that is O(1)-correct for 99% of all strings of any
given length $n$ for the prefix complexity.
Cheers, Paul
______________________________________________________________________
Date: Thu, 24 Aug 2000 20:08:15 -0700 (PDT)
From: noisebrain
To: Paul.Vitanyi@cwi.nl
cc: shen@mccme.ru, bousquet@cmapx.polytechnique.fr, cristian@cs.auckland.ac.nz,
sophie.laplante@lri.fr
Subject: Re: statistical bounds on complexity
In-Reply-To:
Message-ID:
thanks!
> only 2^{-1000}. I would think that estimating the complexity is
> even easier. If you want to do that statistically, you assume
> for example, that you have a uniformly randomly prepared string
This is easy if the strings are required to be randomly chosen,
but what about if someone claims to have a procedure for statistically
estimating the complexity of *any* strings that are given to the
procedure (i.e., non-random strings from an unknown source)?
This is a supposed function k(s) that prints K(s) correct within
a bound b: k(s) <= K(s) <= k(s)+b.
Are there problems with the following argument?:
(I'm new to this field, so I'm sure there are)
Consider k as part one of a two-part description: k defines
the set of strings with complexity within bound b of that indicated by k.
Then look at the complexity of identifying the complexity of the
particular string within that set:
general inequality:
K(a|b) <= K(a|X) + K(X|b) + O(1)
(below we need this to be true if X is a set rather than a string)
Substitute 'complexity of complexity' for a:
K(K(s)|s) <= K(K(s)|B) + K(B|s)
s - the string to bound, B - the set of strings having complexity
within this bound.
The part K(K(s)|B) should be O(1) for a fixed size bound b --
it should be approximately log(b).
Then, since
K(K(s)|s) != O(1)
(complexity of complexity is not recursive, Li & Vitanyi 2ed p.226)
but K(K(s)|B) is O(1)
it must be that K(B|s) is also not recursively computable.
______________________________________________________________________
From: Olivier Bousquet
X-Sender: bousquet@georgev.polytechnique.fr
cc: anceau@cnam.fr, pinson.g@wanadoo.fr
Subject: Classification of Entropies
Message-ID:
Dear all,
Recently, Gérard Pinson created a classification of the existing notions
of entropy: from Shannon entropy to quantum Kolmogorov complexity, he
designed a table listing their various properties.
I believe this work is of high interest for people interested in
Kolmogorov Complexity and its variants. However, this classification has
to be refined, extended and discussed, and this mailing list looks like a
good place for doing so.
I thus suggest you review the classification at the following URL:
http://www.cmap.polytechnique.fr/~bousquet/Kolmogorov/entropies.html
and use the mailing list to discuss it.
Thank you for your interest.
Best Regards,
Olivier Bousquet.
______________________________________________________________________
From: "Bill Halchin"
Cc: bhalchin@hotmail.com
Subject: Prefix-free computable functions
Date: Fri, 05 Jan 2001 21:59:50
Message-ID:
X-OriginalArrivalTime: 05 Jan 2001 21:59:50.0187 (UTC) FILETIME=[D34C57B0:01C07762]
Sender: kolmogorov-return-18-16817135@listbot.com
Hello,
Is it true that the set/class of "prefix-free computable
functions" equals the set/class of computable functions? If not,
then how can we talk about having the universality property,
or in other words, being Turing-complete? In any paper/book, mention is
always made of U, a universal computer, that takes programs represented
as prefix-free strings. If the above-mentioned two sets are not the
same, then U cannot be universal! Please help me with this.
Regards,
Bill Halchin
______________________________________________________________________
Date: Sat, 6 Jan 2001 12:20:44 -0700 (MST)
From: Vladik Kreinovich
Reply-To: Vladik Kreinovich
Subject: Re: Prefix-free computable functions
Cc: bhalchin@hotmail.com
> Is it true that the set/class of "prefix-free computable
> functions" equals the set/class of computable functions?
Yes, it is true.
Actually, probably the easiest way to view it is to consider any standard
programming language like Pascal or C. The set of all programs in this language
is universal in the sense that any algorithm can be represented by a program in
Pascal.
On the other hand, the set of all such programs is clearly prefix-free, in the
sense that one program cannot be the beginning ("prefix") of another. For
example, in many languages, the program ends with the word "end" or with a period,
so there is no way that a program can contain another program as its beginning.
This is actually where this abstract notion of prefix-free strings came from: as
a formalization of the programs in a programming language (since the Kolmogorov
complexity of a string means the length of the shortest program which produces
this string).
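The observation above is easy to test mechanically. A minimal sketch of a prefix-freeness check, in the spirit of the Pascal example (the sample "programs" are invented for illustration):

```python
def is_prefix_free(strings):
    """True iff no string in the collection is a proper prefix of another.

    In a lexicographically sorted list, any prefix relation shows up
    between adjacent entries, so checking neighbours suffices.
    """
    ss = sorted(set(strings))
    return not any(b.startswith(a) for a, b in zip(ss, ss[1:]))

# Made-up Pascal-like programs: each ends with the terminator "end.",
# so none of these complete programs is an initial segment of another.
programs = ["begin x:=1 end.", "begin x:=1; y:=2 end.", "begin end."]
assert is_prefix_free(programs)
assert not is_prefix_free(["10", "101"])   # "10" is a prefix of "101"
```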
> If not,
> then how can we talk about having the universality property
> or in other words, Turing-complete. In any paper/book, mention is
> always made of U, a unversal computer, that takes programs represented
> as prefix-free strings. If the above-mentioned two sets are not the
> same then U cannot be universal! Please help me on this.
>
>
> Regards,
>
> Bill Halchin
______________________________________________________________________
Date: Sun, 7 Jan 2001 15:24:50 +0100 (MET)
From: Paul.Vitanyi@cwi.nl
Message-Id:
Subject: Re: Prefix-free computable functions
>Hello,
>
>
> Is it true that the set/class of "prefix-free computable
>functions" equals the set/class of computable functions? If not,
Yes, that is true. Fairly obviously. See for example the proof in
the Li-Vitanyi textbook.
>then how can we talk about having the universality property
>or in other words, Turing-complete. In any paper/book, mention is
>always made of U, a unversal computer, that takes programs represented
>as prefix-free strings. If the above-mentioned two sets are not the
>same then U cannot be universal! Please help me on this.
>
Don't worry. They are the same -- see textbooks.
Paul Vitanyi
______________________________________________________________________
Date: Sat, 13 Jan 2001 05:21:19 -0800 (PST)
From: Flex Flex
Subject: Re: Why Kolmogorov complexity is not always computable?
Cc: kolmogorov@listbot.com
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
If I have a string X of length M, I know that the programs that
output X can be shorter than X, or can be longer than X when outputting
X requires all the information in X.
Now I try all programs shorter than X and take the shortest program
that produces X.
The problem is: how can I determine that a program p shorter than X
is a non-halting program, so that I do not execute it?
If I have a SPACE LIMIT, I can determine whether a program halts or not!
I know that the program that produces X must be shorter than X,
because otherwise I already know the program: it is X itself.
So the space limit is M.
Now, if I have a space limit, I can say that a program P of length M
does not halt if it passes more than once through the same state, where
the state is composed of the number of the line being executed and the
values of all memory used by the program.
I can say that this is the only case where a program does not halt,
because if I suppose that a program does not halt and does not pass
twice through the same state, this program must use infinite memory,
but this is outside the hypothesis of a space limit M!
Moreover, it is simple to calculate an upper bound on the space and
time needed to compute the Kolmogorov complexity of a string X.
I computed this space-time upper bound; it is very big, but it seems
computable.
Well, where is the problem?
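The state-repetition test described above is genuinely sound for a machine whose workspace is bounded. Here is a minimal sketch with an invented toy instruction set; it illustrates the test itself, while the replies in this thread point out why it does not apply to real Kolmogorov complexity (a bound on the *length* of a program is not a bound on the *space* it uses):

```python
def runs_forever(program, cells):
    """Halting test for a toy machine with a fixed number of 1-bit cells.

    With bounded space there are finitely many configurations
    (program counter, memory), so a run that revisits a configuration
    is looping forever; otherwise it must halt. Invented instruction
    set: ("flip", i) flips cell i; ("jnz", i, t) jumps to t if cell i
    is nonzero; ("halt",) stops.
    """
    pc, mem = 0, [0] * cells
    seen = set()
    while 0 <= pc < len(program):
        state = (pc, tuple(mem))
        if state in seen:
            return True            # repeated configuration: infinite loop
        seen.add(state)
        op = program[pc]
        if op[0] == "halt":
            return False
        if op[0] == "flip":
            mem[op[1]] ^= 1
            pc += 1
        elif op[0] == "jnz":
            pc = op[2] if mem[op[1]] else pc + 1
    return False                   # ran off the end: halted

assert not runs_forever([("flip", 0), ("halt",)], 1)   # halts
assert runs_forever([("flip", 0), ("jnz", 0, 1)], 1)   # loops forever
```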
--- Thomas English wrote:
> Antiga Denis wrote:
>
> > In the book "An Introduction to Kolmogorov Complexity
> > and Its Applications" by Ming Li and Paul Vitányi
> > there is the proof of the incomputability of Kolmogorov
> > Complexity.
> > I do not understand this proof and I think that
> > Kolmogorov Complexity is always computable.
> >
> > Can anyone explain to me why Kolmogorov Complexity is
> > incomputable?
>
> In entirely intuitive terms, suppose that you have a
> length m binary string x, and you know that x is output
> by a halting program p of length n. The crucial question
> is how you determine that there is no halting program p'
> of length k < n that outputs x. There generally exist
> shorter programs that do not halt, but there is no way
> for you to tell which programs run for a long time before
> halting and which run without halting. Thus you have no
> way to say, in the general case, that any program is the
> shortest that generates a given string, and Kolmogorov
> complexity is uncomputable.
>
> Hope that helps.
>
> Tom English
>
______________________________________________________________________
Date: Mon, 15 Jan 2001 14:53:37 +0100 (MET)
From: Paul.Vitanyi@cwi.nl
Message-Id:
Subject: Re: Why Kolmogorov complexity is not always computable?
Sender: kolmogorov-return-23-16817135@listbot.com
>
>In the book "An Introduction to Kolmogorov Complexity
>and Its Applications" by Ming Li and Paul Vitányi
>there is the proof of the incomputability of Kolmogorov
>Complexity.
>I do not understand this proof and I think that
>Kolmogorov Complexity is always computable.
>
>Can anyone explain to me why Kolmogorov Complexity is
>incomputable?
>
>Thanks.
>
>Antiga Denis.
>
Dear Antiga,
Let's give a simple complete proof.
a) There are 2^n binary strings of length n and only \sum_{i=0}^{n-1} 2^i =
2^n - 1 binary programs of length < n. Hence there is one x (at least)
that is not computed by a program of length < n. This x has complexity
C(x) >= n.
b) Suppose C(.) were computable. Now, for every n there is a lexicographically
first x, say x_n, of complexity C(x_n) >= n (by a)).
But by the computability of C(.) we can compute the complexities of
all binary strings of length n, and hence find the lexicographically first x
among them that has complexity C(x) >= n. By definition this is x_n.
Since the only information we required to compute x_n was the number n
(can be given in log n bits) plus a fixed standard program,
we obtain C(x_n) <= log n + O(1). This contradicts C(x_n) >= n for
all n apart from a finite initial set of values of n. Hence C(x) is
not computable.
NOTE: C(x) can of course be approximated from above by a computable
function as is explained in Li-Vitanyi.
Cheers, Paul
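The counting argument in a) and b) can be played out on a toy description method. The decoder below is invented and always halts, which is exactly what makes the search feasible here; for the real C(x) the search over programs is not computable, because programs may fail to halt:

```python
from itertools import product

def bitstrings(max_len):
    """All bit-strings of length 0..max_len."""
    for L in range(max_len + 1):
        for bits in product("01", repeat=L):
            yield "".join(bits)

# Invented toy decoder: '1'+s decodes to s doubled, '0'+s to s, '' to ''.
def decode(p):
    if p == "":
        return ""
    return p[1:] * 2 if p[0] == "1" else p[1:]

n = 8
# Programs shorter than n bits can produce at most 2^n - 1 distinct
# outputs, so some n-bit string x_n has toy complexity >= n; since this
# decoder is total, we can actually find the lexicographically first one.
produced = {decode(p) for p in bitstrings(n - 1)}
x_n = next(x for x in ("".join(b) for b in product("01", repeat=n))
           if x not in produced)
```

Under this particular decoder the only 8-bit outputs are doubled 4-bit strings, so x_n comes out as "00000001". Step b) of the proof is that for the real C such a search would let a short program print a string of provably high complexity, a contradiction.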
______________________________________________________________________
Date: Fri, 19 Jan 2001 08:44:07 -0700 (MST)
From: Vladik Kreinovich
Reply-To: Vladik Kreinovich
Subject: complexity nearly made it to 24th Hilbert Problem among ones presented in 1900
Cc: longpre@cs.utep.edu
X-Mailer: dtmail 1.3.0 @(#)CDE Version 1.4 SunOS 5.8 sun4u sparc
This may be of interest to Kolmogorov complexity community.
A letter published in February 2001 Notices of the American Mathematical
Society, p. 167, mentions that "Professor Ruediger Thiele (Halle University)
has found in a notebook ... a passage in which Hilbert recalled including a
24th problem for his Paris address. It was concerned with finding criteria for
finding simplest proofs of theorems in mathematics".
______________________________________________________________________
Date: Sat, 20 Jan 2001 07:18:47 -0800 (PST)
From: Flex Flex
Subject: Re: Why Kolmogorov complexity is not always computable?
Cc: Kolmogorov@listbot.com
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Sender: kolmogorov-return-25-16817135@listbot.com
Ok.
But it is possible to convert every program that uses
space during its computation into a longer program.
For example, if I have a program with source code of
length M and data used by the program of length D, I
can convert it into another program of length about
M+D.
Now you say that the program can use unlimited data,
but that means the program itself is unlimited, so it
can be longer than X.
Best regards,
Denis.
--- shen@mccme.ru wrote:
> `Space limit' you mention is for the _length of
> program_, but
> not for the space used during the computation.
>
> > If I have a string X of length M, I know that the
> > programs that output X can be shorter than X, or can
> > be about as long as X when outputting X requires all
> > the information in X.
> > Now I try all programs shorter than X and take the
> > shortest program that produces X.
> > The problem is: how can I determine that a program p
> > shorter than X is not a halting program, so as not to
> > execute it?
> > I have a SPACE LIMIT, so I can determine whether a
> > program halts or not!!
> > I know that the program that produces X must be
> > shorter than X, because otherwise I already know the
> > program: it is X itself.
> > So the space limit is M.
> > Now, if I have a space limit, I can say that a program
> > P of length M does not halt if it passes more than
> > once through the same state, where the state is
> > composed of the number of the line being executed and
> > the values of all memory used by the program.
> > I can say that this is the only case in which a
> > program does not halt, because if I suppose that a
> > program does not halt and never passes twice through
> > the same state, that program must use infinite memory,
> > which contradicts the hypothesis of a space limit M!!!
> > Moreover, it is simple to calculate an upper bound on
> > what it takes to compute the Kolmogorov complexity of
> > a string X.
> > I computed a space-time upper bound; what it requires
> > is very big, but it seems computable.
> >
> > Well, where is the problem?
> >
> >
> > --- Thomas English wrote:
> > > Antiga Denis wrote:
> > >
> > > > In the book "An Introduction to Kolmogorov
> > > > Complexity and Its Applications" by Ming Li and
> > > > Paul Vitanyi there is the proof of the
> > > > incomputability of Kolmogorov Complexity.
> > > > I do not understand this proof, and I think that
> > > > Kolmogorov Complexity is always computable.
> > > >
> > > > Can anyone explain to me why Kolmogorov Complexity
> > > > is incomputable?
> > >
> > > In entirely intuitive terms, suppose that you have a
> > > length-m binary string x, and you know that x is
> > > output by a halting program p of length n. The
> > > crucial question is how you determine that there is
> > > no halting program p' of length k < n that outputs x.
> > > There generally exist shorter programs that do not
> > > halt, but there is no way for you to tell which
> > > programs run for a long time before halting and which
> > > run without halting.
> > > Thus you have no way to say, in the general case,
> > > that any program is the shortest that generates a
> > > given string, and Kolmogorov complexity is
> > > uncomputable.
> > >
> > > Hope that helps.
> > >
> > > Tom English
> > >
> >
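The observation driving the quoted argument - a deterministic machine confined to a finite amount of memory either halts or revisits a configuration, and revisiting a configuration means looping forever - can be sketched as follows. The `step` interface and function name are hypothetical toys, not any particular machine model:

```python
def decides_halting_with_bounded_space(step, initial_state):
    """Decide halting for a deterministic machine with a finite
    state space (bounded memory). `step` maps a state to its
    unique successor state, or to None when the machine halts.
    Because the state space is finite, the run either halts or
    repeats a state; a repeated state implies an infinite loop."""
    seen = set()
    state = initial_state
    while state is not None:
        if state in seen:
            return False  # configuration repeated: infinite loop
        seen.add(state)
        state = step(state)
    return True  # step returned None: the machine halted
```

This is also where the argument in the thread breaks down for C(x): a candidate program shorter than X is not, in general, confined to space M during its run, so the set of configurations to watch for repetition is infinite. Space-bounded (resource-bounded) complexity variants, by contrast, are computable for exactly the reason sketched here.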
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Date: Tue, 6 Mar 2001 09:02:53 +0100 (CET)
From: Olivier Bousquet
X-Sender: bousquet@georgev.polytechnique.fr
Subject: optimal search algorithm (fwd)
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
---------- Forwarded message ----------
Date: Mon, 5 Mar 2001 18:24:11 +0100
From: juergen@idsia.ch
To: connectionists@cs.cmu.edu
Subject: optimal search algorithm
Dear connectionists,
Recently Marcus Hutter has developed a very general, asymptotically
optimal search method that should be of interest to researchers in the
areas of reinforcement learning & neural networks & AI. The method is
not limited to machine learning problems though - I believe it will find
its way into general computer science textbooks. With the benefit of
hindsight I find it amazing that it has remained undiscovered up until
the turn of the century.
-------------------------------------------------------------------------
The fastest and shortest algorithm for all well-defined problems
Marcus Hutter, IDSIA
An algorithm M is described that solves any well-defined problem p as
quickly as the fastest algorithm computing a solution to p, save for
a factor of 5 and low-order additive terms. M optimally distributes
resources between the execution of provably correct p-solving programs
and an enumeration of all proofs, including relevant proofs of program
correctness and of time bounds on program runtimes. M avoids Blum's
speed-up theorem by ignoring programs without correctness proof. M has
broader applicability and can be faster than Levin's universal search, the
fastest method for inverting functions save for a large multiplicative
constant. An extension of Kolmogorov complexity and two novel natural
measures of function complexity are used to show that the most efficient
program computing some function f is also among the shortest programs
provably computing f.
ftp://ftp.idsia.ch/pub/techrep/IDSIA-16-00.ps.gz
-------------------------------------------------------------------------
Juergen Schmidhuber www.idsia.ch
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Date: Sun, 01 Apr 2001 15:24:42 -0400
Subject: In MDL proof, why is K( Pr( * | H )) >= K( P( H ) )
From: Gary Warren King
Message-ID:
Mime-version: 1.0
Content-type: text/plain; charset="US-ASCII"
Content-transfer-encoding: 7bit
Sender: kolmogorov-return-27-16817135@listbot.com
In the proof of MDL from Bayes' rule, the claim is made that K( Pr( * | H ))
>= K( P( H ) ) (e.g., page 356 of Li and Vitanyi). Now I understand that Pr( * |
H) can be derived from P( H ). But
K( Pr( * | H ) ) = K( n ) and
K( P( H )) = K( m )
where n and m are the indexes of Pr( * | H ) and P( H ), respectively, in
the enumeration of enumerable semi-measures, and I see no reason that K( n ) is
necessarily >= K( m ). Even if it were the case for some enumeration, it
seems that a different enumeration could still give different results.
My thanks for any light shed on this subject.
Regards,
--
Gary Warren King
EKSL - University of Massachusetts, Amherst
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Message-ID: <02c801c0bc28$12890fe0$2bbfb0c3@idsia.ch>
From: "Marcus Hutter"
Cc: "Gary Warren King"
References:
Subject: Re: In MDL proof, why is K( Pr( * | H )) >= K( P( H ) )
Date: Tue, 3 Apr 2001 12:23:05 +0200
Hello Gary,
In the book of Li&Vitanyi page 356 eq. (5.20) says
-K(Pr(*|H)) >= -K(H) + O(1)
Do you mean this inequality?
Then you missed the minus sign and the r.h.s. is K(H) not K(P(H)).
So K(Pr(*|H)) <= K(H) + O(1), which can be seen in the following way:
If you have a coding of H of length K(H) and a coding of Pr(*|*) of
length K(Pr(*|*)), you can easily construct a coding of Pr(*|H) of
length K(H) + K(Pr(*|*)) + O(1).
Since K(Pr(*|H)) is the length of the shortest coding of Pr(*|H), and by
assumption K(Pr(*|*)) = O(1), the inequality K(Pr(*|H)) <= K(H) + O(1)
follows.
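Spelled out as a single chain (a sketch of the coding argument just described, with the same symbols):

```latex
K(\Pr(\cdot \mid H)) \;\le\; K(H) + K(\Pr(\cdot \mid \cdot)) + O(1)
                     \;=\; K(H) + O(1),
```

since concatenating a shortest code for $H$ with a fixed, O(1)-length code for $\Pr(\cdot\mid\cdot)$ yields a code for $\Pr(\cdot\mid H)$.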
Regards,
Marcus
---
Dr. Marcus Hutter, IDSIA
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale
Galleria 2 CH-6928 Manno(Lugano) - Switzerland
Phone: +41-91-6108667 Fax: +41-91-6108661
E-mail marcus@idsia.ch http://www.idsia.ch/~marcus
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Date: Tue, 03 Apr 2001 08:51:27 -0400
Subject: Re: In MDL proof, why is K( Pr( * | H )) >= K( P( H ) )
From: Gary Warren King
Message-ID:
In-Reply-To: <02c801c0bc28$12890fe0$2bbfb0c3@idsia.ch>
Mime-version: 1.0
Content-type: text/plain; charset="US-ASCII"
Content-transfer-encoding: 7bit
Sender: kolmogorov-return-29-16817135@listbot.com
Thanks for your help.
> In the book of Li&Vitanyi page 356 eq. (5.20) says
> -K(Pr(*|H)) >= -K(H) + O(1)
> Do you mean this inequality?
I did mean this inequality, though I left off the minus signs through a lack
of memory and, hard to believe but true, I had not noticed until you pointed
it out that we were dealing with K(H) rather than K(P(H)). Sigh.
> ... by assumption K(Pr(*|*))=O(1)
I'm probably being dense, but why can we assume that K(Pr(*|*))=O(1)?
Rather, do you mean that Pr(*|*) is fixed based on the particular problem we
are dealing with and therefore is a constant with respect to D and H?
Thanks,
--
Gary Warren King
EKSL - University of Massachusetts, Amherst
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Message-ID: <034101c0bc4d$049c7500$2bbfb0c3@idsia.ch>
From: "Marcus Hutter"
Cc:
References:
Subject: Re: In MDL proof, why is K( Pr( * | H )) >= K( P( H ) )
Date: Tue, 3 Apr 2001 16:47:34 +0200
> I'm probably being dense, but why can we assume that K(Pr(*|*))=O(1)?
> Rather, do you mean that Pr(*|*) is fixed based on the particular problem
we
> are dealing with and therefore is a constant with respect to D and H?
Exactly! Personally I would leave K(Pr(*|*)) and write O(1) only if it
really depends on nothing, because the use of O() is sometimes very
overloaded.
Maybe you will find something interesting in my related works on
universal induction, AI and fast search algorithms.
http://www.hutter1.de/ai
"The Fastest and Shortest Algorithm for All Well-Defined Problems"
"General Loss Bounds for Universal Sequence Prediction"
"A Theory of Universal Artificial Intelligence based on Algorithmic
Complexity"
Marcus
---
Dr. Marcus Hutter, IDSIA
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale
Galleria 2 CH-6928 Manno(Lugano) - Switzerland
Phone: +41-91-6108667 Fax: +41-91-6108661
E-mail marcus@idsia.ch http://www.idsia.ch/~marcus
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: "P. Vitanyi"
Received: (from paulv@localhost)
by kuip.cwi.nl (8.11.0/8.9.3/FLW-3.2C) id f33Ffg806778;
Tue, 3 Apr 2001 17:41:42 +0200
Date: Tue, 3 Apr 2001 17:41:42 +0200
Message-Id: <200104031541.f33Ffg806778@kuip.cwi.nl>
Subject: Re: In MDL proof, why is K( Pr( * | H )) >= K( P( H ) )
Cc: Paul.Vitanyi@cwi.nl
Sender: kolmogorov-return-31-16817135@listbot.com
>In the proof of MDL from Bayes' rule, the claim is made that K( Pr( * | H ))
>>= K( P( H ) ) (e.g., page 356 of Li and Vitanyi). Now I understand that Pr( * |
>H) can be derived from P( H ). But
Both p. 356 and p. 357. There it is stated that
(*) K(Pr(*|H)) \leq K(H)+O(1).
Not K(P(H)), as you write. Since Pr(*|H) and P(H) can be chosen
to have any relation to each other, what you write would
not be true. Also, >>= is not used on those pages.
The inequality (*) follows since knowledge of $H$ suffices to generate
the probability $Pr(*|H)$ (by definition).
Best, Paul
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Date: Wed, 4 Apr 2001 09:42:29 -0600 (MDT)
From: Vladik Kreinovich
Reply-To: Vladik Kreinovich
Subject: FOM: Hilbert's 24th problem
FYI. It may be of interest because of the natural interpretation of simple
proofs as, e.g., shortest. Vladik
------------- Begin Forwarded Message -------------
From: teun@cs.vu.nl
Date: Wed, 04 Apr 2001 16:11:20 +0200
X-Accept-Language: nl, en
MIME-Version: 1.0
To: "fom@math.psu.edu"
CC: teun@cs.vu.nl, Ruediger Thiele
Subject: FOM: Hilbert's 24th problem
Content-Transfer-Encoding: 8bit
FOM subscribers may be interested in knowing that the German historian
Ruediger Thiele (Leipzig) has found an interesting text in Hilbert's
handwritten notes in the Library in Goettingen. The handwriting, which
is here and there difficult to read, clearly shows that Hilbert
considered the possibility of adding a 24th problem to his famous list of
23 problems. The problem is a FOM problem concerning the simplicity of
proofs.
This is my somewhat free translation of the first part of the text (the
complete German original follows at the end of this posting):
"As 24th problem in my lecture in Paris I wanted to pose the question:
criteria for the simplicity or give a proof of the greatest simplicity of
certain proofs. In general develop a theory of methods of proof in
mathematics. After all under given conditions there can be only one
simplest proof. In general when one has two proofs for one theorem, one
should not rest before one has reduced the proofs to each other or
understood exactly which different suppositions (and aids) are used in
the proofs: If one has two roads, one should not only go these roads or
look for new ones, but instead investigate the whole area between the
two roads."
In the last part of the text Hilbert refers to his work in invariant
theory where he allegedly made some first steps on the way to determine
the simplicity of a proof. It is not clear - at least not to me - what
he is precisely referring to.
In the last sentence of the text there is a reference to geometry and
the possibility to determine the simplicity of a proof by counting the
number of steps in a proof.
Clearly there are many interesting questions to be asked.
1. Why did Hilbert not include this 24th problem?
2. What does this finding add to our knowledge of Hilbert's development
as for the foundations of mathematics?
3. To what extent is the problem solved?
4. What does the vague reference to his work in invariant theory (his
work on 'syzygies') precisely mean?
Etc.
The text of a lecture by Ruediger Thiele on this matter will be
published in Michael Kinyon (Ed.), History of Mathematics at the Dawn of
a New Millenium, Proceedings of a Conference of the Canadian Society for
History and Philosophy of Mathematics, McMaster University, Hamilton,
Ont. 2000.
I don't know when this book will appear. Moreover, I feel the existence
of a 24th problem deserves to be known more widely than it is now.
Thiele gave me permission for this posting. "I am tired now, so I agree
with everything", he mailed me. I won't ask twice.
Teun Koetsier
The German original:
"Als 24stes Problem in meinem Pariser Vortrag wollte ich die Frage
stellen: Kriterien für die Einfachheit bez. Beweis der groessten
Einfachheit von gewissen Beweisen fuehren. Ueberhaupt eine Theorie der
Beweismethoden in der Mathematik entwickeln. Es kann doch bei gegebenen
Voraussetzungen nur einen einfachsten Beweis geben. Ueberhaupt wenn man
für einen Satz 2 Beweise hat, so muss man nicht eher ruhen, als man die
beide aufeinander zurueckgeführt oder genau erkannt hat welche
verschiedenen Voraussetzungen (und Huelfsmittel) bei den Beweisen
benutzt werden: Wenn man 2 Wege hat, so muss man nicht bloss diese Wege
gehen oder neue suchen, sondern dann das ganze zwischen den beiden Wegen
liegende Gebiet erforschen. Ansaetze, die Einfachheit der Beweise zu
beurteilen, bieten meine Untersuchungen ueber Syzygien und Syzysien
zwischen Syzygien. Die Benutzung oder Kentniss einer Syzygie vereinfacht
den Beweis, dass eine gewisse Identitaet richtig ist, erheblich. Da
jeder Process des Addierens Anwendung des cummutativen Gesetzes der
Addition ist. - [Da] dies immer geometrischen Saetzen oder logischen
Schluessen entspricht, so kann man diese zaehlen und z.B. beim Beweis
bestimmter Saetze in der Elementargeometrie (Pythagoras, oder ueber
merkwuerdige Punkte im Dreieck), sehr wohl entscheiden,
welches der einfachste Beweis ist."
------------- End Forwarded Message -------------
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: salamon-ja@att.net
Cc: kolmogorov@listbot.com
Subject: Re: FOM: Hilbert's 24th problem
Date: Wed, 04 Apr 2001 18:04:31 +0000
X-Mailer: AT&T Message Center Version 1 (Mar 27 2001)
Message-Id: <20010404180432.SYMG21045.mtiwmhc25.worldnet.att.net@webmail.worldnet.att.net>
The shortest proof is not necessarily the 'simplest'
proof in mathematics.
A mathematical result may follow from a previous theorem
understood by only 10 mathematicians in the world. Hence
the whole proof may only be two steps.
The only way that shortness of a proof can be used as a
measure of simplicity is to convert the proof into
fundamental mathematical principles and THEN see how long
it is compared to another proof similarly converted.
In other words, both proofs being compared would have to
be derived from, say, set theory, or some other baseline,
before they could be quantitatively compared to each other
vis-a-vis length or simplicity.
The concept of simplicity has to be rigorously defined if
it is to be used as a measure of simplicity or indeed of
Kolmogorov complexity (which amounts to the same thing).
That is probably why Hilbert did not propose this
problem: he probably realized that simplicity is not
easily defined and/or measurable.
Jacob Salamon
Jacob Salamon
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Message-Id: <200104041838.f34Icpi01294@cs.utep.edu>
Date: Wed, 4 Apr 2001 12:38:51 -0600 (MDT)
From: Vladik Kreinovich
Reply-To: Vladik Kreinovich
Subject: Re: FOM: Hilbert's 24th problem
Cc: kolmogorov@listbot.com
Dear Dr. Salamon,
Thanks a lot for your prompt and thoughtful comments.
I agree 100% with everything you say.
In particular, I agree that shortness is not necessarily an indication of
simplicity; this is why I said "e.g.". I just wanted to explain why I feel
that this message is relevant for this community (since the original concept of
Kolmogorov complexity is the length of the shortest program - in a given
universal language - producing a given string).
I think that there are at least two reasons why this new discovery is relevant
to us.
First, the fact that Hilbert seriously considered adding this problem may be a
good PR boost for our community. I think you made a very reasonable suggestion
that a possible reason why Hilbert did not include this problem is that he
realized that defining complexity is not that easy. We know it is not that
easy, but we have the advantage of Kolmogorov complexity being a well-developed
area.
Second, it may give us food for thought for applications to proofs.
Vladik
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Date: Thu, 5 Apr 2001 20:20:02 +1000 (EST)
Message-Id: <200104051020.UAA20287@fangorn.csse.monash.edu.au>
From: dld@mail.csse.monash.edu.au
Subject: Re: In MDL proof, why is K( Pr( * | H )) >= K( P( H ) )
Cc: kolmogorov@listbot.com
> From kolmogorov-return-30-25901626@listbot.com Wed Apr 4 16:06:00 2001
> Subject: Re: In MDL proof, why is K( Pr( * | H )) >= K( P( H ) )
> To: Kolmogorov Complexity
> Cc: kolmogorov@listbot.com
>
> > I'm probably being dense, but why can we assume that K(Pr(*|*))=O(1)?
> > Rather, do you mean that Pr(*|*) is fixed based on the particular problem
> we
> > are dealing with and therefore is a constant with respect to D and H?
>
> Exactly! Personally I would leave K(Pr(*|*)) and write O(1) only if it
> really
> depends on nothing, because the use of O() is sometimes very overloaded.
Dear Gary, Marcus, Paul et al.,
Hi.
I would invite interested parties unfamiliar with the work to look at:
C. S. Wallace and (me) D. L. Dowe (1999), "Minimum Message Length and
Kolmogorov complexity", Computer Journal, Vol. 42, No. 4, pp. 270-283;
and, of course, other articles in that special issue of the Computer
Journal.
I agree with Marcus that the use of O(1) is over-loaded. As discussed in
that special issue of the Computer Journal, the choice of the (U)TM and the
corresponding O(1) is tantamount to the choice of Bayesian prior.
When doing real-world MML/MDL inference on a data-set, such an O(1) could be
bigger than the size of the data-set under consideration. Page 280 of
Wallace-Dowe (1999) contains Section 7, "Terms 'of order one'", elaborating on
this point.
Regards.
David.
http://www.csse.monash.edu.au/~dld
>
> Maybe you find something interesting in my related works on universal
> induction,
> AI and fast search lgorithms.
> http://www.hutter1.de/ai
>
> "The Fastest and Shortest Algorithm for All Well-Defined Problems"
> "General Loss Bounds for Universal Sequence Prediction"
> "A Theory of Universal Artificial Intelligence based on Algorithmic
> Complexity"
>
>
> Marcus
>
> ---
> Dr. Marcus Hutter, IDSIA
> Istituto Dalle Molle di Studi sull'Intelligenza Artificiale
> Galleria 2 CH-6928 Manno(Lugano) - Switzerland
> Phone: +41-91-6108667 Fax: +41-91-6108661
> E-mail marcus@idsia.ch http://www.idsia.ch/~marcus
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: "P. Vitanyi"
Received: (from paulv@localhost)
by kuip.cwi.nl (8.11.0/8.9.3/FLW-3.2C) id f3NG50g04453
for kolmogorov@listbot.com; Mon, 23 Apr 2001 18:05:00 +0200
Date: Mon, 23 Apr 2001 18:05:00 +0200
Message-Id: <200104231605.f3NG50g04453@kuip.cwi.nl>
Subject: Two new papers
Dear All,
This list seems like an effective way to spread info on new papers.
I propose we do that, and kick-off:
You may be interested in the following papers on the theory and
applications of Kolmogorov complexity:
http://xxx.lanl.gov/abs/math.PR/0006233
Algorithmic Statistics
Authors: Peter Gacs (Boston University), John Tromp (CWI), Paul Vitanyi (CWI and University of Amsterdam)
Comments: LaTeX, 22 pages, 1 figure, IEEE Trans. Inform. Theory, to appear. (Prel. Version: Proc. ALT2000, LNAI Vol. 1968, Springer Verlag,
Berlin, 2000, 219--233.)
Subj-class: Probability Theory; Data Analysis, Statistics and Probability; Learning
MSC-class: 62B05, 62B10, 68Q32, 68Q30, 60AXX, 68T04
While Kolmogorov complexity is the accepted absolute measure of information content of an individual finite object, a similarly
absolute notion is needed for the relation between an individual data sample and an individual model summarizing the information in
the data, for example, a finite set (or probability distribution) where the data sample typically came from. The statistical theory based
on such relations between individual objects can be called algorithmic statistics, in contrast to classical statistical theory that deals
with relations between probabilistic ensembles. We develop the algorithmic theory of statistic, sufficient statistic, and minimal
sufficient statistic. This theory is based on two-part codes consisting of the code for the statistic (the model summarizing the
regularity, the meaningful information, in the data) and the model-to-data code. In contrast to the situation in probabilistic statistical
theory, the algorithmic relation of (minimal) sufficiency is an absolute relation between the individual model and the individual data
sample. We distinguish implicit and explicit descriptions of the models. We give characterizations of algorithmic (Kolmogorov)
minimal sufficient statistic for all data samples for both description modes---in the explicit mode under some constraints. We also
strengthen and elaborate earlier results on the ``Kolmogorov structure function'' and ``absolutely non-stochastic objects''---those
rare objects for which the simplest models that summarize their relevant information (minimal sufficient statistics) are at least as
complex as the objects themselves. We demonstrate a close relation between the probabilistic notions and the algorithmic ones.
http://xxx.lanl.gov/abs/quant-ph/0102108
Quantum Kolmogorov Complexity Based on Classical Descriptions
Authors: Paul M.B. Vitanyi
Comments: 17 pages, LaTeX, final and extended version of quant-ph/9907035, IEEE Transactions on Information Theory, To appear
Subj-class: Quantum Physics; Computational Complexity; Logic
We develop a theory of the algorithmic information in bits contained in an individual pure quantum state. This extends classical
Kolmogorov complexity to the quantum domain retaining classical descriptions. Quantum Kolmogorov complexity coincides with
the classical Kolmogorov complexity on the classical domain. Quantum Kolmogorov complexity is upper bounded and can be
effectively approximated from above under certain conditions. With high probability a quantum object is incompressible. Upper- and
lower bounds of the quantum complexity of multiple copies of individual pure quantum states are derived and may shed some light on
the no-cloning properties of quantum states. In the quantum situation complexity is not sub-additive. We discuss some relations with
``no-cloning'' and ``approximate cloning'' properties.
http://xxx.lanl.gov/abs/cs.CV/0101036
The Generalized Universal Law of Generalization
Authors: Nick Chater (Univ. Warwick), Paul Vitanyi (CWI and Univ. Amsterdam)
Comments: 17 pages LaTeX, Submitted
Subj-class: Computer Vision and Pattern Recognition; Artificial Intelligence; Physics and Society; Probability Theory
ACM-class: J.4
It has been argued by Shepard that there is a robust psychological law that relates the distance between a pair of items in psychological
space and the probability that they will be confused with each other. Specifically, the probability of confusion is a negative
exponential function of the distance between the pair of items. In experimental contexts, distance is typically defined in terms of a
multidimensional Euclidean space, but this assumption seems unlikely to hold for complex stimuli. We show that, nonetheless, the
Universal Law of Generalization can be derived in the more complex setting of arbitrary stimuli, using a much more universal
measure of distance. This universal distance is defined as the length of the shortest program that transforms the representations of the
two items of interest into one another: the algorithmic information distance. It is universal in the sense that it minorizes every
computable distance: it is the smallest computable distance. We show that the universal law of generalization holds with probability
going to one, provided the confusion probabilities are computable. We also give a mathematically more appealing form.
http://xxx.lanl.gov/abs/physics/0005062
Applying MDL to Learning Best Model Granularity
Authors: Qiong Gao (Chinese Academy of Sciences), Ming Li (University of California, Santa Barbara), Paul Vitanyi (CWI and University of
Amsterdam)
Comments: LaTeX, 32 pages, 5 figures. Artificial Intelligence journal, To appear
Subj-class: Data Analysis, Statistics and Probability; Artificial Intelligence; Computer Vision and Pattern Recognition
The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov
complexity. We test how the theory behaves in practice on a general problem in model selection: that of learning the best model
granularity. The performance of a model depends critically on the granularity, for example the choice of precision of the parameters.
Too high precision generally involves modeling of accidental noise and too low precision may lead to confusion of models that should
be distinguished. This precision is often determined ad hoc. In MDL the best model is the one that most compresses a two-part code
of the data set: this embodies ``Occam's Razor.'' In two quite different experimental settings the theoretical value determined using
MDL coincides with the best value found experimentally. In the first experiment the task is to recognize isolated handwritten
characters in one subject's handwriting, irrespective of size and orientation. Based on a new modification of elastic matching, using
multiple prototypes per character, the optimal prediction rate is predicted for the learned parameter (length of sampling interval)
considered most likely by MDL, which is shown to coincide with the best value found experimentally. In the second experiment the
task is to model a robot arm with two degrees of freedom using a three layer feed-forward neural network where we need to
determine the number of nodes in the hidden layer giving best modeling performance. The optimal model (the one that extrapolates
best on unseen examples) is predicted for the number of nodes in the hidden layer considered most likely by MDL, which again is
found to coincide with the best value found experimentally.
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Date: Wed, 2 May 2001 17:56:18 +1000 (EST)
Message-Id: <200105020756.RAA02281@fangorn.csse.monash.edu.au>
From: dld@mail.csse.monash.edu.au
Subject: Monash: 2 x Research Fellow in Computational Data Analysis & Modelling
Dear All,
In Wednesday 2 May 2001's The Australian (a newspaper well known here in
Australia :-) ), there is an ad in the Higher Education section, page 33, for
"Research Fellow in Computational Data Analysis and Modelling (2 positions)".
The advertisement says School of Comp. Sci. and Softw. Eng. (CSSE),
Monash University, with a salary between Aus$35,594p.a. and Aus$60,382p.a.
Further info: Leeanne Evans, leeanne@csse.monash.edu.au +61 3 9905-5200.
The positions are available immediately, and are initially for 1 year with
the prospect of renewal.
Minimum Message Length (MML) was developed by Chris Wallace of Monash in
(Wallace and Boulton, 1968), and there is still a very strong world-class
MML research group here consisting of Chris Wallace, yours truly (David Dowe)
and others. The relation between MML and Kolmogorov complexity has been
studied by a number of authors, including Chris Wallace and myself, Paul
Vitanyi and colleagues, and others.
If you have any questions of a technical nature, please contact me.
If you have any questions of an administrative nature, please contact
Leeanne Evans, leeanne@csse.monash.edu.au +61 3 9905-5200.
Ref. No.: A012737 Deadline: 16th May 2001.
Regards to all. - David.
Dr David Dowe, School of Computer Science and Software Eng.,
Monash University, Clayton, Victoria 3168, Australia
dld@cs.monash.edu.au Fax:+61 3 9905-5146 http://www.csse.monash.edu.au/~dld/
http://www.csse.monash.edu.au/~dld/MML.html
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Date: Mon, 14 May 2001 17:49:11 +0200 (MET DST)
From: Sophie.Laplante@lri.fr (Sophie Laplante)
Message-Id: <200105141549.RAA01138@sun-algo.lri.fr>
Subject: TAI 2001 Workshop Announcement
Sender: kolmogorov-return-38-16817135@listbot.com
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
TAI 2001 ANNOUNCEMENT - TAI 2001 ANNOUNCEMENT
We apologize for multiple copies
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
**********************************************
* *
* TAI 2001 *
* *
* September 24-26, 2001 *
* Porquerolles, Hyeres - France *
* *
**********************************************
Aim
===
The conference is organized with two main purposes. First, to gather
researchers working in different areas in order to reinforce and promote
international collaboration and to spread knowledge and expertise.
Second, to attract newcomers in order to enlarge the
community working on Kolmogorov complexity and related themes.
The first objective is achieved by invited talks given by internationally
renowned specialists, and by a series of shorter talks that
provide everybody with the opportunity of presenting their ideas and results.
The second objective is achieved by tutorial/introductory talks giving the
state of the art on fundamental aspects of Kolmogorov complexity, while
providing "hooks" for beginning a research program on the theme.
These will be particularly suited to young researchers.
TAI participants are also invited to participate in the conference
AUTOMATA 2001. More information can be found at the address:
http://www.cmi.univ-mrs.fr/~eforment/confs/aut2001/aut_main.html
Support
=======
The conference is supported by:
- Universite' de Provence
Topics
======
Topics of interest include (but are not limited to): Kolmogorov complexity
and related notions, algorithmic information theory and inductive inference,
quantum Kolmogorov complexity, Kolmogorov complexity and dynamical systems,
Kolmogorov complexity in bioinformatics.
Submission
==========
Researchers who plan to give a talk are invited to fill in the submission
form on the conference web site. There is room for about twenty short talks
of 30 minutes. If, for any reason, you cannot send the submission form
electronically, please send the title, author and a short abstract to
eforment@cmi.univ-mrs.fr or via surface mail to
Enrico Formenti
CMI - Centre de mathematique et informatique
39, rue Joliot-Curie 13453 Marseille cedex 13
France.
Program Committee
=================
B. Durand (Marseille), F. Denis (Lille), E. Formenti (Marseille),
S. Grigorieff (Paris), S. Laplante (Paris), J.-Y. Marion (Nancy)
Organizing Committee
====================
B. Durand (Marseille), J. Cervelle (Marseille), E. Formenti (Marseille),
S. Laplante (Paris)
Important dates
===============
REGISTRATION: May 25, 2001. (Very important!)
It is very important for us to know the number of expected participants
precisely since we have to pay in advance for the accommodations. If you
are planning to come please register as soon as possible. (Payment
for housing will be made at a later date.)
Submission deadline: June 30, 2001 <---- DEADLINE
Prices
======
Prices include breakfast, lunch, dinner and accommodations. There is no
registration fee. There are 15 double rooms and 10 single rooms:
400FF per person, per night (double room)
450FF per person, per night (single room)
Prices may decrease depending on pending institutional financial support.
Accommodations can also be arranged at hotels on the island. Please
see http://www.porquerolles.net/sejourner/index_sejourner.htm for a
list of alternate accommodations.
Further Information
====================
E-mail: mailto:eforment@cmi.univ-mrs.fr
WWW: http://www.cmi.univ-mrs.fr/~eforment/confs/tai2001/tai_main.html
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
TAI 2001 ANNOUNCEMENT - TAI 2001 ANNOUNCEMENT
We apologize for multiple copies
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Date: Tue, 15 May 2001 17:14:38 +0200 (CEST)
From: Olivier Bousquet
X-Sender: bousquet@georgev.polytechnique.fr
Subject: Algorithmic Complexity and Genetic Programming
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Dear all,
I would like to draw the attention of people interested in Kolmogorov
Complexity to the following message posted by Thomas English on the
Genetic Algorithms mailing list (see at the end of this message).
In broad terms, the goal of Genetic Programming is to identify a
computable function by stochastic search in the space of all programs.
The search is guided by the 'fitness' which is a measure of how 'close' a
program is to the target (e.g. how close their outputs are on a specific
input or on a set of inputs).
The way the space is searched depends on 'genetic operators' which
transform single programs or pairs of programs into new programs.
Of course, one would like to identify the target faster than using pure
random search in the huge space of all programs.
Obviously, this can only be achieved if one has some prior knowledge about
where the target lies in the space (e.g. the maximum length of the
corresponding program). This is known as the No Free Lunch problem (NFL).
It thus seems to me that Genetic Programming is indeed related to Levin's
search (see e.g. a recent paper by Marcus Hutter - reference in an earlier
post) and that some of the issues in Genetic Programming could be
addressed by Algorithmic Complexity as Thomas English already noticed.
I thus hope to see in the near future a growing interaction between these
fields.
Regards,
Olivier Bousquet.
----------
Sender: Thomas English
Subject: NFL and algorithmic complexity
Considerations of algorithmic complexity in "no free lunch" (NFL) depend
critically upon whether one is treating optimization or learning [1].
Here I limit my comments to optimization.
It has been suggested that bounding the algorithmic complexity of test
functions results in a "free nibble"--i.e., a small superiority for some
optimizers over others. In reality, the bound does not suffice. NFL is a
property of the distribution of functions, and can be exhibited even if
the distribution is defined on a set of functions of very low
algorithmic complexity. For instance, there is no free lunch if
functions are drawn uniformly from the set of all "needle-in-a-haystack"
functions mapping exactly one domain element to 1 and all other domain
elements to 0 [2]. The functions in this set are of very low algorithmic
complexity, so no complexity bound of practical interest rules out the
possibility of an NFL distribution of test functions.
It is interesting to note that the uniform distribution on
needle-in-a-haystack functions is both the hardest to maximize and the
easiest (of all distributions excluding constant functions) to minimize.
That is, a single NFL distribution can be exceedingly benign or
perverse, depending upon the optimization objective.
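Tom's needle-in-a-haystack claim is easy to check numerically. A minimal sketch (the function and variable names are my own, purely illustrative): with the needle position drawn uniformly, any fixed non-repeating search order needs about (N+1)/2 evaluations on average to find the maximum, so no order beats another.

```python
import random

def avg_evals_to_needle(order, N, trials=20000):
    # Average number of evaluations a fixed search order needs to find the
    # needle, with the needle position drawn uniformly over the domain
    # (the NFL distribution from the message above).
    total = 0
    for _ in range(trials):
        needle = random.randrange(N)          # where the single 1 sits
        total += order.index(needle) + 1      # evaluations until it is seen
    return total / trials

N = 16
left_to_right = list(range(N))                # one fixed search order
some_other_order = random.sample(range(N), N) # any other fixed order
# Both average about (N + 1) / 2 = 8.5 evaluations:
print(avg_evals_to_needle(left_to_right, N))
print(avg_evals_to_needle(some_other_order, N))
```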
Thomas English
Tom.English@ieee.org
[1] English, T. M. 2000. "Optimization is easy and learning is hard in
the typical function," Proceedings of 2000 Congress on Evolutionary
Computation: CEC00, pp. 924-931
(http://members.door.net/tmenglish/cec2000.pdf).
[2] English, T. M. 1996. "Evaluation of evolutionary and genetic
optimizers: No free lunch," Evolutionary Programming V: Proc. of the
Fifth Ann. Conf. on Evolutionary Programming, L. J. Fogel, P. J.
Angeline, and T. Baeck, eds., pp 163-169.
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: Paul Cockshott
To: Olivier Bousquet
Subject: Re: Algorithmic Complexity and Genetic Programming
Date: Wed, 16 May 2001 12:13:58 +0100
References:
Message-Id: <0105161215190L.05183@pickersgill>
When doing experiments using genetic programming to predict economic
time series, I have built the Kolmogorov cost of the formula produced into
the fitness function; otherwise one can end up with arbitrarily complex
formulae which match the data arbitrarily closely.
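Since true Kolmogorov complexity is uncomputable, a practical penalty of this kind usually uses the description length of the formula as a proxy. A minimal sketch (the polynomial representation and all names are hypothetical, not Paul's actual code):

```python
def predict(coeffs, x):
    # Evaluate a formula given as a polynomial coefficient list
    # (lowest degree first).
    return sum(c * x ** i for i, c in enumerate(coeffs))

def fitness(coeffs, xs, ys, lam=0.1):
    # Squared error plus a description-length penalty: the size of the
    # formula stands in for its (uncomputable) Kolmogorov cost.
    error = sum((predict(coeffs, x) - y) ** 2 for x, y in zip(xs, ys))
    return error + lam * len(coeffs)   # lower is better

xs, ys = [0, 1, 2, 3], [1, 2, 5, 10]   # data generated by 1 + x^2
exact = [1, 0, 1]                       # the short true formula
padded = [1, 0, 1, 0, 0, 0, 0]          # fits equally well, needlessly long
print(fitness(exact, xs, ys) < fitness(padded, xs, ys))  # → True
```

The penalty makes the shorter of two equally accurate formulae win, which is exactly the overfitting guard described above.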
Paul Cockshott, University of Glasgow, Glasgow, Scotland
0141 330 3125 mobile:07946 476966
paul@cockshott.com
http://www.dcs.gla.ac.uk/people/personal/wpc/
http://www.dcs.gla.ac.uk/~wpc/reports/index.html
_____________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: "Bush, Stephen F (CRD)"
Subject: Software Library of Kolmogorov Complexity Related Functions
Date: Wed, 23 May 2001 15:25:56 -0400
X-Mailer: Internet Mail Service (5.5.2653.19)
Sender: kolmogorov-return-40-16817135@listbot.com
Is anyone aware of a software library of Kolmogorov Complexity related functions (preferably Java
packages)?
I'm thinking in terms of a package of objects that use various techniques to estimate K.
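One crude technique such a package would almost certainly include is compression-based estimation: any lossless compressor gives a computable upper bound on K, since the true value is uncomputable. A minimal sketch in Python rather than Java (illustrative only, not an existing package):

```python
import random
import zlib

def k_upper_bound_bits(data: bytes) -> int:
    # The compressed size is an upper bound on K(data), up to the
    # constant-size decompressor (ignored here); it can badly
    # overestimate the true value.
    return 8 * len(zlib.compress(data, 9))

regular = b"ab" * 500                                      # highly structured
noise = bytes(random.randrange(256) for _ in range(1000))  # incompressible
print(k_upper_bound_bits(regular) < k_upper_bound_bits(noise))  # → True
```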
Any pointers would be appreciated.
Thanks,
Steve Bush
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Date: Sat, 26 May 2001 10:58:05 +0200 (MEST)
Message-Id: <200105260858.f4Q8w5T29831@lri.lri.fr>
From: Sophie.Laplante@lri.fr
Subject: TAI 2001 registration deadline extended
Sender: kolmogorov-return-42-16817135@listbot.com
Kolmogorov Complexity - http://www.cmap.polytechnique.fr/~bousquet/kolmogorov.html
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
TAI 2001
Workshop on Algorithmic Information Theory
REGISTRATION DEADLINE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
**********************************************
* TAI 2001 *
* *
* September 24-26, 2001 *
* Porquerolles, Hyeres - France *
**********************************************
TAI 2001 is the fifth annual workshop on Algorithmic Information
Theory and Kolmogorov complexity.
The registration deadline has been extended to JUNE 2, 2001.
Please see the web site below for more information.
E-mail: mailto:eforment@cmi.univ-mrs.fr
WWW: http://www.cmi.univ-mrs.fr/~eforment/confs/tai2001/tai_main.html
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
TAI 2001 REGISTRATION DEADLINE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: "P. Vitanyi"
Date: Mon, 28 May 2001 00:39:49 +0200
Message-Id: <200105272239.f4RMdnY12247@kuip.cwi.nl>
Subject: Your help for 3rd Edition Li-Vitanyi is solicited
Dear All,
We are preparing the 3rd edition of "Ming Li, Paul Vitanyi, An Introduction
to Kolmogorov Complexity and Its Applications", Springer-Verlag, 1997 (2nd ed.).
The area has meanwhile grown to such an extent that it is impossible
(at least for us) to keep track of the publications and results.
We would like to ask you to send us the following information about you
and your co-authors and immediate colleagues:
a) A list of publications from 1997 onwards (or not appearing in the 1997 edition).
b) A short writeup of the major results as a LaTeX fragment or plain text, not
exceeding 1-2 pages in total, roughly in the style of the exercises in the
earlier editions.
c) A list of *final* publications in technical journals and collections having
appeared from 1997 onwards (or not in the bibliography of the 1997 edition).
We will probably include all final journal publications in the bibliography, and select
results to either expand in the main text or enter as exercises. Note that it will
probably not be possible to enter everything anyway, since the current edition
is already too thick. We will have to severely select material: a simple, short, and
instantly understandable writeup from you can help us make a good selection.
Please send the above by email to Paul.Vitanyi@cwi.nl
Thank you very much in advance for your help,
Ming Li
Paul Vitanyi
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Date: Wed, 30 May 2001 16:19:17 +0200
Subject: Question on Monotonic Complexity
From: "Gerard Pinson"
Mime-version: 1.0
Sender: kolmogorov-return-44-16817135@listbot.com
Dear All,
Let KM be the so-called "monotone complexity". See Uspensky and Shen, Math.
Systems Theory, 29, 271-292 (1996); see also Li & Vitanyi, definition 4.5.9
in § 4.5.4 ("monotone complexity", denoted Km).
KM is defined in the usual way, as the length of the shortest description:
KM(x) = min{ l(p) : U(p) = x }
where p and x are both prefixed.
Questions:
Is Km additive? I.e., Km(x,y) = Km(x) + Km(y|x)?
Subadditive? I.e., Km(x,y) <= Km(x) + Km(y)?
Symmetric? I.e., Km(x:y) = Km(y:x)?
Regards,
Gerard Pinson
pinson.g@wanadoo.fr
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: "P. Vitanyi"
Date: Thu, 31 May 2001 21:01:47 +0200
Message-Id: <200105311901.f4VJ1l312565@kuip.cwi.nl>
Subject: Re: Question on Monotonic Complexity
Sender: kolmogorov-return-45-16817135@listbot.com
All these properties hold for Km up to an additive log error term.
The additive property has a log error term that cannot be improved.
The subadditive property follows from the additive one.
Symmetry Km(x:y) = Km(y:x) up to an additive log term follows from
the additive property.
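For reference, written out with explicit error terms (my notation; the precise argument of the logarithm varies by author):

```latex
Km(x,y) = Km(x) + Km(y \mid x) + O(\log)
Km(x,y) \le Km(x) + Km(y) + O(\log)
Km(x\!:\!y) = Km(y\!:\!x) + O(\log)
```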
Cheers, Paul
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: Allan Erskine
Subject: query: learning rates for high K-complexity solutions
Date: Mon, 11 Jun 2001 00:09:57 +0100
Message-ID: <00a901c0f202$7a8f4540$ea4dfea9@spukny>
MIME-Version: 1.0
Greetings all,
Following the recent post on genetic programming and Kolmogorov complexity,
I have a query along slightly different lines, and was hoping an expert
could advise.
I'm researching relationships between Kolmogorov (K-) complexity and machine
learning; the advice I seek is on the worthiness of my research proposal
(e.g. don't bother/done before in [x], or good idea/try looking at [y]).
Short version of research proposal is to investigate the following (assuming
we've phrased some learning problem in a suitable way for this analysis):
Identify difficult learning problems with those having high K-complexity for
any (representation of a) near-optimal solution.
Suppose we're told in advance that any optimal solution to a learning
problem is going to have high K-complexity, say k, and suppose further that
this is much larger than the size n of the learning algorithm we intend to
use, i.e. n << k.
Does this impose the constraint of either exponential learning time in
poly(k-n), or sub-exponential learning time in poly(k-n) but only at the
expense of the learning environment "giving the game away", i.e. somehow
pinpointing the solution out of all 2^k programs for complexity-k solutions?
Or shorter still: I propose to research lower bounds on the running time of
certain learning algorithms based on the minimum complexity k of any
near-optimal solution and the size n of the learning algorithm.
Many thanks in advance of any help or comments,
Allan Erskine
(PhD student, UCL)
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: "P. Vitanyi"
Date: Mon, 11 Jun 2001 18:08:24 +0200
Message-Id: <200106111608.f5BG8Om09196@kuip.cwi.nl>
Subject: Re: query: learning rates for high K-complexity solutions
Sender: kolmogorov-return-47-16817135@listbot.com
That of course makes sense. This idea is formalized in Exercise
5.3.7 (page 338) of Li-Vitanyi 2nd edition of 1997,
and more extensively in Li-Vitanyi 1st edition of 1993
Section 5.4.4 on pp 294-295. There it is done without time
bounds (in terms of plain complexity). If you do the same thing
in terms of time-bounded Kolmogorov complexity then you will obtain your
goal below. The matter is a little subtle, and I believe there
can be many ramifications.
Cheers, Paul Vitanyi
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: Allan Erskine
Subject: RE: query: learning rates for high K-complexity solutions
Date: Fri, 15 Jun 2001 04:30:50 +0100
Message-ID: <000b01c0f54b$97023330$ea4dfea9@spukny>
MIME-Version: 1.0
Thanks Paul for the encouraging reply and for the lead - as it happens my
copy of LV was lying open very near that exercise.
I'm definitely finding the matter more than a little subtle; for anyone who
is interested, though, I have found reinforcement learning to be a nice area
to study with regard to learning rates vs solution complexity (see appended
note).
I'll post again if I make progress in this direction, or have further
queries.
Thanks again,
Allan
--- note on RL
Briefly, the process of reinforcement learning can be viewed as a sequence
of interactions with an environment having the following structure (at time
step i):
- the environment E takes on some state s_i
- based on the state s_i, the reinforcement learner chooses an action, a_i
- the learner receives a scalar reward r_i depending on the utility of its
action
Calling a triple (s,a,r) an interaction, reinforcement learning can be
viewed as a stochastic process RL = I_0 I_1 I_2 ... of interactions, governed
by some distribution P(I_0 I_1 ... I_n) giving the probability that a certain
sequence of interactions is observed (the distribution encompasses the
learner, the environment, and the reward signal).
But with a prefix-free binary encoding of the triples I, we can also view
reinforcement learning as governed by a distribution \mu(B_0 B_1 ... B_n)
over binary strings. This leaves us in a situation similar to Solomonoff's
formulation of induction, which feels like a healthy place to begin studying
notions of program size vs learning rates.
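The prefix-free encoding of triples mentioned above can be as simple as a self-delimiting length header. A toy sketch (my own choice of code, purely illustrative; real constructions are more economical):

```python
def self_delimit(s: str) -> str:
    # Encode a binary string so codewords can be concatenated and still
    # split unambiguously: length in unary, a 0 marker, then the bits.
    return "1" * len(s) + "0" + s

def split_stream(stream: str):
    # Decode a concatenation of self-delimiting codewords back into strings.
    out, i = [], 0
    while i < len(stream):
        n = 0
        while stream[i] == "1":     # read the unary length header
            n += 1
            i += 1
        i += 1                      # skip the 0 marker
        out.append(stream[i:i + n])
        i += n
    return out

coded = self_delimit("101") + self_delimit("0") + self_delimit("11")
print(split_stream(coded))  # → ['101', '0', '11']
```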
Of course, here's where the subtleties begin, the first being that we need
to separate out the learner's contribution to the overall distribution \mu...
All comments/queries most welcome!
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: "Marcus Hutter"
References: <000b01c0f54b$97023330$ea4dfea9@spukny>
Subject: Re: query: learning rates for high K-complexity solutions
Date: Fri, 15 Jun 2001 10:25:41 +0200
MIME-Version: 1.0
Content-Type: text/plain;
charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
Hello Allan Erskine,
The community of researchers trying to unify reinforcement learning
with Solomonoff's algorithmic probability seems to be growing.
I have already written up a lot of things.
Probably you know already, but just in case not:
See "A Universal Theory of Artificial Intelligence based on Algorithmic
Probability and Sequential Decisions"
ftp://ftp.idsia.ch/pub/techrep/IDSIA-14-00.ps.gz (8 pages)
http://www.hutter1.de/ai/pkcunai.htm (62 pages)
James Rogers tries similar things
http://www.sysopmind.com/archive-sl4/0104/0253.html
and Juergen Schmidhuber is around already for a long time :-)
http://www.idsia.ch/~juergen
Your final comment is very important:
> Of course, here's where the subtleties begin, the first being that we need
> to separate out the learner's contribution to the overall distribution \mu...
I stumbled over this problem very early in my attempts with the following
solution:
The problem with P(I_0 I_1 ... I_n) is that it contains its own actions.
If you use Solomonoff's universal distribution M for P, the following happens:
M is only enumerable, but not computable; the optimal action is then only
approximable, but not enumerable; hence the agent itself does not satisfy
the computability assumption, and is hence itself not dominated by M.
(The problems also show up if you scale everything down to time-bounded
distributions and agents.)
On the other hand, domination is important for convergence M->mu.
The way to avoid this is to consider from the very beginning
conditional probabilities P(s_1 r_1...s_n r_n|a_1..a_n), since
probabilities of a_i's are not needed anyway.
M can be defined as the 2^-l(p) weighted sum over all programs p computing
s_1 r_1...s_n r_n when a_1..a_n is on the INPUT tape.
See http://www.hutter1.de/ai/pkcunai.htm for details.
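In symbols (my rendering of the last paragraph; U is the reference machine with a_1..a_n on its input tape and \ell(p) is the length of program p):

```latex
M(s_1 r_1 \ldots s_n r_n \mid a_1 \ldots a_n)
  \;=\; \sum_{p \,:\, U(p,\, a_1 \ldots a_n) \,=\, s_1 r_1 \ldots s_n r_n} 2^{-\ell(p)}
```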
Cheers
Marcus
---
Dr. Marcus Hutter, IDSIA
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale
Galleria 2 CH-6928 Manno(Lugano) - Switzerland
Phone: +41-91-6108667 Fax: +41-91-6108661
E-mail marcus@idsia.ch http://www.idsia.ch/~marcus
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: Allan Erskine
Subject: RE: query: learning rates for high K-complexity solutions
Date: Wed, 20 Jun 2001 19:39:28 +0100
Message-ID: <004f01c0f9b8$58ad1250$ea4dfea9@spukny>
MIME-Version: 1.0
Thanks, Marcus, for the interesting mail... I hadn't discovered your recent
generalisations of Solomonoff's induction. So far I've only had time to skim
through your papers - I'm looking forward to going over them in more detail.
I'm using the O(n2^l) bound you (and others) have as the departure point for
my investigations.
Loosely, if it takes a program of at least size k to describe the behaviour
of an agent behaving optimally over n time steps (i.e. if the algorithmic
minimal statistic for any range of optimal n-length behaviours is of minimum
size k), then that program can be found by coin tossing or by enumeration in
time O(n2^k). So here k represents the amount of structure, or
sophistication, in any optimal solution. Difficult learning problems can be
identified with those requiring highly structured solutions, and they ought
to be hard to find.
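The O(n2^k) enumeration baseline can be sketched directly. A toy setting (all names are mine): "programs" are k-bit strings, and the n-step optimality check is an assumed oracle costing O(n) per candidate.

```python
from itertools import product

def enumerate_search(k, n, behaves_optimally):
    # Brute-force the 2^k candidate programs of length k, checking each one's
    # n-step behaviour: O(n * 2^k) work in total.
    # `behaves_optimally` is an assumed oracle costing O(n) per candidate.
    for bits in product("01", repeat=k):
        candidate = "".join(bits)
        if behaves_optimally(candidate, n):
            return candidate
    return None

# Toy check: the "optimal" program happens to be a fixed 4-bit string.
print(enumerate_search(4, 10, lambda p, n: p == "1011"))  # → 1011
```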
It then makes sense to consider O(n2^k) as the zero point for comparing
learning algorithms: any algorithm running in this time to discover
high-k structured solutions should be considered "as bad as it gets". My
hypothesis is that all constant-size algorithms are "as bad as it gets" in a
certain sense, and I'm trying to give a rigorous characterisation of the
algorithms considered heuristically by Solomonoff, Minsky, Lenat etc. that can
"grow" to exploit structure. My goal, having characterised "fixed" vs
"growing" algorithms, is to give _any_ sort of existence proof of some
"growing" algorithm minorising all "fixed" algorithms in running time.
All I can say with certainty for now though is I'm finding the matter very
slippery indeed, and of course, all help and comments greatly appreciated!
Allan
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: dld@mail.csse.monash.edu.au
Date: Mon, 1 Oct 2001 00:34:33 +1000 (EST)
Message-Id: <200109301434.AAA11244@fangorn.csse.monash.edu.au>
To: bousquet@cmapx.polytechnique.fr
Subject: Research Fellows in Information Technology
Dear Cher Olivier,
Hi.
Could you please forward this ad to the Kolmogorov e-mail list?
Ta.
David.
Research Fellows in Information Technology
------------------------------------------
As part of its goal to cement its position as Australia's premier academic
institution for information technology research, the Faculty of Information
Technology of Monash University has established the IT Research Fellowship
Scheme to attract outstanding early career researchers in any field of
information technology to Monash University.
The positions are for up to 3 years and the salary is in the Research Fellow
Level B range Aus$50,847 - Aus$60,382 per annum depending on experience. In
addition, Fellowship holders are eligible for a Research Support Grant, which
can be up to Aus$10,000 per annum depending on the needs of the research
program. Return airfares may be available for successful interstate/overseas
candidates and their dependants.
The Faculty of Information Technology was created in 1990. It is Australia's
largest faculty exclusively devoted to information technology with 190
academics. It has an enviable research reputation in virtually all aspects of
information technology and produces more research postgraduates than any
other Australian university. Research in the Faculty is centred around:
artificial intelligence; machine learning; statistics and Minimum Message
Length (developed initially at Monash by Wallace and Boulton in 1968, and
surveyed by Wallace and Dowe in the Computer Journal, Vol. 42, No. 4,
pp270-283, 1999); intelligent decision support; operations research;
distributed systems, mobile computing and software engineering;
information systems and information management; digital communications and
multimedia; and computer education.
Location: Appointees may be based at Clayton (where the MML research group
is), Caulfield, Peninsula, Gippsland or Berwick campuses.
Contact: Further information and particulars of the application procedure may
be obtained from A/Prof. Kim Marriott, Associate Dean of Research, Faculty of
Information Technology telephone +61 3 9905 5525, facsimile
+61 3 9905 5146, e-mail: adr@infotech.monash.edu.au
Applications: Ms M Jones-Roberts, Manager, Research Services, Faculty of
Information Technology, P.O. Box 197, Caulfield East, Vic 3145 by 19/10/01.
Quote Ref No. A013055 and include a completed application form.
The position description, selection criteria, background information and
application form are available at
http://www.infotech.monash.edu.au/sta_pv_research.html
Date: Wed, 10 Oct 2001 15:49:28 +1000 (EST)
From: David L Dowe
Message-Id: <200110100549.f9A5nSG27946@bruce.csse.monash.edu.au>
Subject: Research Fellows in Machine Learning at Monash University
Dear all,
Apologies for any cross-posting.
Below is an advertisement for Research Fellows in Information Technology
at Monash University. If you are interested in machine learning and
machine learning by Minimum Message Length (MML), then I warmly invite
you to read the advertisement below for Research Fellows, for which the
application deadline is Friday 19th October.
David Dowe.
http://www.csse.monash.edu.au/~dld/MML.html
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
Date: Thu, 25 Oct 2001 21:13:59 +1000 (EST)
From: David L Dowe
Message-Id: <200110251113.f9PBDx602847@bruce.csse.monash.edu.au>
To: bousquet@cmapx.polytechnique.fr
Subject: Adv. for Lecturer/Senior Lecturer position (in MML) at Monash
> From michelle.ketchen@csse.monash.edu.au Thu Oct 25 20:53:05 2001
> Subject: Lecturer/Senior Lecturer
> To: allcsse@monash.edu.au
>
> Hi everyone,
>
> The advertisement below will be appearing in the Age and the Australian next week.
>
> Thanks
> Michelle
_________________________________________________
Lecturer/Senior Lecturer
Up to 4 positions available
School of Computer Science and Software Engineering
The School of Computer Science and Software Engineering is one of five schools within the Monash University Faculty of Information Technology.
The School employs over 60 academic staff on the Caulfield and Clayton campuses. Research interests within the school encompass software engineering; mobile and distributed systems; computer education; database systems and information retrieval; distributed, parallel and
mobile computing; graphics and image processing; logic and theory; artificial intelligence and machine learning primarily by Minimum Message Length (MML); communications and digital signal processing; digital systems hardware and architecture. The school
offers a range of undergraduate and graduate programs including the Bachelor of Computing, Computer Science, Software Engineering and Digital Systems and has a strong honours and PhD program.
Applicants for the position of Lecturer should have a PhD and a strong research record, with a proven ability to teach at both undergraduate and postgraduate levels. Teaching areas in particular demand include Parallel and Cluster Computing, Software Engineering, Distributed Object Technology, Data Communications and Programming Languages. Applicants for a senior lectureship must also demonstrate proven leadership
abilities in areas including academic administration, research management and industry collaboration.
CSSE Web site: http://www.csse.monash.edu.au/
Appointment will be made at a level appropriate to the successful applicant's qualifications, experience and in accordance with classification standards for each level.
The Benefits: $50,847 - $60,382 p.a. Lecturer
$62,288 - $71,822 p.a. Senior Lecturer
Location: Positions are available at Caulfield and Clayton. Teaching on both the Caulfield and Clayton campuses as required.
Contact: Ms Michelle Ketchen, Tel (03) 9903 2488 or email: michelle.ketchen@csse.monash.edu.au for inquiries and a copy of the position description.
Applications: Ms M Ketchen, Administration Manager, CSSE, PO Box 197, Caulfield East, VIC 3145 by 28/11/01. Quote Ref No. A013148 and
include curriculum vitae and the names (with phone and facsimile numbers) of three referees in your application.
"Monash University reserves the right to make multiple appointments in regard to each advertised position".
An Equal Opportunity Employer.
______________________________________________________________________
To unsubscribe, write to kolmogorov-unsubscribe@listbot.com
From: "Bush, Stephen F (CRD)"
To: "'Kolmogorov-List'"
Sent: Tuesday, December 18, 2001 2:57 PM
Subject: RE: Kolmogorov Mailing List
I am glad to see that this list is still alive.
We are trying to establish a tighter bridge between folks studying
complexity and those who are studying communication networks.
We are inviting experts in the field of Complexity Theory, not necessarily
in communication networks, to submit papers to a complexity and networking
conference. Details on the conference session can be found in .
Please forward this to others who may be interested.
Thanks,
Steve Bush
____________________________________________________________
to unsubscribe write to Kolmogorov-List@gmx.net with subject unsubscribe
From: "Vladik Kreinovich"
Sent: Wednesday, December 19, 2001 12:38 AM
Subject: FYI: an expert system using Kolmogorov complexity
Dear Friends,
Since the list has been revived, I am re-posting a message which I could not send due to the previous list's demise. Yours,
Vladik
****************************************************************************
Date: Wed, 17 Oct 2001 14:55:01 -0600 (MDT)
From: Vladik Kreinovich
Dear Friends,
At a recent IEEE Conference on Systems, Man, and Cybernetics, Stuart H. Rubin from the Navy Research Center in San Diego presented a talk in which he described a working expert system that, among other innovative ideas, uses the idea of Kolmogorov complexity. He is a specialist in intelligent systems, and he was unaware that there is a thriving Kolmogorov complexity community which, among other things, is interested in applications.
I therefore believe that it will be useful to send this information to the mailing list, so that interested people can look at his papers and, if necessary, contact him directly; his email is srubin@spawar.navy.mil.
I did not look into the details of his use of the idea; I am waiting for the conference proceedings, which will hopefully come shortly. Meanwhile, I am attaching a list of some of his papers that describe the use of Kolmogorov complexity in the design and use of intelligent systems. He has many more, so if you are interested, you may contact him.
Vladik
------------- Begin Forwarded Message -------------
From: "Stuart H. Rubin"
To: "Vladik Kreinovich"
Vladik:
This should be what you requested.
cheers,
Stuart
S. H. Rubin, "Evolving intelligent software processes," Intern. J. Comp. Sci. & Info. Mgt., at the invitation of R. Lee (senior ed.), spring 2001, vol. 3, no. 1, pp. 47-58.
S. H. Rubin, "A heuristic logic for randomization in fuzzy mining," J. Control and Intelligent Systems, at the invitation of M. H. Hamza (ed. in chief), 1999, vol. 27, no. 1, pp. 26-39.
S. H. Rubin, "Computing with words," IEEE Trans. Syst., Man, Cybern.: Part B, at the invitation of K. R. Pattipati (ed.), submitted 6/19/98, SMC 098-06-B626, accepted as a regular paper Nov. 10, 1998, vol. 29, no. 4, Aug. 1999, pp. 518-524; email: krishna@sol.uconn.edu (for files) and krishna@snet.net otherwise.
S. H. Rubin, "A fuzzy approach towards inferential data mining," Computers and Industrial Engineering, at the invitation of H. Eldin, 1998, vol. 35, nos. 1-2, pp. 267-270; reference CAIE 1152, journal code 399, papers at , email to t.ewers@elsevier.co.uk (i.e., Tennille Ewers), tel. (+44) (0) 1865 843000, 25 free offprints requested June 19, 1998.
____________________________________________________________
From: "Nikolai Vereshchagin"
To: "Kolmogorov-List"
Sent: Friday, December 21, 2001 8:29 AM
Subject: two meanings of the term "stochastic"
Dear colleagues,
I want to discuss a terminological confusion concerning the usage of the term "stochastic" in the Kolmogorov complexity literature.
In the eighties, Kolmogorov proposed a definition of an \alpha,\beta-stochastic string. An \alpha,\beta-stochastic string is a string $x$ that belongs to a set $A$ of complexity at most $\alpha$ such that the randomness deficiency of $x$ in $A$, $\log|A|-K(x|A)$, is at most $\beta$. This definition was used in Shen's (1983) and Vyugin's papers, and in the Muchnik-Semenov-Uspensky paper.
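[In display form, the definition just stated reads as follows; here $K(A)$ abbreviates the complexity of a description of the finite set $A$, matching the message's inline notation.]

```latex
% x is (\alpha,\beta)-stochastic iff some finite set A witnesses both bounds:
\[
  \exists A \ni x :\quad K(A) \le \alpha
  \qquad\text{and}\qquad
  \log|A| - K(x \mid A) \le \beta .
\]
```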
On the other hand, in 1987 Kolmogorov and Uspensky launched the use of the word "stochastic" to denote something different: an infinite binary sequence in which the frequency of 1s is close to 1/2, and the same holds for all subsequences chosen by admissible rules. Depending on what counts as an admissible rule, they used the terms "Church stochastic", "Kolmogorov--Loveland stochastic", etc.
Formally, the terms "stochastic" and "\alpha,\beta-stochastic" are different. However, I find this situation unsatisfactory.
My proposal is to choose another term to refer to sequences with frequency stability properties, for instance "balanced sequences". Then an \alpha,\beta-balanced string would be a string from which any selection rule of complexity at most $\alpha$ selects a subsequence in which the deviation of the frequency of 1s from 1/2 is at most $\beta$ (no term for this notion exists now). And a "stochastic" sequence would be an infinite sequence that is Martin--Lof random with respect to some computable measure (Levin used the term "random" for this notion, and Muchnik-Semenov-Uspensky the term "natural").
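[A minimal finite sketch, not part of the original message, of the frequency-stability property discussed above: a (Church-style) selection rule looks only at the prefix seen so far and decides whether to select the next bit; the quantity of interest is how far the frequency of 1s in the selected subsequence deviates from 1/2. All names here are illustrative.]

```python
def frequency_deviation(bits, select):
    """Return |freq(1) - 1/2| over the subsequence picked by `select`,
    where select(prefix) -> bool decides whether the next bit is chosen."""
    chosen = []
    for i, b in enumerate(bits):
        if select(bits[:i]):      # the rule sees only the bits before position i
            chosen.append(b)
    if not chosen:
        return 0.0
    return abs(chosen.count(1) / len(chosen) - 0.5)

# The alternating sequence 0101... looks balanced to the trivial rule
# "select every bit":
alternating = [i % 2 for i in range(1000)]
print(frequency_deviation(alternating, lambda prefix: True))   # 0.0

# But a rule that selects a bit only when the previous bit was 0 exposes
# the regularity: every selected bit is 1, so the deviation is 1/2.
after_zero = lambda prefix: bool(prefix) and prefix[-1] == 0
print(frequency_deviation(alternating, after_zero))            # 0.5
```

[This is exactly why admissible selection rules are quantified over in the definitions: a sequence is frequency-stable only if no admissible rule can tilt the selected frequency away from 1/2.]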
Nikolai Vereshchagin.
____________________________________________________________
From: "Vladik Kreinovich"
Sent: Tuesday, December 25, 2001 9:40 PM
Subject: conference of possible interest to us
Dear Friends,
This conference specifically emphasizes a minisymposium on chaos, algorithms, and complexity. In general, applications of Kolmogorov complexity to physics and mechanics are welcome. Vladik
****************************************************************************
From: "Anton Krivtsov"
Sent: Monday, November 19, 2001 3:26 AM
Subject: APM'2002
Dear Colleagues,
On behalf of the organizing committee of APM'2002 we would like to invite you to the conference "Advanced Problems in Mechanics - 2002". The conference is hosted by the Institute for Problems in Mechanical Engineering of the Russian Academy of Sciences and will take place June 27 - July 6, 2002 in a suburb of St. Petersburg on the bank of the Baltic Sea. APM'2002 will be the 30th in the series of annual summer schools held by the Russian Academy of Sciences. The topic of APM'2002 is wide and covers almost all fields of fundamental and applied mechanics.
Please find the first announcement of APM'2002 enclosed with this letter. You can also find detailed information at the website of the conference:
We are looking forward to meeting you in St. Petersburg this summer.
Yours sincerely,
Prof. Dmitry A. Indeitsev
Chairman of the Scientific Committee
Dr. Anton M. Krivtsov
Chairman of the Organizing Committee
------------------------------------------------
XXX Summer School "Advanced Problems in Mechanics"
June 27-July 6, 2002, St. Petersburg, Russia
First Announcement
A P M 2 0 0 2
To receive our announcements, please send an empty e-mail with the subject "subscribe" to
apm2002@eng.abdn.ac.uk
GENERAL INFORMATION
The Summer school Advanced Problems in Mechanics 2002 is organized by the
Institute for Problems in Mechanical Engineering of the Russian Academy of
Sciences (IPME RAS) under the patronage of the Russian Academy of Sciences
and Gesellschaft fuer Angewandte Mathematik und Mechanik (GAMM). The main
purpose of the meeting is to gather specialists from different branches of
mechanics to provide a platform for cross-fertilisation of ideas.
HISTORY OF THE SCHOOL
The first Summer School was organized by A.I. Lurie and Ya.G. Panovko in 1971. In the early years the main focus of the School was on nonlinear oscillations of mechanical systems with a finite number of degrees of freedom. The School specialized in this way because at that time regular National Meetings on Theoretical and Applied Mechanics were held in Russia (the USSR), along with many conferences on mechanics with a more particular specialization. After 1985 many conferences and schools on mechanics in Russia were terminated due to financial problems. In 1994 the Institute for Problems in Mechanical Engineering of the Russian Academy of Sciences restarted the Summer School. The traditional name "Summer School" has been kept, but the topics covered by the School have been much widened, and the School has been transformed into an international conference. The topics of the School now cover all fields of mechanics and associated interdisciplinary problems.
SCIENTIFIC COMMITTEE (changes are possible)
* A.K. Abramian (IPME RAS, St. Petersburg)
* V.V. Beletsky (Space Research Institute, Moscow)
* A.K. Belyaev (St. Petersburg State Technical University)
* I.I. Blekhman (IPME RAS, Mechanobr Research Institute, St. Petersburg)
* S.N. Gavrilov (IPME RAS, St. Petersburg)
* S.F. Golovaschenko (Ford Motor Company, USA)
* A. Guran (Institute of Structronics, Ottawa, Canada)
* D. Harris (University of Manchester, UK)
* A.B. Freidin (IPME RAS, St. Petersburg)
* D.A. Indeitsev (IPME RAS, St. Petersburg) -- Co-Chairman
* E.A. Ivanova (IPME RAS, St. Petersburg State Technical University)
* N.G. Kuznetsov (IPME RAS, St. Petersburg)
* D.P. Kouzov (IPME RAS, Marine Technical University)
* A.M. Krivtsov (IPME RAS, St. Petersburg State Technical University)
* I.A. Kunin (University of Houston, USA)
* G.A. Leonov (IPME RAS, St. Petersburg State University)
* M. McIver (Loughborough University, UK)
* P. McIver (Loughborough University, UK)
* N.F. Morozov (IPME RAS, St. Petersburg State University)
* V.A. Palmov (St. Petersburg State Technical University) -- Co-Chairman
* Y.G. Panovko (IPME RAS)
* H. Troger (Technical University of Wien, Austria)
* M. Wiercigroch (University of Aberdeen, UK)
* P.A. Zhilin (IPME RAS, St. Petersburg State Technical University)
* F. Ziegler (Technical University of Wien, Austria)
LOCAL ORGANIZING COMMITTEE
A.M. Krivtsov, St.Petersburg State Technical University, IPME RAS -
Chairman
A.D. Firsova, IPME RAS
S.N. Gavrilov, IPME RAS
E.F. Grekova, IPME RAS
E.A. Ivanova, St.Petersburg State Technical University, IPME RAS -
Scientific Secretary
A.D. Sergeev, IPME RAS
E.V. Serogo, IPME RAS
L. Sharipova, IPME RAS
E.V. Shishkina, IPME RAS
SCIENTIFIC PROGRAM
Presentations devoted to fundamental aspects of mechanics, or extending the field of applications of mechanics, are invited. We are particularly keen to receive contributions that SHOW NEW EFFECTS AND PHENOMENA OR DEVELOP NEW MATHEMATICAL MODELS. The topics of the School cover all fields of mechanics, including, but not restricted to,
* mechanics of non-classical media
* solids and structures
* phase transitions
* nanostructures and thin films
* wave motion
* nonlinear dynamics, chaos and vibration
* dynamics of rigid bodies and multibody dynamics
* computational mechanics
* mechanical and civil engineering applications
Four different forms of presentation are offered, namely plenary lectures (1 hour), presentations at minisymposia (30 minutes), short communications (20 minutes), and posters. The working language is English. For the attention of Russian participants: as an exception, the Russian language can be used only in posters, but even for posters English is strongly recommended. Regrettably we cannot provide simultaneous translation, and due to the international nature of the School all oral presentations must be in English.
MINISYMPOSIA AND CHAIRS (to be completed)
* Chaos - Algorithms - Complexity (I.A. Kunin)
* Mechatronics and structronics (A.K. Belyaev and H. Irschik)
* Multibody dynamics (E. Ivanova and H. Yoshimura)
* Nonlinear oscillations and stability (H. Troger and A.M. Krivtsov)
* Synchronization (I.I. Blekhman)
* Technological Plasticity (S.E. Alexandrov)
* Wave localization in continuous media (N.G. Kuznetsov, P. McIver,
M. McIver)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
ORGANIZATIONAL DETAILS AND IMPORTANT DATES
Details concerning abstract submission and other organizational matters will be available at our website and in subsequent announcements. If you wish to receive them, please send an empty e-mail with the subject "subscribe" to apm2002@eng.abdn.ac.uk
Deadlines:
* Abstract submission: January 15, 2002.
* Notification of acceptance: March 1, 2002.
* Submission of a visa form
(required to issue the invitation to Russia): March 31, 2002.
* Paper submission: June 27, 2002.
* Conference: June 27-July 6, 2002.
REGISTRATION FEE
The conference fee is $190 for foreign participants and covers the
invitation costs, postage, proceedings, social program, and
organizational costs. The fee for foreign accompanying persons is $60
and covers the invitation costs, postage, and social program. The
reduced conference fee of $15 for participants from Russia partially
covers the postage and proceedings.
CONFERENCE SITE
It is planned to hold the School in a suburb of St. Petersburg. The exact
location will be determined later. Please look at our website for
up-to-date information.
ST. PETERSBURG
In June St. Petersburg has nice warm weather, and the famous white nights can be experienced. St. Petersburg is called the "Venice of the North", since it is located among the numerous rivers and channels of the delta of the Neva river. St. Petersburg is in fact the "cultural capital" of Russia. The magnificent Hermitage and the Russian Museum give a flavour of the cultural image of the city, where numerous theatres and the famous Russian ballet can be found. For those more inclined towards nature, the Russian czars' residences, for example Pavlovsk, Peterhoff, and Pushkin, offer a pleasant visit; alternatively, one can take an unforgettable boat trip along the Neva river and the city channels.
=================================================================
== APM'2002 ==
== Institute of Problems of Mechanical Engineering ==
== of Russian Academy of Sciences ==
== Bolshoy pr. V.O., 61, St.Petersburg, 199178, Russia ==
== Phone: +7(812)-3214772; ==
== Fax : +7(812)-3214771 ==
== ==
== E-mail: apm2002@eng.abdn.ac.uk ==
=================================================================
____________________________________________________________