Note: LaTeX formulas below require JavaScript enabled for this page, loading https://cdn.jsdelivr.net and https://www.cs.bu.edu/fac/lnd/mjx.js

Algorithmic Information Theory

Kolmogorov's information theory applies to individual objects, in contrast to Shannon's theory, which applies to models of the processes that generate such objects. It thus has a much wider domain, since the generation of many objects (e.g., Shakespeare's plays) has no realistic model. Of course, just having full information is not enough to construct the object with reasonable effort. Yet purely informational bounds are still very informative and often more transparent than more specialized barriers.

Solomonoff and Kolmogorov made a key observation: Turing's universal algorithm allows an optimal definition of complexity - the bit-length of the shortest description of an object, independent, up to an additive constant, of the programming language. Kolmogorov used it to define the concepts of mutual information and randomness. He and I proved the first technically involved result there, that mutual information is symmetric (like in Shannon's case, but only approximately): announced in Kolmogorov's 1967 MMS talk, with the proof in [Zvonkin, Levin 70].
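In now-standard notation (a sketch for orientation, not the papers' exact formulation), with $ K(x) $ the length of a shortest description of $ x $ and $ K(y|x) $ its conditional version, the symmetry result says
$$ K(x,y) \;=\; K(x) + K(y\,|\,x) + O(\log K(x,y)), $$
so the mutual information $ I(x:y) = K(x)+K(y)-K(x,y) $ agrees with $ I(y:x) $ up to logarithmic terms.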

Solomonoff used the universal algorithm for the notion of an a priori probability. His approach had technical difficulties, but was very important conceptually. The difficulty was in using measures (having linear normal functionals as expectations): this did not match the behavior of algorithms that may diverge. I took a different approach -- enumerable semimeasures (with concave expectations, so, e.g., $ p(x) $ could exceed $ p(x0)+p(x1) $). Among those there is a universal one, M (largest up to O(1) factors); it worked correctly, producing a solid theory ([Zvonkin, Levin 70]). Its logarithm (for integers, equal to the length of the shortest prefixless code) turned out to be a better measure of complexity: it satisfies many desirable properties exactly, where the original definitions had logarithmic discrepancies.
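In formulas (a brief sketch in common notation rather than that of the original papers): a semimeasure $ p $ on binary strings need only satisfy
$$ p(\Lambda) \le 1, \qquad p(x) \;\ge\; p(x0) + p(x1), $$
and the universal enumerable semimeasure $ M $ dominates every enumerable semimeasure $ p $: $ M(x) \ge c_p\, p(x) $ for some constant $ c_p > 0 $. In the discrete setting (semimeasures on integers, with $ \sum_x p(x) \le 1 $), the coding theorem identifies $ -\log M(x) $ with the prefixless-code complexity $ K(x) $ up to an additive O(1).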

While fundamental in many areas of Science, randomness is really "native" to Computer Science. The computational nature of randomness was clarified by Kolmogorov, though his complexity characterizations worked only for finite domains. [Levin 73a] used M to characterize randomness in general (similar results were independently obtained in [Schnorr 73]). [Kolmogorov Uspensky 86] gives detailed explanations.
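One standard way to state the criterion (the cited papers use their own variants): an infinite binary sequence $ \omega $ is random with respect to a computable measure $ \mu $ exactly when M gains only a bounded factor over $ \mu $ along its prefixes:
$$ \omega \text{ is random w.r.t. } \mu \iff \sup_n \frac{M(\omega_{1:n})}{\mu(\omega_{1:n})} \;<\; \infty. $$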

I used M to improve the definition of mutual information, and, over the years, proved a number of other basic results in these areas. For instance, [Levin 76a] extended randomness tests to non-computable distributions. This is technically involved, but allows one to include as special cases other important concepts such as mutual information. [Levin 76b] gave a uniform description of a considerable variety of complexity measures found by then, and of all other possible variations. [Levin 84] gives a (very technical) account of many results.

One of my recurrent ideas was to define and use the concept of independence of sequences (somewhat orthogonal to the concepts of Turing reducibility and equivalence). It means they have finite (or, in the case of finite strings, small) mutual information. I give an elaborate definition of mutual information (tricky for infinite sequences and their finite prefixes) and discuss a modification of the Church-Turing thesis, called the Independence Postulate. It states that only sequences independent of those defined mathematically can exist in the physical world. (Recursive sequences can be both physical and mathematical, but they are independent of themselves: they have little self-information.)
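For finite strings $ I(x:y)=K(x)+K(y)-K(x,y) $ suffices. For infinite sequences, one way to set up the extension (a sketch of the general form; exact details vary between versions) aggregates the information between all finite objects extracted from each sequence:
$$ I(\alpha:\beta) \;=\; \log \sum_{x,y\in\mathbb{N}} 2^{\,I(x:y) - K(x\,|\,\alpha) - K(y\,|\,\beta)}. $$
Independence of $ \alpha $ and $ \beta $ then means that $ I(\alpha:\beta) $ is finite.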

This postulate allows one to dismiss a great many possible mathematical complications as prohibited from appearing in the physical world. I give a number of applications to the foundations of probability theory, the theory of algorithms, logic, data extrapolation, etc. Thus [Levin 74], [Levin 84] resolve some tricky foundational issues of Probability by showing that a set of sequences has measure zero if and only if it consists of low-complexity sequences and sequences with infinite information about a single common sequence.

Another example [JACM 60(2) 2013] applies it to dismiss Goedel's belief that, despite his famous theorem, any arithmetic question can be resolved by an unending informal process of figuring out the right axioms and adding them consistently. I prove that any set of arithmetic axioms, for all n, either (1) is inconsistent, or (2) leaves some n-bit statements unresolved, or (3) has nearly n bits of information about the Halting Problem. (3) is prohibited by the Independence Postulate: such objects thus cannot be produced, whatever formal or informal means are used.
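A rough formal paraphrase (not the paper's exact statement): if a consistent set of arithmetic axioms T proves or refutes every statement of length at most n, then its mutual information with the Halting Problem sequence H is nearly n, i.e. up to logarithmic terms
$$ I(T:H) \;\ge\; n - O(\log n). $$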
