
Randomness and Nondeterminism

by Leonid A. Levin

This is an HTML version of an article in Proc., International Congress of Mathematicians (v.2, Invited 45 Minute Addresses), Zurich, 1994. Birkhauser Verlag, Basel, 1995. I also added here some subsequent pointers.

Exponentiation makes the difference between the bit size of this line and the number ($<2^{300}$) of atoms in the known universe. The expulsion of exponential time algorithms from computer theory in the 1960s created a deep gap between deterministic computation and -- formerly its unremarkable tools -- randomness and nondeterminism. These two "freedoms" of computation preserved their reputation as some of the most mysterious phenomena in science and seem to play an ever more noticeable role in computer theory. We have learned little in the past decades about the power of either, but a vague pattern is emerging in their relationships.

A nondeterministic task is to invert an easily computable function. It is widely believed to require, in general, exponential computing time. Great efforts, however, have failed to prove the existence of such one-way (easy to compute, infeasible to invert) functions (owf). Randomness, too, preserves its mysterious reputation: it now seems that it can be generated deterministically. A simple program can transform a short random "seed" into an unlimited array of random bits. As long as the seed is not disclosed, no computational power that may fit in the whole universe can distinguish this array from a truly random one. This conjecture has been proven equivalent to the existence of owf. I will survey a number of papers that have resulted in this theorem.

Although fundamental in many areas of science, randomness is really "native" to computer science. Its computational nature was clarified by Kolmogorov. He and his followers (see survey [Kolmogorov Uspenskii 1987]) built in the 1960s--1970s the first successful theory of random objects, defined roughly as those that cannot be computed from short descriptions. Kolmogorov also suggested in the 1960s that randomness may have an important relationship with nondeterminism; namely, that the task of finding a "non-randomness" witness (i.e. short fast program generating a given string) may be a good candidate to prove that exhaustive search cannot be avoided (in today's terms, P$<$NP).

The next step came from the cryptography of the 1980s. Many applications require only a few properties of random strings. So, they do not care about the fundamental issues of randomness, as long as some laws (e.g., of large numbers) hold. Cryptographers, in contrast, need to prevent the adversary from taking advantage of any regularities in their random strings, no matter how peculiar, making "perfect" randomness an important practical concern. [Blum Micali] and [Yao] (see some further details also in [Levin 87]) used the idea of a hard core or hidden bit:

Suppose that from a length-preserving one-way function $f(x)$ it is hard to compute not only $x$ but even one bit $b(x)$ of it (easily computable from $x$). Assume that even guessing $b(x)$ with any noticeable correlation is infeasible. If $f$ is bijective (i.e. one-to-one), $f(x)$ and $b(x)$ are both random and appear independent to any feasible test, thus increasing the initial amount $n=|x|$ of randomness by one bit. Then a short random seed $x$ can be transformed into an arbitrarily long string $b(f^{(i)}(x)), i=1,2,\ldots$, which passes any feasible randomness test. [Blum Micali] established such a hard-core $b$ assuming a particular function (discrete log) to be one-way. But if its inversion turns out to be feasible, all is lost. [Yao] uses a more general one-way candidate. It is less likely to be technically in P, yet it is easy to invert for reasonable $|x|$ (it breaks the input into small pieces and applies an owf to each of them).
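The iteration $b(f^{(i)}(x))$ can be sketched in a few lines. The sketch below uses discrete exponentiation modulo a 31-bit prime as the permutation $f$ and a most-significant-bit test as the predicate $b$, echoing the [Blum Micali] candidate; at this toy size nothing is actually one-way, so the choice of modulus, generator, and predicate is purely illustrative, not a secure instantiation.

```python
# Toy sketch of the hard-core-bit generator b(f(x)), b(f(f(x))), ...
# The permutation f and predicate b below are illustrative stand-ins:
# with a 31-bit modulus the discrete log is easy, so this is NOT secure.

P = 2_147_483_647  # the Mersenne prime 2^31 - 1 (toy modulus)
G = 7              # a primitive root mod P, so f is a bijection on {1,...,P-1}

def f(x: int) -> int:
    """Toy 'one-way' permutation: discrete exponentiation mod P."""
    return pow(G, x, P)

def b(x: int) -> int:
    """Toy hard-core predicate: the 'upper half' (most significant bit) test."""
    return 1 if x > (P - 1) // 2 else 0

def generate_bits(seed: int, m: int) -> list[int]:
    """Stretch a secret seed x into m output bits b(f^(i)(x)), i = 1..m."""
    x, bits = seed, []
    for _ in range(m):
        x = f(x)           # iterate the permutation; x itself stays secret
        bits.append(b(x))  # publish only the hidden bit
    return bits
```

Only the bits are published; distinguishing them from truly random bits would require guessing $b$ from $f(x)$ with noticeable correlation, which the hard-core assumption forbids.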

[Levin 87] conjectured, and [Goldreich Levin] and [Levin 93] proved, that the boolean inner product $b(x,r)=(x_1r_1+\cdots+x_nr_n\bmod 2)$ provides every owf $f$ with a hidden bit $b$ of the same security. Actually, the (very efficient) construction used in [Levin 93] is quite simple and can be understood by non-experts. It yields "perfect" pseudo-random generators from any one-way bijection. (A lucid account of the technical details can be found in [Knuth].) The bijection requirement was lifted in [Hastad Impagliazzo Levin Luby], which showed that the possibility of deterministic generation of pseudo-randomness is exactly equivalent to the existence of owf. Thus, Kolmogorov's intuition that nondeterministic phenomena are intimately related to randomness proved accurate.
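The inner-product bit itself is trivial to compute. In the sketch below, bit strings $x$ and $r$ are modeled as Python integers, so $b(x,r)$ is the parity of the bitwise AND; the point of the theorem is that, for any owf $f$ and a public random $r$, predicting $b(x,r)$ from $(f(x),r)$ with noticeable advantage would allow inverting $f$.

```python
# The Goldreich-Levin hidden bit: b(x, r) = x_1 r_1 + ... + x_n r_n (mod 2).
# Bit strings are modeled as nonnegative integers; bit i of the integer
# plays the role of x_i, so the inner product mod 2 is the parity of x AND r.

def inner_product_bit(x: int, r: int) -> int:
    """b(x, r): parity of the bitwise AND of x and r."""
    return bin(x & r).count("1") % 2
```

For example, $x=101_2$ and $r=100_2$ share one set bit, so $b(x,r)=1$, while $x=1011_2$ and $r=1110_2$ share two, giving $b(x,r)=0$.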