Hardness of Search Problems

Many combinatorial (search, or NP) problems have easy-to-verify solutions. Examples: factoring, finding short proofs of mathematical theorems, or finding short fast programs that generate a given string. Each can be stated as the task of inverting a given easy-to-compute function (multiplication, extraction of a theorem from its proof, etc.). In the 1950s and 60s there was a long-running controversy in the USSR about the complexity of such problems. A powerful science official, Sergei Yablonsky, claimed that his 1959 theorem showed the exponential complexity of some such problems. Kolmogorov disputed his claim, sometimes in sharp terms.
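
To make the inversion view concrete, here is a minimal sketch (my illustration only; the names verify/invert are invented for it). Checking a proposed factorization takes one multiplication, while the trial-division inverter below takes time exponential in the bit length of n, and no worst-case fast inverter is known:

    # Verifying a proposed solution is easy: multiply and compare.
    def verify(n, p, q):
        return 1 < p <= q and p * q == n

    # Inverting multiplication (factoring) by trial division;
    # time grows exponentially in the bit length of n.
    def invert(n):
        p = 2
        while p * p <= n:
            if n % p == 0:
                return p, n // p      # a nontrivial factorization
            p += 1
        return None                   # n is prime: nothing to invert

    assert verify(91, *invert(91))    # 91 = 7 * 13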

I was then very excited by Kolmogorov's 1965 use of the universal algorithm for an optimal approach to such concepts as complexity, randomness, and information. I noticed that the same ideas yield a fastest (up to a constant factor) algorithm for any search problem. To sound less abstract, I stated it for a specific problem (now called Tiling) and noted that this implies optimal algorithms for all search problems, since they all reduce to Tiling (as it directly represents any TM computation). Kolmogorov was not impressed with the optimal algorithm (seeing it as too abstract and straightforward to be publishable) but was much more interested in my remark about Tiling's universality.
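
The scheme behind that optimal algorithm is simple enough to sketch. Below is a toy Python rendition (my illustration, not the 1972 formulation): each candidate "program" is modeled as a generator, the i-th one gets roughly a 2^-(i+1) share of the total running time, and every output is checked by the easy verifier. The program that solves the instance fastest is thus delayed only by the constant factor its share dictates.

    import itertools

    def universal_search(verify, programs):
        # Dovetail over an infinite stream of candidate programs:
        # in phase k, program i runs for 2**(k - i - 1) steps, so
        # program i gets about a 2**-(i+1) fraction of all steps.
        running = []
        source = iter(programs)
        for phase in itertools.count(1):
            if len(running) < phase:            # admit one program per phase
                running.append(next(source))
            for i, prog in enumerate(running):
                for _ in range(2 ** (phase - i - 1)):
                    try:
                        answer = next(prog)     # one "step": one candidate
                    except StopIteration:
                        break                   # this program halted
                    if verify(answer):          # solutions are easy to check
                        return answer
        # loops forever if no program ever produces a valid solution

    # Toy demo: invert squaring, i.e. search for x with x*x == 289.
    def guesser(start):                         # "program" i counts up from i
        x = start
        while True:
            yield x
            x += 1

    answer = universal_search(lambda x: x * x == 289,
                              (guesser(i) for i in itertools.count()))
    assert answer == 17

In the real construction the stream ranges over all programs of a universal machine, with shares decaying exponentially in program length; that is where the constant factor (about 2^length of the fastest solver) comes from.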

Later I got into a dispute with Yuri Alberton, my roommate in our undergraduate dorm. He was skeptical about the usefulness of Theory, and gave combinatorial optimization methods as examples of real issues. I argued that these algorithms are just heuristics, and that some combinatorial problems, like Tiling, are very likely unsolvable. He dismissed Tiling as a toy problem, but I said one could try to show this for other problems by reducing Tiling to them. I proceeded to develop several such reductions. Kolmogorov found this very interesting and wanted me to publish it. I, however, viewed these problems as of narrow interest; I mentioned the reductions in talks, but thought that only extending them to some popular problems, like graph isomorphism (I looked into the constant-degree case, now known to be in P), circuit minimization, or (with less hope) factoring, would be worth publishing. I tried for a long time, but eventually concluded that it would take somebody smarter than myself.

Finally, I resigned myself to a deal with Kolmogorov: he agreed to support my submission of the optimal-algorithm result on the condition that I also include the universality of several search problems. (I had serious problems with the authorities then and feared that I might soon be banned from publishing, so I submitted several papers that spring of 1972.) This work might have stayed in obscurity forever, but I later learned that fortune smiled on it in the form of the US theorists Stephen Cook and Richard Karp. Cook's paper with similar results (but much better written) was very influential in the West, though it appeared in the proceedings of STOC, a new conference then unknown and inaccessible in the USSR. The area became a big hit after Karp found reductions to many interesting problems. (See also my STOC-21 talk on inversion problems.)

A common misinterpretation of these results was that all NP-complete problems are hard, with no chance for good algorithms. On this basis some such problems generated much hope in cryptography: the adversary would be helpless. Karp and others noticed that this was naive. While the worst instances of NP-complete problems defeat our algorithms, such instances may be extremely rare. In fact, fast-on-average algorithms were found for a great many NP-complete problems. If all NP problems are easy on average, the P=?NP question becomes quite academic. Even if exponentially hard instances exist, those we could ever find might all be easy. Some problems (like factoring) seem hard on typical instances, but nothing at all is proven to support this belief (crucial, e.g., for cryptography). These issues turned out to be subtle, and it was not clear how a theory could distinguish problems that are intrinsically hard on average. [Levin 86], [Venkatesan, Levin STOC-1988, CPC 4/2/2018], and [Impagliazzo, Levin FOCS-90] proposed such a theory, with the first average-case intractability results. Random (under uniform distribution) instances of some problems are now known to be as hard as random instances of any NP problem under any samplable distribution. A number of subsequent papers extended this area substantially. See expositions in [Johnson 84], [Gurevich 85], and at www.cs.uml.edu/~wang/acc-forum/avg/ .
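
Part of the subtlety is that even making "easy on average" precise is delicate: requiring polynomial expected time is not robust, since squaring a running time (a benign change of machine model) can turn a polynomial expectation into an exponential one. For reference, a compressed form of the definitions from [Levin 86], stated here for the simplified case of one-to-one reductions (expositions such as [Gurevich 85] give the general version):

    % A distributional problem is a pair (L, \mu): a language together
    % with a probability density \mu' on instances.
    % A running time t is polynomial on \mu-average iff
    \exists\, \varepsilon > 0 :\quad
      \sum_{x} \mu'(x)\, \frac{t(x)^{\varepsilon}}{|x|} \;<\; \infty

    % A reduction f from (L_1,\mu_1) to (L_2,\mu_2) must also "dominate"
    % the distribution, i.e. not hide likely instances in unlikely images:
    \exists\, \text{polynomial } p :\quad
      \mu_2'(f(x)) \;\ge\; \frac{\mu_1'(x)}{p(|x|)} \quad\text{for all } x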