
Holographic Proofs. By Leonid A. Levin.

This is a version of an article in Encyclopaedia of Mathematics, Supplement II, Hazewinkel, M. (Ed.), Kluwer, 2000 (ISBN 0-7923-6114-8).

Holographic proof, transparent proof, instantly checkable proof, probabilistically checkable proof, PCP are names of a form in which every proof or record of computation can be presented. This form has a remarkable property: the presence of any errors (essential deviations from the form or other requirements) is instantly apparent after checking just a negligible fraction of bits of the proof. Traditional proofs are verifiable in time that is a constant power (say, quadratic) of the proof length $n$. Verifying holographic proofs takes a polylogarithmic, i.e., a constant power of the logarithm of $n$, number of bit operations. This is a tiny fraction: the binary logarithm of the number of atoms in the known Universe is under 300.

(Of the names in the header, the phrase "probabilistically checkable" is somewhat misleading since both holographic and traditional proofs can be checked either deterministically or probabilistically, though randomness does not speed up the checking of traditional proofs.)

The transformation of an ordinary proof into the holographic form takes a number of bit operations only slightly superlinear in $n$. There are, however, four caveats: First, the verification is a so-called Monte-Carlo algorithm. It makes certain random choices, and any single round of verification has a chance of overlooking essential errors due to an unlucky choice. This chance never exceeds 50%, regardless of the nature of errors, and vanishes as $1/2^k$ with $k$ independent rounds.
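For illustration, here is a toy spot-checker in Python (not the actual PCP verifier: the "claim" here is merely that a long string is all zeros, and each round probes a single random bit):

    import random

    # If a fraction d of the bits is wrong, one round misses with probability
    # 1 - d, so k independent rounds miss with probability (1 - d)**k.
    def check_once(proof: str) -> bool:
        return proof[random.randrange(len(proof))] == "0"

    def verify(proof: str, k: int) -> bool:
        return all(check_once(proof) for _ in range(k))

    bad = "0" * 950 + "1" * 50    # 5% of the bits are in error
    print(verify(bad, 100))       # False with probability 1 - 0.95**100 > 99%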

Second, only essential errors (i.e., not correctable from the context) have this probabilistic detection guarantee. There is a proofreader procedure, also running in polylogarithmic Monte-Carlo time, that confirms or corrects any given bit in any proof accepted by the verifier. Of course, the instant verification can only assure the success of this proofreading; it has no time to actually perform it for all bits. An enhanced version can be made very tolerant: it can guarantee that any errors affecting a small, say 5%, fraction of bits will be inessential, correctable by the proofreader and tolerated by the verifier with high probability.
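The flavor of such proofreading can be conveyed by the classical self-correction trick for linear (Hadamard-style) encodings; this simplified stand-in, assumed here only for illustration, is not the actual holographic format:

    import random

    # The table lists f(x) = <a,x> mod 2 for every n-bit x.  After a small
    # fraction of entries is corrupted, any single entry is still recoverable
    # with high probability from just two probes at random positions, since
    # linearity gives f(x) = f(x XOR r) XOR f(r) for any mask r.
    n = 10
    a = random.getrandbits(n)

    def f(x: int) -> int:
        return bin(a & x).count("1") % 2          # inner product <a,x> mod 2

    table = {x: f(x) for x in range(2 ** n)}
    for x in random.sample(range(2 ** n), 2 ** n // 20):   # corrupt 5%
        table[x] ^= 1

    def proofread(x: int) -> int:
        r = random.randrange(2 ** n)              # each probe errs w.p. <= 5%,
        return table[x ^ r] ^ table[r]            # so the answer is right w.p.
                                                  # >= 90% (amplifiable)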

The third caveat is trivial but often overlooked. The claim must be formal and self-contained: one cannot just write a mathematical theorem in English. The translation to a formal system, e.g., Zermelo-Fraenkel set theory, may be quite involved. Suppose one wants to state the Jordan curve theorem that every simple closed curve breaks the plane into two disconnected parts. One must define a number of concepts from advanced calculus just to explain what a curve is. This requires some background from topology, algebra, geometry, etc. One must add some set theory to formalize this background. Throw in some necessary logic, parsing, and syntax procedures, and one obtains a book instead of the above informal one-line formulation.

Fourth, the claim which the proof is to support (or the input/output, the matching of which the computation is to confirm) also must be given in error-correcting form. Otherwise one could supply a remarkable claim with a perfect proof of its useless (obtained by altering one random bit) variation. Were the claim given in a plain format, such tiny but devastating discrepancies could not be noticed without reading a significant fraction of it, for which the instant verifier has no time. The error-correcting form does not have to be special: Any classical (e.g., Reed-Solomon) code, correcting a constant fraction of errors in nearly linear time, will do. Then the verifier confirms that the code is within the required distance of a unique codeword encoding the claim supported by the given perfect (when filtered through the proofreader) proof.
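For instance, a minimal Reed-Solomon-style encoding over a prime field (the parameters are illustrative; the construction only needs some such code with nearly-linear-time decoding, which is omitted here):

    # The claim's bytes become coefficients of a polynomial over F_257; the
    # codeword lists its values at n distinct points.  Distinct polynomials of
    # degree < k agree on fewer than k points, so up to (n - k)//2 corrupted
    # values leave the claim uniquely recoverable.
    P = 257                                   # a prime just above byte values

    def rs_encode(msg: bytes, n: int) -> list:
        return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
                for x in range(n)]

    codeword = rs_encode(b"claim", 16)        # k = 5 coefficients, n = 16 points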

Despite these caveats the result is surprising. Some known mathematical proofs are so huge that no single human has been able to verify them. Examples are the four color theorem (verified with the aid of a computer, see [AHK]), the classification of simple finite groups (broken into many pieces, each supposedly verified by one member of a large group of researchers, see [Go]), and others. Even more problematic seems the verification of large computations. In holographic form, however, the verification time barely grows at all, even if the proof fills up the whole Universe.

Some Details

Transforming arbitrary proofs into a holographic form starts with reducing an arbitrary proof system to a standard (not yet holographic) one: the domino pattern. A domino is a directed graph of two nodes colored with a fixed (independent of the proof size) set of colors. The nodes are renamed 1 and 2, so that the domino is the same wherever it appears in the graph. The domino problem is to restore a coloring of a graph, given the coloring of its first segment and a set of dominos appearing in this graph.
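In code, checking a completed coloring against a domino set is elementary (the names below are illustrative):

    # A domino is the ordered pair of colors at the endpoints of a directed
    # edge.  A coloring solves the domino problem iff it extends the given
    # first segment and every edge shows an allowed domino.
    def dominoes_ok(edges, coloring, allowed, initial) -> bool:
        if any(coloring[v] != c for v, c in initial.items()):
            return False
        return all((coloring[u], coloring[v]) in allowed for u, v in edges)

    print(dominoes_ok(edges=[(0, 1), (1, 2)],
                      coloring={0: "a", 1: "b", 2: "a"},
                      allowed={("a", "b"), ("b", "a")},
                      initial={0: "a"}))      # True

The difficulty, of course, lies not in this check but in restoring the coloring, which encodes an entire computation.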

The graphs are taken from a standard family: only sizes and colorings can differ. An example is the family of shuffle-exchange graphs. Their nodes are enumerated by binary strings (addresses) $x$. The neighbors of $x$ are obtained by simple manipulations of its bits: adding 1 to the first (or last) bit, or shifting all bits left. These operations (or their constant combinations) define the edges of the graph. The graphs are finite: the length of $x$ is fixed for each of them. In the actual construction, $x$ is broken into several variables, so it is convenient to shuffle bits just within the first variable and permute variables (cyclic shifts of all variables and of the first two suffice). These graphs may be replaced by any other family with edges expressed as linear transformations of variables, as long as it has sufficient connectivity to implement an efficient sorting network.
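One common variant of these neighbor maps on $n$-bit addresses can be sketched as follows (the actual construction may combine the operations differently):

    # Neighbors of node x in an n-bit shuffle-exchange-style graph: flip the
    # last bit, flip the first bit, or cyclically shift all bits left.
    def neighbors(x: int, n: int) -> list:
        mask = (1 << n) - 1
        return [x ^ 1,                               # "add 1" to the last bit
                x ^ (1 << (n - 1)),                  # "add 1" to the first bit
                ((x << 1) | (x >> (n - 1))) & mask]  # cyclic left shift

    print(neighbors(0b1011, 4))                      # [10, 3, 7]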

A proof system is an algorithm that verifies a proof (given as input) and outputs the proven statement. Such an algorithm can be efficiently simulated, first on a special form of a random access machine and then on a sorting network. This allows one to reduce the problem of finding a proof in any particular proof system to the above standard domino problem.

Then comes the arithmetization stage. The coloring is a function from nodes of the graph to colors. Nodes are interpreted as elements of a field (a finite field or a segment of the field of rationals) and the coloring is a polynomial on it. This function is then extended from its original domain to a larger one (a larger field or a segment of rationals). The extension is done using the same expression, i.e., without increasing the degree of the coloring polynomial.
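A minimal sketch of such an extension in the simplest, multilinear case (the small prime field is an illustrative choice):

    # A coloring given on {0,1}^k extends to all of (F_P)^k by the same
    # multilinear expression (degree at most 1 in each variable), so the
    # extension does not raise the degree.
    P = 97

    def extend(coloring: dict, point: tuple) -> int:
        total = 0
        for cube, value in coloring.items():         # Lagrange basis on {0,1}^k
            term = value
            for b, t in zip(cube, point):
                term = term * (t if b else 1 - t) % P
            total = (total + term) % P
        return total

    c = {(0, 0): 3, (0, 1): 1, (1, 0): 4, (1, 1): 5}
    assert extend(c, (1, 0)) == 4          # agrees on the original domain
    print(extend(c, (7, 42)))              # a value at an extension point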

The condition that all dominos are restricted to the given types is also expressed as the equality to $0$ of a low-degree polynomial $P(x)$ of a node $x=x_1,...,x_k$, its neighbors, and their colors. Over the rationals, one needs to check that the values $P^2(x)$ sum to $0$ as the $x_i$ vary over the original domain. (In finite fields, constructing the equation to check is trickier.) A transparent proof is the collection of values of this expression, where the summation is taken only over the first several variables; the remaining variables range over all values of the extended domain.
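A toy instance over the rationals of how these partial sums are tabulated (the polynomial is chosen only to make the claim true):

    # partial_sum sums Q over its first j variables on {0,1}, fixing the
    # remaining variables at 'rest', which may take extended-domain values.
    def partial_sum(Q, j: int, rest: tuple):
        if j == 0:
            return Q(*rest)
        return sum(partial_sum(Q, j - 1, (v,) + rest) for v in (0, 1))

    Q = lambda x, y, z: ((x - y) * z * (1 - z)) ** 2  # P^2, P vanishing on {0,1}^3
    assert partial_sum(Q, 3, ()) == 0                 # the claim being proved
    print(partial_sum(Q, 1, (3, 5)))                  # a proof entry: x summed out,
                                                      # (y, z) extended; prints 5200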

The verification consists of statistically checking that all partial sums (allowing a small fraction of deviating points) are polynomials of degree low relative to the size of the extended domain. Then the consistency of each sum with its successor (which has one more variable summed over) is checked as well. This is easy to do, since low-degree polynomials cannot differ in only a small fraction of points: e.g., two different straight lines must differ in all points but one. Of course, all parameters must be finely tuned and many other details addressed; the above description is just a general sketch.
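The distance property behind these checks, in its simplest (degree 1) case:

    # Two distinct degree-d polynomials agree on at most d points, so over a
    # domain of size m a random probe exposes a substitution with probability
    # at least 1 - d/m.  For two distinct lines, d = 1:
    f = lambda x: 2 * x + 1
    g = lambda x: 5 * x - 2
    assert [x for x in range(100) if f(x) == g(x)] == [1]   # one point out of 100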

History

Holographic proofs came as a result of a number of previous major advances. Error-correcting codes based on low-degree polynomials and their Fourier transforms have been a major area of research since the 1950s. Surprisingly, these techniques were not used in general computer theory until the mid-1980s; one of the first such uses was for generating pseudo-random strings in [L]. A closer set of preceding results was a number of remarkable papers on the relations between high-level complexity classes associated with interactive proof systems. This exciting development was initiated by Noam Nisan in a 1989 electronically distributed article. It was quickly followed by improved theorems, which contained powerful techniques used later in the construction of holographic proofs. [LFKN], [Shamir], and, especially, [BFL] were the most relevant papers. [BFLS] introduced holographic proofs (called transparent proofs there). Another interesting application of similar techniques, due to [FGLSS], is in proving NP-completeness of approximation problems. These results were significantly extended in a number of subsequent papers.

References