Note: LaTeX formulas below require JavaScript enabled for this page,
for https://cdn.jsdelivr.net and https://www.cs.bu.edu/fac/lnd/mjx.js
Polynomial Time and Extravagant Models.
(By Leonid A. Levin)
Below is Section 2 of my article The Tale of One-Way Functions
in Probl.Inf.Transm. 39(1), 2003.
2.1. The Downfall of RSA
This development [RSA and other applications of one-way functions] was
all the more remarkable as the very existence of one-way (i.e., easy to
compute, infeasible to invert) functions remains unproven and subject to
repeated assaults. The first came from Shamir himself, one of the
inventors of the RSA system. He proved in [Inf.Process.Lett. 8(1) 1979]
that factoring (on the infeasibility of which RSA depends) can be done in
a polynomial number of arithmetic operations. This result uses a so-called
"unit-cost model," which charges one unit for each arithmetic
operation, however long the operands. Squaring a number doubles its
length; repeated squaring quickly brings it to cosmological sizes.
Embedding a huge array of ordinary numbers into such a long one allows
one arithmetic step to do much work, e.g., to check exponentially many
factor candidates. The closed-minded cryptographers, however, were not
convinced, and this result brought about the dismissal of the unit-cost model.
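To see what the unit-cost charge hides, here is a small Python sketch (the starting value and the packing width are arbitrary, purely illustrative choices):

    # Repeated squaring: each step is "one operation", but the operand's
    # bit length doubles, so a few hundred steps yield numbers whose mere
    # bit count exceeds the ~10^80 atoms of the observable Universe.
    x = 3
    for step in range(1, 11):
        x = x * x
        print(step, x.bit_length())          # 4, 7, 13, 26, ... roughly doubling

    # Packing: one addition on such a long number acts on many short numbers
    # at once, e.g., on many factor candidates in parallel.
    B = 2**64                                # field width, assumed wide enough
    a = sum(v * B**i for i, v in enumerate([1, 2, 3, 4]))
    b = sum(v * B**i for i, v in enumerate([10, 20, 30, 40]))
    c = a + b                                # one "unit cost" step, four additions
    print([(c // B**i) % B for i in range(4)])   # [11, 22, 33, 44]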
Another, not dissimilar, attack is raging this very moment.
It started with the brilliant result of Peter Shor [SIComp, 26(5) 1997].
He factors integers in polynomial time using an imaginary analog device,
the Quantum Computer (QC), inspired by the laws of quantum physics taken to
their formal extreme.
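For reference, the classical half of Shor's procedure is an ordinary reduction of factoring to order finding; below is a toy Python sketch in which brute force stands in for the quantum order-finding step (the function names are mine, and only toy-sized inputs are feasible):

    from math import gcd
    from random import randrange

    def order(a, N):
        # Multiplicative order of a modulo N: the step the QC is supposed to
        # perform in polynomial time; brute force here, so toy N only.
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def split(N):
        # Classical reduction: a random a with even order r and a^(r/2) != -1
        # (mod N) yields a nontrivial factor gcd(a^(r/2) - 1, N).
        while True:
            a = randrange(2, N)
            if gcd(a, N) > 1:
                return gcd(a, N)             # lucky: a already shares a factor
            r = order(a, N)
            if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
                return gcd(pow(a, r // 2, N) - 1, N)

    print(split(1155))                       # 1155 = 3 * 5 * 7 * 11; prints a factor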
2.2. Quantum Computers
QC has \(n\) interacting elements, called q-bits. A pure state of
each is a unit vector on the complex plane \(C^2\). Its two
components are quantum amplitudes of its two Boolean values. A state of
the entire machine is a vector in the tensor product of the \(n\)
planes. Its \(2^n\) coordinate vectors are
tensor products of q-bit basis states, one for each \(n\)-bit
combination. The machine is cooled, isolated from the environment nearly
perfectly, and initialized in one of its basis states representing the
input and empty memory bits. The computation is arranged as a sequence
of perfectly reversible interactions of the q-bits, putting their
combination in the superposition of a rapidly increasing number of basis
states, each having an exponentially small amplitude. The environment
may intervene with errors; the computation is done in an
error-correcting way, immune to such errors as long as they are few and
of special restricted forms. Otherwise, the equations of Quantum
Mechanics are obeyed with unlimited precision. This is crucial since the
amplitudes are exponentially small and deviations in remote (hundredth
or even much further) decimal places would overwhelm the content
completely. Peter Shor shows such computers capable of factoring in
polynomial time. The exponentially many coordinates of their states can,
using a rough analogy, explore one potential solution each and
concentrate the amplitudes in the one that works.
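The state space just described, in a minimal numpy sketch (n = 10 and the all-Hadamard layer of gates are only illustrative choices):

    import numpy as np
    from functools import reduce

    n = 10                                        # number of q-bits
    zero = np.array([1.0, 0.0])                   # one q-bit basis state in C^2
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # a reversible (unitary) gate

    state = reduce(np.kron, [zero] * n)           # machine state: a 2^n-vector,
                                                  # initialized in a basis state
    state = reduce(np.kron, [H] * n) @ state      # one layer of interactions

    print(state.shape)    # (1024,): 2^n coordinates, one per n-bit combination
    print(state[0])       # each amplitude is 2^(-n/2) ~ 0.031; for n ~ 1000 it
                          # would be 2^(-500), which no experiment could resolve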
2.3. Small Difficulties
There are many problems with such QCs. For instance, thermal isolation
cannot be perfect. Tiny backgrounds of neutrinos, gravitational waves,
and other exotics cannot be shielded. Their effects on quantum
amplitudes need not satisfy the restrictions on which error-correcting
tools depend. Moreover, nondissipative computing gates, even classical,
remain a speculation. Decades past, their existence was cheerfully
proclaimed and even proved for worlds where the laws of physical
interaction can be custom-designed. In our world, where the
electromagnetic interaction between electrons, nuclei, and photons is
about the only one readily available, circuits producing less entropy
than computing remain hypothetical. So, low temperatures have limits and
even a tiny amount of heat can cause severe decoherence problems.
Furthermore, the uncontrollable degrees of freedom need not behave
simply as heat. Interaction with the intricately correlated q-bits may
put them in devilish states capable of conspiracies which defy imagination.
2.4. Remote Decimals
All such problems, however, are peanuts. The major problem is the
requirement that basic quantum equations hold to multi-hundredth if not
millionth decimal positions where the significant digits of the relevant
quantum amplitudes reside. We have never seen a physical law valid to
over a dozen decimals. Typically, every few new decimal places require
major rethinking of most basic concepts. Are quantum amplitudes still
complex numbers to such accuracies or do they become quaternions,
colored graphs, or sick-humored gremlins? I suspect physicists would
doubt even the laws of arithmetic pushed that far. In fact, we know that
the most basic laws cannot all be correct to hundreds of decimals: this
is where they stop being consistent with each other!
And what is the physical meaning of 500-digit-long numbers? What
could one possibly mean by saying "This box has a remarkable property:
its q-bits contain the Ten Commandments with the amplitude whose first
500 decimal places end with 666"? What physical interpretation could
this statement have even for just this one amplitude?
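For a rough sense of scale (assuming, only for illustration, a near-uniform superposition over \(2^n\) basis states):

    from math import log10

    for n in (1_000, 1_000_000):                 # key lengths mentioned in Sec. 2.5
        digits = (n / 2) * log10(2)              # a uniform amplitude is 2^(-n/2)
        print(f"n = {n}: amplitude ~ 10^-{digits:.0f}")
    # n = 1000:    ~ 10^-151    (significant digits past the 150th decimal place)
    # n = 1000000: ~ 10^-150515 (past the hundred-thousandth decimal place)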
[Remark (not in the article):
Subtleties and paradoxes of probability lead to widespread confusions.
I heard questions about why similar issues do not plague tiny probabilities.
Exponentially small probabilities themselves do not have physical meaning.
Their logarithms, i.e., entropies, do. Entropies are falsifiable,
e.g., by finding, for an experimental result, a code with significantly
fewer bits than its entropy under the predicted distribution.
But this link with the (observable) complexity, analyzed by Kolmogorov,
only works because probabilities are always positive (or 0). Quantum
amplitudes have no such property, so they cannot carry this meaning.]
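A toy sketch of the falsifiability criterion in the remark above (the fair-coin prediction, the biased source, and the two-part code are all just an illustrative setup):

    import random
    from math import log2, comb

    random.seed(0)
    N = 10_000
    predicted_entropy = N                    # fair-coin model: N bits for N tosses
    x = [1 if random.random() < 0.1 else 0 for _ in range(N)]   # actual source: biased

    k = sum(x)
    # Two-part code: send k (~ log2(N+1) bits), then the index of x among all
    # N-bit strings with exactly k ones (log2 C(N,k) bits).
    code_len = log2(N + 1) + log2(comb(N, k))

    print(f"{code_len:.0f}-bit code vs {predicted_entropy} bits predicted")
    # Under the predicted distribution, an outcome compressible by d bits has
    # probability about 2^-d, so a gap of thousands of bits refutes the prediction.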
Close to the tensor product basis, one might have opportunities to
restate the assertions using several short measurable numbers instead of
one long one. Such opportunities may also exist for large systems, such as
lasers or condensates, where individual states matter little. But QC
factoring uses amplitudes of an exponential number of highly individualized
basis states. I doubt anything short of the most generic and direct use of
these huge precisions would be easy to substitute. One can make the
amplitudes more "physical" by choosing a less physical basis. Let us look at this more closely.
2.5. Too Small Universe
QC proponents often say they win either way, by making a working QC or
by finding a correction to Quantum Mechanics; e.g., in a 2000/03/22
sci.physics.research Usenet article "Re: When will quantum computers
become practical?" https://groups.google.com/group/sci.physics.research/msg/dc191983dfa1477b,
Peter Shor said:
"If there are nonlinearities in quantum mechanics which are detectable
by watching quantum computers fail, physicists will be VERY interested
(I would expect a Nobel prize for conclusive evidence of this)."
Consider, however, this scenario. With few q-bits, QC is eventually
made to work. The progress stops, though, long before QC factoring
starts competing with pencils. The QC people then demand some noble
prize for the correction to Quantum Mechanics. But the committee
wants more specifics than simply a nonworking machine, so something like
observing the state of the QC is needed. Then they find the Universe too
small for observing individual states of the needed dimensions and
accuracy. (Raising sufficient funds to compete with pencil factoring may
justify a Nobel Prize in Economics.)
Let us make some calculations. In cryptography, the length
\(n\) of the integers to factor may be a thousand bits (and could
easily be millions). By \(*n\), I will mean a reasonable power of
\(n\). A \(2^{*n}\)-dimensional space \(H\)
has \(2^{\exp(*n)}\) nearly orthogonal vectors.
Take a generic \(v\) in \(H\). The minimal size of a
machine which can recognize or generate \(v\) (approximately) is
\(K=\exp(*n)\) --- far larger than our Universe.
This comes from a cardinality argument: there are only \(2^{*K}\)
machines of \(K\) atoms. Let us call such \(v\) megastates.
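The same counting in numbers (taking, purely for illustration, \(*n = n^2\) and reading \(\exp(x)\) as \(2^x\)):

    from math import log2

    n  = 1_000                # bits in the integers to factor
    sn = n ** 2               # "*n": some reasonable power of n (n^2 is an arbitrary choice)

    # H has 2^(*n) dimensions and about 2^(2^(*n)) nearly orthogonal vectors.
    # Machines of K atoms number only about 2^(*K), so pinning down a generic
    # v needs *K >= 2^(*n), i.e., K on the order of exp(*n) = 2^(*n) atoms.
    log2_K_needed = sn                       # log2 of K ~ 2^(*n)
    log2_atoms    = log2(10 ** 80)           # ~ atoms in the observable Universe

    print(f"log2(K) needed ~ {log2_K_needed:,}")   # 1,000,000
    print(f"log2(atoms)    ~ {log2_atoms:.0f}")    # 266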
There is a big difference between untested and untestable regimes.
Claims about individual megastates are untestable. I can imagine a
feasible way to separate any two QC states from each
other. Yet, as this calculation shows, no machine can separate a
generic QC state from the set of all states more distant from it than QC
tolerates. So, what thought experiments can probe the QC to be in the
state described with the accuracy needed? I would allow the use of the
resources of the entire Universe, but not more!
Archimedes made the great discovery that the digital representation of
numbers is exponentially more efficient than analog ones (sand pile
sizes). Many subsequent analog devices yielded unimpressive results. It
is not clear why QCs should be an exception.
2.6. Metric versus Topology
A gap in quantum formalism may be contributing to the confusion.
Approximation has two subtly different aspects: metric and topology.
Metric tells how close our ideal point is to a specific wrong one.
Topology tells how close it is to the combination of all unacceptable
(non-neighboring) points. This may differ from the distance to the
closest unacceptable point, especially for quantum systems.
In infinite dimensions, the distinction between 0 and positive
separation between a point and a set varies with topologies. In finite
dimensions, 0-vs.-positive distinction is too coarse: all topologies
agree. Since \(2^{500}\) is finite only in a very philosophical
sense, one needs a quantitative refinement, some sort of a
weak-topological (not metric) depth of a neighborhood
polynomially related to resources required for precision to a given
depth. Then, precision to reasonable depths would be physical, e.g.,
allow one to generate points inside the neighborhood, distinguish its
center from the outside, etc.
Metric defines \(\varepsilon\)-neighborhoods and is richer in that than
topology, where the specific value of \(\varepsilon\) is lost (only
\(\varepsilon > 0\) is assured). However, metric is restricted by the axiom
that the intersection of any set of \(\varepsilon\)-neighborhoods is always
another \(\varepsilon\)-neighborhood. Quantum proximity may require both
benefits: a defined depth \(\varepsilon\) and the freedom to express it by
formulas violating the "intersection axiom." Here is an example of
such a violation, without pretense of relevance to our needs. Suppose a
neighborhood of 0 is given by a set of linear inequalities
\(f_i(x)<1\); then its depth may be taken as
\(1/\sum_i \|f_i\|\). Restricting
\(x\) to the unit sphere would render this depth quadratically
close to the metric depth. A more relevant formula may need preferred
treatment of the tensor product basis.
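A numerical toy version of that example (the dimension, the number of functionals, and their random choice are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    d, m = 50, 200                           # dimension and number of functionals
    F = rng.normal(size=(m, d))              # rows f_i; neighborhood of 0: f_i(x) < 1

    norms = np.linalg.norm(F, axis=1)
    metric_depth = 1 / norms.max()           # distance from 0 to the nearest plane f_i(x) = 1
    depth        = 1 / norms.sum()           # the depth 1/sum_i ||f_i|| proposed above

    print(metric_depth, depth)               # the two differ by roughly a factor of m
    # Intersecting two such neighborhoods means pooling their inequalities, so the
    # sums in the denominators add and the depth shrinks: the intersection of two
    # depth-e neighborhoods is not itself of depth e, violating the "intersection axiom".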
2.7. The Cheaper Boon
QC of the sort that factors long numbers seems firmly rooted in science
fiction. It is a pity that popular accounts do not distinguish it from
much more believable ideas, like Quantum Cryptography, Quantum
Communications, or the sort of Quantum Computing that deals primarily
with locality restrictions, such as fast search of long arrays. It is
worth noting that the reasons why QC must fail are by no means clear;
they merit thorough investigation. The answer may bring much greater
benefits in the understanding of basic physical concepts than any
benefits factoring devices could promise. The present attitude is
analogous to, say, Maxwell selling the Daemon of his famous thought
experiment as a path to cheaper electricity from heat. Had he done so, many
of the insights of today's thermodynamics might have been lost or delayed.
The rest of the article ignores any extravagant models and stands
fully committed to the Polynomial Overhead Church--Turing Thesis: Any
computation that takes \(t\) steps on an \(s\)-bit device
can be simulated by a Turing Machine in \(s^{O(1)}t\)
steps within \(s^{O(1)}\) cells.