Note: LaTeX formulas below require JavaScript enabled for this page, for https://cdn.jsdelivr.net and https://www.cs.bu.edu/fac/lnd/mjx.js

Computational Complexity

A number of my papers are dedicated to the concept of computational intractability. [Levin 73b] (see also: [Levin 96]) gave a complete characterization of the possible complexity types of computable functions. It showed that each computable function $ F $ has a "critical" computable space bound $ S $ (similar results hold for time), i.e. one such that a space bound $ s $ suffices to compute $ F $ if and only if $ s=S/O(1) $ and $ s $ is constructible (i.e. $ s $ can be the space used by some algorithm, computing $ s $ or anything else). Conversely, $ S $ can be arbitrary for an appropriate $ F $. (In fact, any upward-closed set of constructible functions is the complexity type of some predicate, as long as it is in $ \Sigma^0_2 $, i.e. definable with 2 quantifiers, and is closed under $ \min(f,g)/2 $.) Related results were also (later but independently) published by Schnorr, Stumpf, and by Meyer (under the name Fundamental Theorem of Complexity). Only special cases, such as the Compression Theorem of Rabin or the Speed-Up Theorem of M. Blum, were known before. See the exposition in [Seiferas, Meyer 95]. For the task of inverting easily computable functions, the complexity picture is even simpler: [Levin PPI-73] shows that any problem in this important class (constructive NP) has an algorithm which cannot be sped up by more than a constant factor, even on a subset of inputs.
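
One compact way to state the critical-bound property (my formalization of the preceding sentences; bounds are written as functions of the input $ x $, and the precise quantification is that of [Levin 73b]):

$$ F \text{ is computable in space } s \iff s \text{ is constructible and } \exists c\,\forall x:\ c\cdot s(x)\ \ge\ S(x). $$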

[Levin, FOCS-88] studied invariant characteristics $ f(x) $ of strings $ x $. It set no restrictions at all (not even computability) on $ f $, except invariance $ f(x)= f(c(x)) \pm O(1) $ under simple encodings (i.e. changes of representation) $ c(x) $. In contrast, it restricted $ c $ severely, allowing only $ c $ computable and invertible in polynomial time and preserving length $ \|x\|\pm O(1) $. Two examples of such $ f $ are $ \|x\| $ (invariant only because $ c $ is artificially required to preserve it) and Kolmogorov complexity $ K(x) $. I first considered strings $ x $ generated by polynomial time computable probability distributions $ m $. On such $ x $ all invariants $ f $, despite their generality, turn out to be determined by $ \|x\| $ and $ K(x) $, i.e. $ f(x)=f'(K(x),\|x\|)\pm O(\log(r(x))) $, where the expectation $ \sum_x m(x)r(x) $ is $ \le1 $. It remains a challenge to find an elementary proof of this general statement. (The known proof uses constructive expanders.) Strings with samplable distributions turn out to be much richer, with many more invariants possible.
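
For readability, the assumption and the conclusion can be put side by side (this display only restates the formulas already given; $ m $ is the polynomial time computable distribution generating $ x $):

$$ f(x)=f(c(x))\pm O(1) \text{ for all such } c \ \Longrightarrow\ f(x)=f'(K(x),\|x\|)\pm O(\log r(x)), \text{ where } \sum_x m(x)\,r(x)\le1. $$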

Several of my papers study models of computation. In [Gacs, Levin 81] we consider a general hardware-independent model: "causal nets". It allows one to introduce general (but tight) measures of complexity and to treat other measures as geometric issues of embedding the net into the space-time of a particular hardware. Causal nets also allow an exact characterization of the computability of recursive functions whose inputs may come in symmetric configurations: some are not computable, since initial symmetries cannot be broken deterministically.
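
The symmetry obstacle can be seen in a toy example. Here is a minimal Python sketch (my illustration, not the causal-net formalism of [Gacs, Levin 81]): on an anonymous ring driven by one uniform deterministic rule, a rotation-invariant initial configuration stays rotation-invariant forever, so no cell can ever be singled out as a unique "leader".

    def step(config):
        """One synchronous update; each cell applies the same deterministic
        local rule to its (left, self, right) neighborhood."""
        n = len(config)
        def rule(left, center, right):  # any fixed local rule works here
            return (left + 2 * center + 3 * right) % 5
        return tuple(rule(config[i - 1], config[i], config[(i + 1) % n])
                     for i in range(n))

    def rotate(config, k):
        return config[k:] + config[:k]

    config = (1, 0, 2, 1, 0, 2)  # invariant under rotation by 3
    for t in range(10):
        assert rotate(config, 3) == config      # the symmetry is preserved...
        assert len(set(config)) < len(config)   # ...so cells' states can never all differ
        config = step(config)
    print("rotational symmetry survived all deterministic steps")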

[Itkis, Levin, FOCS-89] considers VLSI models. Traditional ones assume wires of constant width, which take up most of the chip's area, often squaring it. These models also assume a switching time that depends only on the chip's diameter $ D $, even if all wires are much shorter than $ D $. Later models allow the switching time $ f(d) $ to scale with the wire's length $ d $ (e.g., linearly, or even quadratically), rather than with $ D $. We show that for superlinear (even slightly) $ f $, the chip's computational power loses all sensitivity to wire width: it is as strong as it would be if wires were infinitesimally thin, with all the area used for nodes. We simulate such wires by arranging for nodes to cooperate in transporting data to each other in parallel, so that no delays occur even in the worst case.
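
A back-of-envelope calculation hints at why relaying through nodes wins (my illustration; the paper's simulation handles the general worst case): take $ f(d)=d^{1+\epsilon} $ with any $ \epsilon>0 $ and split a wire of length $ d $ into $ k $ hops relayed by intermediate nodes. Then

$$ k\cdot f(d/k) \;=\; k\,(d/k)^{1+\epsilon} \;=\; d^{1+\epsilon}/k^{\epsilon} \;\ll\; f(d) \;=\; d^{1+\epsilon}, $$

so a chain of short hops is asymptotically faster than one long wire, and the nodes themselves can take over the wires' transport function.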

Back to the root file of my research overview.