Next: 5.5 Cryptography. Up: 5 Randomness in Computing. Previous: 5.3 Randomness and Complexity.

## 5.4 Pseudo-randomness.

The above definition of randomness is very robust, if not practical. True random generators are rarely used in computing. The problem is not that making a true random generator is impossible: we just saw efficient ways to perfect the distributions of biased random sources. The reason lies in many extra benefits provided by pseudorandom generators. E.g., when experimenting with, debugging, or using a program one often needs to repeat the exact same sequence. With a truly random generator, one actually has to record all its outcomes: long and costly. The alternative is to generate pseudo-random strings from a short seed. Such methods were justified in [Blum Micali], [Yao]:

First, take any one-way permutation $F_n(x)$ (see sec. 5.5) with a hard-core bit (see below) $B_p(x)$ which is easy to compute from $x, p$, but infeasible to guess from $p, n, F_n(x)$ with any noticeable correlation.
Then take a random seed $x_0, p, n$ and Repeat: ($S_i \leftarrow B_p(x_i)$; $x_{i+1} \leftarrow F_n(x_i)$; $i \leftarrow i+1$).
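
This iteration can be sketched as runnable code. The $F$ and $B$ below are toy stand-ins (a modular multiplication and the inner-product bit), emphatically not one-way or hard-core, and the parameters are invented for illustration only:

```python
K = 2**13 - 1  # a prime modulus (toy parameter)

def F(x):
    # Toy permutation of {1,...,K-1}: multiplication by a constant mod K.
    # A real instance would be a one-way permutation (see sec. 5.5).
    return (x * 3) % K

def B(x, p):
    # Inner-product bit (x . p) mod 2 of the binary representations;
    # see the "Hard Core" subsection below.
    return bin(x & p).count("1") % 2

def generate(seed_x, seed_p, k):
    """Output k pseudo-random bits from a short seed (x_0, p):
    S_i = B_p(x_i), x_{i+1} = F(x_i)."""
    x, out = seed_x, []
    for _ in range(k):
        out.append(B(x, seed_p))
        x = F(x)
    return out

bits = generate(seed_x=1234, seed_p=0b1011011, k=20)
```

Note that re-running with the same seed repeats the exact same sequence, which is the practical benefit the text mentions.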

We will see how distinguishing outputs $S$ of this generator from strings of coin flips would imply fast inversion of $F_n$ (believed impossible).

But if P=NP (a famous open problem), then no one-way $F$, and thus no pseudorandom generators, could exist.

By Kolmogorov's standards, pseudo-random strings are not random: let $G$ be the generator and $s$ the seed, $G(s) = S$, with $|S| \gg |s|$. Then $K(S) \le O(1) + |s| \ll |S|$, so this violates Kolmogorov's definition. We can distinguish between truly and pseudo-random strings by simply trying all short seeds. However, this takes time exponential in the seed length. Realistically, a pseudo-random string will be as good as a truly random one if they can't be distinguished in feasible time. Such generators we call perfect.
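
The exponential seed-trial distinguisher can be sketched as follows; the toy generator `G` here (cyclic repetition of the seed) is a hypothetical stand-in:

```python
from itertools import product

def all_outputs(G, seed_len, out_len):
    # Run G on every one of the 2**seed_len seeds: exponential in seed length.
    return {tuple(G(list(seed), out_len))
            for seed in product([0, 1], repeat=seed_len)}

def is_pseudo(S, G, seed_len):
    """Accept S iff some short seed generates it, i.e. S is compressible."""
    return tuple(S) in all_outputs(G, seed_len, len(S))

# Hypothetical toy generator: repeat the seed bits cyclically.
G = lambda seed, n: [seed[i % len(seed)] for i in range(n)]
```

The cost of `all_outputs` is what makes this attack infeasible, and motivates the feasible-time notion of a perfect generator.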

Theorem: [Yao] Let $G: s \mapsto S \in \{0,1\}^k$ run in time $t_G$. Let a probabilistic algorithm $A$ in expected (over internal coin flips) time $t_A$ accept $G_s$ and truly random strings with probabilities differing by $d$. Then, for random $i$, one can use $A$ to guess $S_i$ from $S_{i+1}, \dots, S_k$ in time $t_A + t_G$ with correlation $d/O(k)$.

Proof: Let $p_i$ be the probability that $A$ accepts $S = G_s$ modified by replacing its first $i$ digits with truly random bits. Then $p_0$ is the probability of accepting $G_s$ and must differ by $d$ from $p_k$, the probability of accepting a truly random string. Then $p_{i-1} - p_i$ averages to $d/k$, for randomly chosen $i$. Let $P_0(x)$ and $P_1(x)$ be the probabilities of acceptance of $r0x$ and $r1x$ for random $r$ of length $i{-}1$. Then $(P_0(x)+P_1(x))/2$ averages to $p_i$ for $x = S_{i+1} \dots S_k$, while $P_{S_i}(x)$ averages to $p_{i-1}$ and $P_{1-S_i}(x)$ to $2p_i - p_{i-1}$. So, guessing $S_i$ as $t$ if $A$ accepts $rtx$ (and as $1{-}t$ otherwise), for random $r, t$, has the stated correlation with $S_i$. Q.E.D.
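
The guessing rule extracted from this proof can be sketched as code; `A`, a Boolean function on bit lists, is a hypothetical interface for the distinguisher:

```python
import random

def predict_next_bit(A, i, x):
    """One-sample version of the guess from the proof above: to predict
    S_i from the suffix x = S_{i+1}...S_k, pick a random prefix r of
    length i-1 and a random bit t; if the distinguisher A accepts the
    string r t x, guess t, otherwise guess 1 - t."""
    r = [random.randint(0, 1) for _ in range(i - 1)]
    t = random.randint(0, 1)
    return t if A(r + [t] + x) else 1 - t
```

Averaged over random $r, t$ and random $i$, this guess agrees with $S_i$ with probability $1/2 + (p_{i-1}-p_i)$, giving the correlation $d/O(k)$ of the theorem.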

If the above generator were not perfect, one could guess $S_i$ from $S_{i+1}, \dots, S_k$ with a correlation polynomial in $1/k$. But $S_{i+1}, \dots, S_k$ can be produced from $p, n, x_{i+1}$. So, one could guess $B_p(x_i)$ from $p, n, F_n(x_i) = x_{i+1}$ with noticeable correlation, which cannot be done for hard-core $B$.

#### Hard Core.

The key to constructing a pseudorandom generator is finding a hard core for a one-way $F$. The following $B$ is hard-core for any one-way $F$, e.g., for Rabin's OWF in sec. 5.5. [Knuth 1997, v.2, 3d ed., Chap.3.5.F Pseudorandom numbers, pp.36, 170-179] has more details and references.

Let $B_p(x) = (x \cdot p) = \sum_i x_i p_i \bmod 2$. [Goldreich Levin] converts any method $g$ of guessing $B_p(x)$ from $p, n, F(x)$ with correlation $\varepsilon$ into an algorithm of finding $x$, i.e. inverting $F$ (slower than $g$ by a factor polynomial in $k/\varepsilon$).
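
As a minimal concrete illustration of this $B_p(x)$ on bit-vectors:

```python
def B(x_bits, p_bits):
    # (x . p) = sum_i x_i p_i mod 2, the [Goldreich Levin] hard-core bit
    return sum(xi & pi for xi, pi in zip(x_bits, p_bits)) % 2
```

$B_p(x)$ is trivial to compute given $x$; the point of the theorem is that guessing it from $p$ and $F(x)$ alone, with any noticeable correlation, is as hard as inverting $F$.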

Proof. Take $k = |x| = |y|$, $2^j > 2k/\varepsilon^2$, and $e_i = 0^{i-1}10^{k-i}$. Let $y = F_n(x)$ and $x_p = (-1)^{(x \cdot p)}$.

Assume, for this $y = F_n(x)$, $g_p$ guesses $x_p$ with correlation $\mathrm{avg}_p (x_p g_p) = \varepsilon$, where $g_p$ abbreviates $(-1)^{g(y,p,w)}$, since $w, y$ are fixed throughout the proof.

Averaging $x_p g_{p \oplus e_i}$ over $2^j{-}1$ random pairwise independent $p$ deviates from its mean $(-1)^{x_i}\varepsilon$ by $\varepsilon$ or more (and so may have the wrong sign) with probability $< 1/(\varepsilon^2(2^j{-}1)) < 1/(2k)$, by Chebyshev's inequality: note that $x_p g_{p \oplus e_i} = (-1)^{x_i} x_{p \oplus e_i} g_{p \oplus e_i}$. The same holds for each $i \le k$.

Take a random $k \times j$ binary matrix $P$. Then the vectors $Pr$, $r \in \{0,1\}^j \setminus \{0^j\}$, are pairwise independent. So, for a fraction $> 1/2$ of $P$, $\mathrm{sign} \sum_r x_{Pr}\, g_{Pr \oplus e_i} = (-1)^{x_i}$ simultaneously for all $i$. We could thus find $x_i$ for all $i$ with probability $> 1/2$ if we knew $z = xP$, since $x_{Pr} = (-1)^{(z \cdot r)}$. But $z$ is short: we can try all $2^j$ possible values!

So the inverter, for a random $P$ and all $i, r$, computes $G_i(r) = g_{Pr \oplus e_i}$. It uses Fast Fourier on $G_i$ to compute $h_i(z) = \sum_r (-1)^{(z \cdot r)} G_i(r)$ for all $z$ at once. The sign of $h_i(z)$ determines the $i$-th bit of the $z$-th member of the output list of candidates for $x$. Q.E.D.
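
The whole inverter can be sketched at toy scale. Here `g` is assumed given as a Python function on a bitmask-packed $p$; the transform is written as a plain double loop rather than a Fast Fourier butterfly, and the test below feeds it a perfect guesser (correlation 1), so the Chebyshev/pairwise-independence part is not exercised:

```python
import itertools
import random

def gl_invert(g, k, j):
    """List-decode x from a guesser g(p) of (x . p) mod 2, following the
    proof above: pick a random k-by-j 0/1 matrix P; for each i and each
    candidate z (standing for xP), the sign of
        h_i(z) = sum_r (-1)^(z . r) * (-1)^g(Pr xor e_i)
    guesses bit x_i. Returns all 2**j candidate x values, as ints."""
    rng = random.Random(0)  # fixed for reproducibility of the sketch
    P = [[rng.randint(0, 1) for _ in range(j)] for _ in range(k)]

    def PxR(r):
        # p = P r over GF(2), packed into an int bitmask (bit i = row i)
        p = 0
        for i in range(k):
            if sum(P[i][a] & r[a] for a in range(j)) % 2:
                p |= 1 << i
        return p

    rs = list(itertools.product([0, 1], repeat=j))
    # Query g once per (i, r): G[i][ri] = g(Pr xor e_i)
    G = [[g(PxR(r) ^ (1 << i)) for r in rs] for i in range(k)]

    candidates = []
    for z in itertools.product([0, 1], repeat=j):
        x = 0
        for i in range(k):
            h = sum(((-1) ** (sum(za & ra for za, ra in zip(z, r)) % 2))
                    * ((-1) ** G[i][ri])
                    for ri, r in enumerate(rs))
            if h < 0:  # sign of h_i(z) gives (-1)^(x_i)
                x |= 1 << i
        candidates.append(x)
    return candidates
```

With a perfect guesser, the candidate computed at $z = xP$ is exactly $x$, so $x$ always appears in the output list; with a merely $\varepsilon$-correlated $g$, the proof shows this still holds for most $P$.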


Leonid Levin
Wed Aug 21 20:35:42 EDT 1996