\newpage\subsection{Fast and Lean Computations.\label{paral}}

Based on [Chandra, Stockmeyer 1976], we now reduce computations with {\em large} time and {\em small} space to parallel ones (PPM implemented by sorting networks) with {\em large} space and {\em small} time, and vice versa.

\paragraph{Parallelization.}
Suppose the Professor has in his office a program (for a Small Machine with linear space and exponential time) that solves the problems of the next exam. You break into his office shortly before the test and get the tape. Your time is very limited, insufficient to run the program. But wait: also in the office you find the password that gains you access to a {\em really} Big Machine, one with essentially unlimited space resources (an exponential number of parallel processors, memory, etc.). How can you, the devious student, exploit this vast resource in small time to solve the exam?

Generate simultaneously all possible configurations of the Small Machine with linear memory space, each as a separate process. There will be $M=n^{O(n)}$ of these configurations/processes, since there are $n^{O(n)}$ possible graphs with $n$ nodes and $O(n)$ edges. Each configuration/process $x$ computes a pointer $x\to x'$, where $x'$ is the successor configuration of $x$ on the Professor's Small Machine. Now each process gets a copy of its successor's pointer: $x\to x'\to x''$ becomes $x\to x''$. Next, the single-step pointers are erased and the procedure repeats with the resulting 2-step pointers, producing 4-step pointers, then 8-step pointers, etc. If the Professor's Small Machine halts, it cannot repeat a configuration and thus must stabilize within $M$ steps. So after $\log_2 M=O(n\log n)$ rounds of pointer doubling, the pointer from the initial configuration leads directly to the halting configuration, which carries the answer.
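The doubling trick above can be sketched in code. The following is a minimal illustrative simulation (not the PPM/sorting-network implementation): the successor map over a toy chain of configurations stands in for the Small Machine's transition function, the halting configuration points to itself, and the ``parallel'' rounds are simulated sequentially. All names here (\verb|succ|, \verb|pointer_doubling|) are hypothetical.

\begin{verbatim}
# Pointer doubling ("pointer jumping"): in each round every pointer
# x -> x' is replaced by x -> x'', its target's pointer, so pointer
# lengths double.  A halting run over M configurations stabilizes
# after at most log2(M) rounds.

def pointer_doubling(succ):
    """succ[x] = successor configuration of x; the halting
    configuration points to itself.  Returns the final pointer map
    and the number of doubling rounds used."""
    ptr = dict(succ)
    rounds = 0
    # Repeat until every pointer is stable (x -> x' -> x' for all x).
    while any(ptr[x] != ptr[ptr[x]] for x in ptr):
        ptr = {x: ptr[ptr[x]] for x in ptr}  # x -> x' -> x'' becomes x -> x''
        rounds += 1
    return ptr, rounds

# Toy chain of 8 "configurations": 0 -> 1 -> ... -> 7; 7 halts (self-loop).
succ = {i: min(i + 1, 7) for i in range(8)}
final, rounds = pointer_doubling(succ)
assert final[0] == 7   # the start configuration now points to the halt
assert rounds == 3     # log2(8) = 3 doubling rounds suffice
\end{verbatim}

Each round is a constant-time parallel step (every process updates its own pointer independently), which is how exponential sequential time collapses to $O(n\log n)$ parallel time.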