\newpage\section {Models of Computation; Polynomial Time \& Church's
Thesis.} \label {models}
\subsection {Deterministic Computation.}
Sections~\ref{models} and \ref{diagonal} study deterministic computations.
Non-deterministic aspects of computations (inputs, interaction, errors,
randomization, etc.)\ are crucial and challenging in advanced theory and
practice. Defining them as an extension of deterministic computations is
simple. Deterministic computations themselves, however, while conceptually
simpler, require elaborate models for their definition. These models may be
sophisticated if we
need a precise measurement of all required resources. However, if we
only need to define what is computable and get a very rough magnitude of
the needed resources, all reasonable models turn out equivalent, even to
the simplest ones. We will pay significant attention to this surprising
and important fact. The simplest models are most useful for proving
negative results and the strongest ones for positive results.
We start with terminology common to all models, gradually making it more
specific to the models we actually study. {\bf\em Computations} consist
of {\bf\em events} and can be represented as graphs, where edges between
events reflect various relations. Nodes and edges will have attributes
called labels, states, values, colors, parameters, etc. We require
different labels for any two edges with the same source. Edges of one
type, called {\bf\em causal,} run from each event $x$ to all events
essential for the occurrence or attributes of $x$. They form a directed
acyclic graph (though cycles are sometimes added artificially to mark
the input parts of the computation).
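As an illustration (a sketch in Python, with hypothetical event names), a
causal graph can be stored by mapping each event's edge labels to its causes;
distinct dictionary keys enforce the distinct-label requirement, and a
depth-first search verifies acyclicity:

```python
# Toy causal graph: each event maps its edge labels to the events
# essential for it (its causes).  Event names are hypothetical.
# Dictionary keys are unique, so no two edges from one source share a label.
causes = {
    "out": {"a": "s2", "b": "s1"},
    "s2":  {"a": "in0"},
    "s1":  {"a": "in0", "b": "in1"},
    "in0": {},
    "in1": {},
}

def is_acyclic(g):
    """Check that the causal edges form a directed acyclic graph."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {v: WHITE for v in g}
    def dfs(v):
        color[v] = GRAY
        for u in g[v].values():
            if color[u] == GRAY or (color[u] == WHITE and not dfs(u)):
                return False              # back edge: a causal cycle
        color[v] = BLACK
        return True
    return all(color[v] != WHITE or dfs(v) for v in g)
```

A two-event cycle, such as \verb|{"x": {"a": "y"}, "y": {"a": "x"}}|, is
rejected, matching the requirement that causal edges form a DAG.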
We will study only {\bf\em synchronous} computations. Their nodes have a
{\bf\em time} parameter. It reflects logical steps, not necessarily a
precise value of any physical clock. Causal edges only run between
events with close (typically, consecutive) values of time. One event
among the causes of a node is called its {\bf\em parent}. {\bf\em
Pointer} edges connect the parent of each event to all its other
possible causes. Pointers reflect the connection between simultaneous
events that allows them to interact and have a joint effect. The
subgraph of events at a particular value of time (with pointers and
attributes) is an instant memory {\bf\em configuration} of the model.
Each non-terminal configuration has {\bf\em active} nodes/edges around
which it may change. The models with only a single active area at any
step of the computation are {\bf\em sequential}. Others are called
{\bf\em parallel}.
\subsubsection* {Complexity.}
The following measures of computing resources of a machine $A$ on input $x$
will be used throughout the course:
{\bf\em Time}: The greatest depth $D_{A(x)}$ of causal chains is the number of
computation steps. The volume $V_{A(x)}$ is the combined number of active
edges during all steps. Time $T_{A(x)}$ is used (depending on the context)
as either depth or volume, which are close for sequential models.
{\bf\em Space}: $S_{A(x)}$ or $S_A(x)$ of a synchronous computation is
the greatest (over time) size of its configurations. Nodes/edges unchanged
since the input are sometimes excluded.
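For a toy timed causal graph these measures can be computed directly (a
Python sketch under the simplifying assumption that each event records its
time and its causal edges, all of which count as active at that step):

```python
from collections import Counter
from functools import lru_cache

# Hypothetical timed computation: event -> (time, list of its causes).
events = {
    "in0": (0, []), "in1": (0, []),
    "s1":  (1, ["in0", "in1"]),
    "s2":  (1, ["in0"]),
    "out": (2, ["s1", "s2"]),
}

def depth(g):
    """D: number of edges on the longest causal chain."""
    @lru_cache(maxsize=None)
    def d(v):
        cs = g[v][1]
        return 1 + max(map(d, cs)) if cs else 0
    return max(map(d, g))

def volume(g):
    """V: combined number of causal edges over all steps."""
    return sum(len(cs) for _, cs in g.values())

def space(g):
    """S: size of the largest instant configuration."""
    return max(Counter(t for t, _ in g.values()).values())
```

On this example $D=2$, $V=5$, $S=2$: the depth and volume differ, as they
do in general for parallel models.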
Note that time complexity is robust only up to a constant factor: a machine can
be modified into a new one with a larger alphabet of labels, representing
several locations in one. It would produce identical results in a constant
fraction of the time and space (provided that the time limits are sufficient
for the transformation of the input into and output from the new alphabet).
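The alphabet trick is easy to make concrete (a Python sketch; the padding at
the end stands in for the transcoding overhead mentioned in the
parenthetical):

```python
def pack(bits, k=2):
    """Recode a bit string into an alphabet of 2**k symbols,
    shrinking its length (and so the scanning time) k-fold."""
    bits = bits.ljust(-(-len(bits) // k) * k, "0")  # pad to a multiple of k
    return [int(bits[i:i + k], 2) for i in range(0, len(bits), k)]

def unpack(symbols, k=2):
    """Inverse transcoding back to the binary alphabet."""
    return "".join(format(s, "0{}b".format(k)) for s in symbols)
```

For instance, \verb|pack("1101")| yields the two base-4 symbols
\verb|[3, 1]|: half as many cells to scan, at the price of a richer alphabet.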
\paragraph {Growth Rate Notations:} $f(x)=O(g(x))$\footnote
{This is a customary but somewhat misleading notation.
A clearer notation would be $f(x)\in O(g(x))$.}
$\iff g(x)=\Omega(f(x)) \iff \sup_x {f(x)\over g(x)} <\infty$.
$o,\omega: f(x)=o(g(x)) \iff g(x)=\omega(f(x))
\iff \lim_{x \to \infty}{f(x) \over g(x)}=0$.
$\theta: f(x)=\theta(g(x)) \iff$ ($f(x)=O(g(x))$ and $g(x)=O(f(x))$).
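For instance (a routine check of these definitions): $3n^2{+}n=\theta(n^2)$,
as $\sup_n{3n^2+n\over n^2}=4<\infty$ and $\sup_n{n^2\over3n^2+n}\le{1\over3}$;
but $n\log n=o(n^2)$, as ${n\log n\over n^2}={\log n\over n}\to0$.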
Here are a few examples of frequently appearing growth rates: negligible $(\log
n)^{O(1)}$; moderate $n^{\theta(1)}$ (called polynomial or P, like in P-time);
infeasible $2^{n^{\Omega(1)}}$; another infeasible: $n!= (n/e)^n\sqrt{t\cdot2\pi
n},\,t\in[1,2]$.
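Reading the radical as $\sqrt{t\cdot2\pi n}$ (Stirling's formula), the factor
$t$ can be checked numerically (a sanity check, not a proof):

```python
import math

def stirling_t(n):
    """Solve n! = (n/e)**n * sqrt(t * 2*pi*n) for t."""
    return (math.factorial(n) / (n / math.e) ** n) ** 2 / (2 * math.pi * n)
```

The factor starts at $t=e^2/2\pi\approx1.18$ for $n=1$ and decreases toward
$1$, so it indeed stays within $[1,2]$ (floats limit this check to moderate
$n$).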
The reason for ruling out exponential (and neglecting logarithmic)
rates is that the visible Universe is too small to accommodate exponents.
Its radius is about 46.5 giga-light-years $\sim2^{204}$ Planck units.
A system of $\gg R^{1.5}$ particles packed in a radius of $R$ Planck units
collapses rapidly, be it Universe-sized or a neutron star.
So the number of particles is $<2^{306}\ll 4^{4^4}\ll(5!)!$.
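These magnitudes are easy to reproduce (a numeric sanity check; approximate
values for the light-year and Planck length are assumed, and the last bound
is taken as the iterated factorial $(5!)!$):

```python
import math

LY_M     = 9.4607e15   # metres per light-year (assumed value)
PLANCK_M = 1.616e-35   # Planck length in metres (assumed value)

R = 46.5e9 * LY_M / PLANCK_M       # Universe radius in Planck units
log2_R         = math.log2(R)      # ~ 204
log2_particles = 1.5 * log2_R      # R**1.5 bound, ~ 2**306
log2_4_tower   = 4 ** 4 * 2        # 4**(4**4) = 2**(2*256) = 2**512
log2_5_iter    = math.log2(math.factorial(math.factorial(5)))  # (5!)! = 120!
```

The three exponents come out near $306$, $512$, and $660$, confirming the
chain of $\ll$'s.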