BU CAS CS 680 and CS 591: Readings in Computer Graphics

Class commentary on articles: Texture Modeling



Jeremy Biddle

An Image Synthesizer
Ken Perlin

This paper introduces the concepts of a Pixel Stream Editor and a solid
texture scheme.  The Pixel Stream Editor is essentially a high-level
language used to apply a procedural, three-dimensional texture to an
input image.  The language expresses a program that is applied to each
pixel of the input image.  The input image contains information about
each pixel, such as its distance and its normal vector, as well as any
other information that may be useful.  By associating different surfaces
with integer indices, multiple objects can be rendered together in a
scene, each with a different three-dimensional surface texture.
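The per-pixel model described above can be sketched as a single program mapped independently over a stream of surface pixels.  The field names and the toy shading rule below are hypothetical illustrations, not Perlin's actual language:

```python
# Minimal sketch of the Pixel Stream Editor model: one program is
# applied independently to every input "pixel", which carries surface
# information rather than a color.  Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class SurfacePixel:
    point: tuple      # 3-D surface point visible at this pixel
    normal: tuple     # surface normal at that point
    surface_id: int   # integer index identifying which object this is

def shade(px: SurfacePixel) -> tuple:
    """Toy per-pixel program: pick a color by surface index and
    modulate it by a simple Lambertian term against a fixed light."""
    palette = {0: (1.0, 0.2, 0.2), 1: (0.2, 0.2, 1.0)}
    base = palette.get(px.surface_id, (0.5, 0.5, 0.5))
    light = (0.0, 0.0, 1.0)
    lam = max(0.0, sum(n * l for n, l in zip(px.normal, light)))
    return tuple(c * lam for c in base)

# The "stream" part: map the program over every pixel independently.
image = [SurfacePixel((0, 0, 1), (0.0, 0.0, 1.0), 0),
         SurfacePixel((1, 0, 1), (1.0, 0.0, 0.0), 1)]
rgb = [shade(px) for px in image]
```

Because each pixel is processed independently, the same program parallelizes trivially, which is the SIMD flavor noted elsewhere in these commentaries.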

The approach is interesting in that instead of "wrapping" textures on to a
three dimensional object, the object is "sculpted" out of the texture that
it uses.  Since the textures are essentially defined as programs, they are
easily modified.  And judging from the variety of effects shown in the
article, the programs admit a very wide range of possibilities.

From a purist point of view, the rendering suffers from one of the main
failings of texture mapping and bump mapping -- the edges of the object do
not follow the shapes in the texture, but rather the shape of the
object.  What also seemed unclear was how the texture would be applied to
the object, particularly in cases of animation.  Since the object is
"sculpted" out of the texture, what happens when the object moves?  The
texture would most likely appear to swim around on the surface of the
object.  The only instances of animation discussed in the paper were those
involving animation of the texture itself, rather than applying the
program to multiple frames of an object's movement.

Although the rendering of a completed image is quite slow (> 10 minutes),
the low-resolution interactive design mode provides much faster feedback.
Still, when designing a new texture from scratch, it is conceivable that
faster rendering would be required, although compilation (rather than
interpretation) and other optimizations would most likely speed it up.


Roberto Downs


Bob Gaimari


Daniel Gentle


John Isidoro


Dave Martin


			    An Image Synthesizer

				Ken Perlin


This paper describes a Pixel Stream Editing (PSE) language and provides
multiple examples of programs and outputs.  PSE can be thought of as a
specification for a SIMD process that independently transforms input
elements, which may be pixels, surface normals, or whatever is convenient.
Most of the examples in the paper assume that the input consists of surface
geometry (position and normal vector pairs) and the output is RGB pixels.

Since the processing is independent, one has to wonder how spatially
coherent effects are formed.  For instance, most of the donut images are
splotchy.  Bozo's donut looks very much like a reaction-diffusion example,
and its program "color = Colorful[Noise(k*point)]" doesn't immediately
explain the coherence.  I believe the answer has to do with the
implementation of the Noise() function; the sample implementation chooses
random vectors at integer lattice points and interpolates between these
points with a cubic polynomial.  Apparently, the input image has been
provided at a scale where the mean splotch size corresponds nicely to the
integer lattice.  In other words, it's a side-effect of the input scale!
It's a cheap, good one, though, that is apparently used again in the water
crystal, art glass, and probably other images.
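This reading of Noise() can be illustrated with 1-D value noise -- a simplified stand-in for Perlin's gradient-vector lattice, an assumption made here for brevity.  Features come out about one lattice cell wide, so the input scale directly sets the splotch size:

```python
# Sketch of lattice noise in 1-D: random values at integer lattice
# points, smoothly interpolated in between.  (Perlin's Noise() chooses
# random *gradient vectors* at 3-D lattice points; value noise is a
# simpler stand-in that exhibits the same scale effect.)
import random

random.seed(0)
lattice = [random.random() for _ in range(256)]

def smooth(t):
    return t * t * (3 - 2 * t)   # cubic ease curve, smooth at the lattice

def noise(x):
    i = int(x) % 255
    t = x - int(x)
    return lattice[i] + smooth(t) * (lattice[i + 1] - lattice[i])

# Features ("splotches") are about one lattice cell wide, so scaling
# the input by k shrinks them by a factor of k.
coarse = [noise(x / 10) for x in range(100)]   # a few broad features
fine   = [noise(float(x)) for x in range(100)] # one feature per sample
```

Sampling `noise(k * x)` with larger `k` packs more lattice cells into the same image width, which is exactly the input-scale side-effect suggested above.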

These techniques are most useful as a replacement for texture mapping and
other local effects.  And the results are stunning: look at the soap bubble
reflecting a window and the corona around the eclipsed sun.  The iteration
time for trying out effects is small, mostly because of the filtering
nature of the scheme; the geometry is rendered in 2-D (even if some of the
3D information is still present) by the time it gets to the PSE program.
The author has taken a good insight---that surface textures do not need
(much) underlying geometry awareness---and has turned it into a modular,
fast, and easy-to-understand scheme that produces terrific output.



     Generating Textures on Arbitrary Surfaces Using Reaction-Diffusion

				 Greg Turk


In the first part of the paper, the author presents a brief history of
reaction-diffusion techniques and a mini-tutorial describing how they work.
By applying subsequent reaction-diffusion to the output of an initial
reaction-diffusion system, the author shows how to generate interesting
patterns such as leopard spots and other features of multiple sizes.  The
idea is agreeable, but the actual production of patterns depends on setting
numerical parameters in a nonintuitive way.

The author is primarily concerned with generating reaction-diffusion
patterns on 3-D surfaces, so the second part of the paper describes how to
map the classical 2-D reaction-diffusion system onto polyhedral surfaces.
Points are randomly distributed onto the surface, and a repulsion-force
relaxation process disperses them evenly.  This process essentially takes
place in 2-D; the resulting errors due to the 3-D model don't appear to
matter very much.  The author also uses a fast approximation of the
construction of Voronoi regions in order to tile the surface into cells.
Given this tiling, the reaction-diffusion system proceeds pretty much as in
the 2-D regular grid case.  One might suspect that the approximations would
cause bad effects at corners, but no such artifacts are visible in the
supplied figures.

A smooth function of the surface chemical concentration is used to select
colors in the 3-D rendered image.  The author describes a blurring
technique in which the color map defined on the surface (as above) is
allowed to diffuse over the surface; this can be used to antialias the
image. 

The results are interesting, but the system runs slowly and requires users
to control strange numerical parameters.  The reaction-diffusion engine is
an approximation to what could be a chemical reality; it produces plausible
output, but strikes me as a very brute-force approach to generating stripes
and dots.  Cellular automata seem to be able to do this rather quickly with
nearest-neighbor rules.  This paper may turn out to be more of a validation
of chemistry insight than a much-needed graphics technique.


John Petry


"AN IMAGE SYNTHESIZER," by Ken Perlin

Of the topics in this paper, I most liked the space-function approach.  It's 
a wonderful way to avoid mapping distortions and aliasing problems if you 
have a texture that can be defined procedurally.

Perlin's language seems useful.  It's such a straightforward extension of C/C++ 
that much of it could simply be added to C++ without creating an entirely 
new language (granted, the paper was written back in 1985).  

He also says some of the best effects were achieved "by stepping over (real
or imagined) semantic distinctions."  This does not seem like a feature!  
He doesn't go into detail, though, so I'm not sure what he's referring to.

His texture mapping is done using variations on a 3-D noise function
which produces stochastic surface effects.  The noise function can be used by
itself, or it can be combined with other approaches, e.g. self-similar 
noise at different scales, or turbulence effects.
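A minimal sketch of the self-similar combination mentioned here, summing a simple 1-D value noise (a stand-in for Perlin's 3-D Noise(), an assumption for brevity) at doubling frequencies and halving amplitudes:

```python
# Sketch of self-similar noise ("turbulence"): sum band-limited noise
# over octaves, with frequency doubling and amplitude halving each
# time.  The 1-D value noise here stands in for Perlin's 3-D Noise().
import random

random.seed(1)
lattice = [random.random() for _ in range(257)]

def noise(x):
    i, t = int(x) % 255, x - int(x)
    t = t * t * (3 - 2 * t)                      # smooth interpolation
    return lattice[i] * (1 - t) + lattice[i + 1] * t

def turbulence(x, octaves=4):
    """Sum |noise| over octaves: frequency doubles, amplitude halves,
    adding finer and finer self-similar detail."""
    total, freq = 0.0, 1.0
    for _ in range(octaves):
        total += abs(noise(x * freq) - 0.5) / freq
        freq *= 2.0
    return total
```

The absolute value puts creases into the signal at every scale, which is what gives turbulence its billowy look compared to a plain octave sum.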

These effects seem interesting for most simple cases.  It may be due to
poor quality of the black-and-white photo reproductions, but the more
complex cases didn't seem convincing, except perhaps for the sunset on
water.  I have a hard time believing that this technique works better
than particle animation in creating synthetic 3-D fire or cloud formations.
On the other hand, I'll certainly grant his point that this approach
runs much faster; no CM-2 required.


"GENERATING TEXTURES ON ARBITRARY SURFACES USING REACTION-DIFFUSION,"
by Greg Turk

Turk uses reaction-diffusion equations to model the formation of stripes,
spots and other natural patterns as textures on surfaces.  The reaction
diffusion approach, combined with some randomization to add realism,
seems to work well.

Though not central to his paper, he points out that it is not known whether
such a mechanism is actually used in nature to produce these effects.
It's not clear to me that it is necessary even for his approach; his results
don't seem that different from what could be achieved using a sine function
and a large noise factor.  (Not that this alternative would automatically
be better -- it's just an idea to throw out).

Turk uses a mesh as a starting point for his reaction-diffusion approach.
Again, I'm not certain why he needs all the work he does.  Consider the
case of simple spots.  To generate the mesh, he randomly positions a set
of points, then moves them until they are evenly distributed across the
surface.  From there, he invokes his reaction-diffusion equations applied
at each mesh point.  But why go to all this work?  Why not just run the
mesh-node relaxation algorithm half as long as he does, so that the 
distribution of points is rougher, and use the points as
the centers of spots of stochastically-varying size?  Again, this is just
an idea I'm tossing about.  I can see how the reaction-diffusion approach
does a good job on stripes, and I'm not sure how I'd simplify that.

Robert Pitts

An Image Synthesizer
by Ken Perlin
====================

This article presents a method for generating synthetic images using a
pixel stream editor.  In this particular scheme, the same transformation is
applied to each pixel that forms an image, producing a transformed image.
Several types of transformations are presented that produce desired
effects.  The pixel editing implementation includes a high-level language
that allows the specification of transformations to be done quickly,
supporting rapid prototyping of image effects.

Because the model used causes the same transformation to be applied to
every pixel, its implementation lends itself to a parallel data model of
computing.  The author spends some time glorifying the interactive language
for editing pixels; essentially, he has borrowed the syntax from C, the
concept of lists from Lisp, and the parallel data model from languages like
C* (C-star).  In this sense, the language is not novel.  However, because
the underlying purpose of the language is to edit images, there are
opportunities to optimize the implementation of the language for image
processing.

The editor allows a "pixel" to be represented by an arbitrary set of
values.  For example, the programmer could choose [red green blue] tuples
or [hue saturation value] tuples.  It is the responsibility of the
programmer to know how pixels are represented and use them correctly.  This
decision by the author seems to be a vote for flexibility over the
"automated checking" that would be available with a more constrained
system.

The author presents a particular paradigm for defining the texture of
objects that he uses to create several effects.  The idea is to use space
functions which define values for colors or intensities or whatever at each
3-D point (x,y,z).  This allows the creation of solid textures; thus, to
texture a particular object, one evaluates the space function at visible
points along the surface of the object being textured.  Advantages of this
approach are that one no longer needs to map (or "wrap") a 2-D texture onto
an object.  In addition, 3-D textures support some of the same
optimizations as 2-D textures, such as defining a texture procedurally, to
reduce the space needed to store the texture.
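A minimal sketch of the solid-texture idea: the texture is a function defined over all of 3-D space, evaluated only at the visible surface points.  The ring-shaped space function below is a hypothetical example, not one from the paper:

```python
# Sketch of solid texturing: a space function assigns a value to every
# 3-D point, and an object is "sculpted" from it by evaluating the
# function at visible surface points.  The wood-ring-style function
# here is an illustrative example only.
import math

def rings(x, y, z):
    """Concentric-ring space function: the value depends on distance
    from the z-axis, giving a wood-ring look on any surface."""
    r = math.hypot(x, y)
    return 0.5 + 0.5 * math.sin(8 * r)

# No (u,v) "wrapping" step is needed: surface points feed straight in.
surface_points = [(1.0, 0.0, 0.3), (0.0, 1.0, 0.7), (0.7, 0.7, 0.1)]
intensities = [rings(*p) for p in surface_points]
```

Note that the first two sample points sit at the same radius from the z-axis, so they receive identical values regardless of their differing z coordinates -- the texture is consistent in 3-D rather than stretched over a parameterization.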

Because the pixel editor uses a parallel data model, in order to produce
images that are not extremely regular, the editing programs must either be
very complex or contain some stochastic variations.  Moreover, any
function introducing randomness must adhere to a set of properties.  It
must be invariant over translation and rotation (we don't want the texture
of an object changing just because we move it from one place to another)
and it must vary within a small band of frequencies (easier to control
scaling?).  The function they use, Noise(), involves a lookup table defined
at integer (x,y,z) values.  The lookup table contains the pseudo-random
values corresponding to an integer (x,y,z) tuple found in the table.  For
(x,y,z) values not found in the table, the pseudo-random value is
interpolated between values using an assumption of smoothness.  In addition
to using the Noise() function for perturbations, they also use a function
Dnoise(), which returns a vector of the x,y,z differentials of the Noise()
function.  It is useful when wanting to perturb the three components of a
3-D vector; in contrast, Noise() only gives a single scalar for a
particular point.
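A rough sketch of Noise() and Dnoise() under simplifying assumptions: value noise on the integer lattice (Perlin's actual implementation interpolates random vectors, not values), with the gradient approximated here by central differences rather than computed analytically:

```python
# Sketch of 3-D Noise() and Dnoise(): pseudo-random values at integer
# lattice points via a hash, smoothly interpolated in between, with
# Dnoise() approximated by central differences.  This is value noise,
# a simplified stand-in for Perlin's vector-lattice version.
import math

def lattice(i, j, k):
    """Deterministic pseudo-random value in [0,1) at a lattice point
    (plays the role of the paper's lookup table)."""
    h = (i * 73856093) ^ (j * 19349663) ^ (k * 83492791)
    return (h % 65536) / 65536.0

def smooth(t):
    return t * t * (3 - 2 * t)

def noise(x, y, z):
    ix, iy, iz = math.floor(x), math.floor(y), math.floor(z)
    fx, fy, fz = smooth(x - ix), smooth(y - iy), smooth(z - iz)
    v = 0.0
    for dx in (0, 1):                 # blend the 8 surrounding corners
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx) * (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                v += w * lattice(ix + dx, iy + dy, iz + dz)
    return v

def dnoise(x, y, z, h=1e-4):
    """Gradient of noise() by central differences: one scalar per axis,
    so the result can perturb all three components of a vector."""
    return ((noise(x + h, y, z) - noise(x - h, y, z)) / (2 * h),
            (noise(x, y + h, z) - noise(x, y - h, z)) / (2 * h),
            (noise(x, y, z + h) - noise(x, y, z - h)) / (2 * h))
```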

Several texture implementations are explained using these Noise() and
Dnoise() functions.  The main idea is to create more complex "behavior" by
function composition.  The author's techniques include perturbations of
surface normals, by making perturbations at different spatial frequencies
(here, the amount of perturbation is scaled with spatial frequency).  These
techniques were used to produce wrinkles and wave-like patterns.  Note that
changing the surface normal does not adjust the geometry, but the
lighting of the surface will change.  Other methods use noise to determine
the "colors" of a model, where the perturbations are either a function of
space or time.  Color perturbations as a function of space are used to
produce marble textures and a star's corona.  By adding a temporal
component, one can achieve animation of the earlier-mentioned wave or the
corona.
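The marble-style color perturbation can be sketched as a sine whose phase is displaced by turbulence; the constants and the value-noise turbulence below are assumptions for illustration, not the paper's code:

```python
# Sketch of a marble-like color perturbation: a sine of one spatial
# coordinate gives regular veins, and adding turbulence to its phase
# makes the veins wander irregularly.  The 1-D value-noise turbulence
# is a simplified stand-in for Perlin's Noise()-based version.
import math, random

random.seed(2)
lattice = [random.random() for _ in range(257)]

def noise(x):
    i, t = int(x) % 255, x - int(x)
    t = t * t * (3 - 2 * t)
    return lattice[i] * (1 - t) + lattice[i + 1] * t

def turbulence(x, octaves=5):
    return sum(abs(noise(x * 2**o) - 0.5) / 2**o for o in range(octaves))

def marble(x):
    """Gray level in [0,1]: an unperturbed sin() would give even
    stripes; the turbulence term distorts them into veins."""
    return 0.5 + 0.5 * math.sin(4 * x + 10 * turbulence(x))
```

An animated version would simply add a time-dependent term to the phase, matching the paper's point that temporal effects fall out of the same composition scheme.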

The author argues for the strengths of this function composition paradigm.
He points out that it may not always produce the most time efficient
algorithm but it is very useful for rapid prototyping and once an
appropriate algorithm has been found to produce a desired effect it can be
optimized (in a compiled language or whatever).  Furthermore, the author
notes that some very complicated looking effects can be simulated by
composing some simple functions.  His argument for being able to rapidly
prototype is a good one, I think; it is likely that many effects can be
prototyped at low resolutions and rendered at a higher resolution when the
desired effect has been achieved.

Some future directions for this work include improvements in efficiency and
broadening the use of stochastic variations to control more types of motion
and the generation of shapes.

To summarize, this paper presents a useful approach to generating color,
textural, motion, etc. effects that have some sort of regularity, either
deterministic or stochastic regularity.  It is not, however, appropriate
for defining complex polygonal shapes, etc.; these are best done by more
traditional paradigms.


Generating Textures on Arbitrary Surfaces Using Reaction-Diffusion
by Greg Turk
============

The paper describes a procedural method of texture-mapping onto graphical
images that is modeled after a chemical process, reaction-diffusion.  The
author emphasizes the 3 stages of texture mapping: (1) acquiring a texture,
(2) mapping the texture to a surface, and (3) sampling the texture for
rendering.  He notes the dependence between these 3 tasks and shows the
benefit of performing them interdependently (instead of as disjoint
operations) for this particular class of textures.

Past work in texture creation, done both procedurally and by painting
techniques, is described.  These include using sine waves, stochastic
variations, and previous work using reaction-diffusion.  In addition, the
idea of specifying texture characteristics in the frequency domain and then
mapping back to the spatial domain is mentioned.  Although this description
of past work is minimally adequate, a more detailed description of the past
work in reaction-diffusion would have been useful in contrasting it with
this new work.

Next, methods for mapping an acquired texture onto a surface are
described.  They include: mapping into the "natural coordinates" of an
object (e.g., from x,y texture points to latitude/longitude on a sphere)
and "projecting" textures onto objects.  The limitations of these
techniques occur when there is no natural coordinate system to which to map
or when mapping causes the texture to be distorted.  One alternative
technique is to define three-dimensional textures, so that the texture can
be chosen by simply evaluating the texture at 3-D points along the surface
of the object (assuming non-transparent objects).  Of course, this works
for textures that occur in 3 dimensions and may exclude some other types of
textures.  Another method is to capture the statistical properties of a
texture and reproduce it procedurally (most likely by using a random number
generator) on the surface of an object.

The theoretical model of reaction-diffusion is presented.  The model uses
differential equations to describe how the local concentration of a
chemical changes over time.  The same model is used for each of a set of
chemicals: a chemical's concentration at any given point is controlled by
how it reacts with the other chemicals and how it diffuses (or
migrates) to neighboring areas.  Any final pattern achieved by a particular
set of reaction-diffusion equations is the result of solving the
differential equations of the system at its steady state.  By using
different initial starting conditions and parameters (e.g., diffusion
rate), one can achieve different textures with the same basic appearance
(e.g., stripes or spots).  Actual implementations of the model use discrete
versions of the differential equations applied to discretized locations.
This is likely done because it is more efficient.
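One discrete update step of a two-chemical system in this spirit might look as follows.  The kinetics follow the Turing-style equations this line of work builds on, but the grid size, constants, and initial conditions here are illustrative assumptions, not Turk's values:

```python
# Sketch of a discrete reaction-diffusion step on a regular 2-D grid:
# each cell's two chemicals react locally and diffuse to the four
# grid neighbors.  Constants and grid size are illustrative only.
import random

N = 16
random.seed(3)
a = [[4.0] * N for _ in range(N)]
b = [[4.0] * N for _ in range(N)]
# Random variation in the substrate beta is what breaks the symmetry
# of the uniform starting state.
beta = [[12.0 + random.uniform(-0.1, 0.1) for _ in range(N)]
        for _ in range(N)]

def lap(g, i, j):
    """Discrete Laplacian with wrap-around (toroidal) neighbors."""
    return (g[(i - 1) % N][j] + g[(i + 1) % N][j] +
            g[i][(j - 1) % N] + g[i][(j + 1) % N] - 4.0 * g[i][j])

def step(s=0.03, da=0.125, db=0.03125):
    """One explicit-Euler update: reaction term plus diffusion term."""
    global a, b
    na = [[a[i][j] + s * (16.0 - a[i][j] * b[i][j]) + da * lap(a, i, j)
           for j in range(N)] for i in range(N)]
    nb = [[max(0.0,                      # concentrations stay non-negative
               b[i][j] + s * (a[i][j] * b[i][j] - b[i][j] - beta[i][j])
               + db * lap(b, i, j))
           for j in range(N)] for i in range(N)]
    a, b = na, nb

for _ in range(50):
    step()
```

Running the update to (near) steady state and thresholding one chemical's concentration is what yields the spot and stripe patterns; different `s`, diffusion rates, and initial conditions select different pattern scales.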

The author extends this type of reaction-diffusion system.  He discusses
how to run two simulations to achieve a variety of spotted and striped
effects.  The basic algorithm is to run one simulation, freeze the values
of chemicals in the spotted or striped regions and then run another
simulation with different parameters.  Methods for removing some of the
randomness in patterns are described--basically the diffusion of chemical
at specific locations is controlled more explicitly to achieve a desired
effect.

Because this method uses a discrete version of reaction-diffusion, surfaces
on an object to be textured must be discretized.  The method used is the
following: (1) uniformly distribute a number of points on the surface, (2)
regularly space the points by introducing repelling forces between the
points and using relaxation to adjust the point positions until a stable
state is reached, and (3) determine (Voronoi) regions around these points
that "tile" the object to be textured.  Asymmetry introduced into this
process is shown as useful in constructing interesting effects.
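Step (2) can be sketched on a flat unit square standing in for the polyhedral surface (on a real mesh, each moved point would also be projected back onto the surface); the repulsion radius and step size here are illustrative assumptions:

```python
# Sketch of the point-dispersal step: randomly placed points repel
# nearby neighbors, and repeated relaxation spreads them out evenly.
# A flat unit square stands in for the polyhedral surface.
import random

random.seed(4)
pts = [(random.random(), random.random()) for _ in range(40)]
RADIUS = 0.3   # points closer than this repel each other

def min_dist(points):
    """Smallest pairwise distance, as a rough evenness measure."""
    return min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for i, (ax, ay) in enumerate(points)
               for (bx, by) in points[i + 1:])

def relax(points, step=0.01):
    moved = []
    for px, py in points:
        fx = fy = 0.0
        for qx, qy in points:
            dx, dy = px - qx, py - qy
            d = (dx * dx + dy * dy) ** 0.5
            if 0.0 < d < RADIUS:
                push = (RADIUS - d) / d       # stronger push when closer
                fx += dx * push
                fy += dy * push
        # Move along the net force, clamped to stay on the "surface".
        moved.append((min(1.0, max(0.0, px + step * fx)),
                      min(1.0, max(0.0, py + step * fy))))
    return moved

before = min_dist(pts)
for _ in range(50):
    pts = relax(pts)
after = min_dist(pts)     # the closest pair ends up farther apart
```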

Once an object has been discretized, the chemical simulation can be
performed.  Since the simulation assigns chemical values to each
discretized region, there must be a way to assign chemical concentrations to
pixels along the continuum between the centers of the regions.  The author
uses a technique that interpolates the chemical values using some set of
adjacent regions.  The smaller the set of adjacent regions, the more
distinct the separation between features (e.g., spots, stripes, etc.);
larger regions will give smoother transitions between features.  Note that
this interpolation can help reduce aliasing.  Furthermore, the amount of
work needed for the interpolation scales regularly with the number of
pixels to be rendered, so smaller images require less work (assuming all
other parameters are equal).

Once chemical concentrations are assigned to each pixel, mapping to
specific colors can be performed.  To handle aliasing effects that can
still occur when images are at a small scale, the author describes how
colors can be assigned to each region (as red, green, blue triples) and the
colors can be allowed to diffuse with the same diffusion procedure as the
chemicals.  Diffusion of colors for the two sets of features (opposing spot
colors, e.g.) is done separately and then the resulting colors for a given
pixel are averaged.  The connection between this color diffusion technique
and using gaussian filters is discussed.  Their color diffusion model is
nice because it fits in with the chemical model and likely represents a
performance improvement over using gaussians.

The author also describes how bump mapping can be implemented within this scheme.

The implementations of this method, despite some computational
simplifications (discretization of objects, etc.) are still expensive.
Nonetheless, the same patterns emerge whether a large or a small
texture is produced.  Thus, the procedure lends itself to rapid
prototyping at low resolution, before a high-resolution texture is
applied.

The author concludes with some suggestions for future work, including
expanding his use of multiple stages of reaction-diffusion simulations and
other types of reactions that when simulated might be useful in producing
different textures.

This paper described a common paradigm we've seen before--providing
efficient simulations of physical phenomena that naturally provide the
graphical feature that is desired.  I believe this article provides a
useful contribution in extending the "toolkit" of techniques for producing
textures.

Stan Sclaroff
Created: Mar 13, 1996
Last Modified: Apr 3, 1996