BU GRS CS 680
Graduate Introduction to Computer Graphics


Readings for March 25, 1997

  • K. Perlin. An image synthesizer. In Computer Graphics Proceedings, ACM SIGGRAPH, pages 287--296, 1985.

  • G. Turk. Generating textures on arbitrary surfaces using reaction-diffusion. In Computer Graphics Proceedings, ACM SIGGRAPH, pages 289--298, 1991.

  • S. Worley. A Cellular Texture Basis Function. In Computer Graphics Proceedings, ACM SIGGRAPH, pages 291--294, 1996.

    Participants


    Commentary

    Alia Atlas

    Paragraphs about Papers for CS680 for 3/19/97

    by: Alia Atlas





    Particle Animation and Rendering Using Data Parallel Computation

    This paper presents a system and method for manipulating particles and rendering the results into images. The method is general enough that the particles can be manipulated by a physics-based system, placed individually by the animator, or controlled by a mixture of the two. The problem with purely physics-based systems is that it is challenging to produce a desired image, since objects are manipulated through forces and other indirect means whose results are not obvious. However, having the animator specify the motion of every particle is also not ideal, because such motion tends not to look realistic. Particularly for the particle phenomena the paper discusses, such as snow, water, and fire, a combination of physics and direct manipulation is desired.

    The animation system used in this paper makes several simplifying assumptions, so the behavior is not as complex as that of a full physics-based system; however, the results look convincing. To facilitate different behaviors, the authors implemented vortices, spiralling, damping, and bouncing, so both tornados and waterfalls can be animated. However, no information is shared between the particles, so a particle's behavior is not influenced by the others. Adding such interaction would definitely be a good extension.
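    The simplified per-particle update can be sketched as follows. This is an illustrative reduction in Python, not the paper's CM-2 implementation; the function name, particle representation, and constants are all invented here:

```python
def step(particles, dt=0.05, gravity=-9.8, damping=0.99, floor=0.0, bounce=0.6):
    """Advance each particle by the same independent rule -- that
    independence is what makes the method data parallel.  Illustrates
    the damping and bouncing behaviors mentioned above; each particle
    is a dict with 'pos' and 'vel' as [x, y, z] lists."""
    for p in particles:
        p['vel'][1] += gravity * dt          # gravity acts on y
        for k in range(3):
            p['vel'][k] *= damping           # velocity damping
            p['pos'][k] += p['vel'][k] * dt
        if p['pos'][1] < floor:              # bounce off the ground plane
            p['pos'][1] = floor
            p['vel'][1] *= -bounce
    return particles
```

    Because each particle is updated by the same rule with no inter-particle communication, the loop maps directly onto one-particle-per-processor hardware.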

    As this was implemented on a CM-2, a particle rendering method was also devised to work in a data parallel manner. This method also permits polygons and other objects to be rendered along with the particles. Processing is fast enough that near real-time viewing is possible on a quick vector display. This seems like a useful method, but it is unclear to me that the complexity of animating a waterfall or fire has been much reduced. Presumably, the animator could set specifics for groups of particles.



    Flow and Changes in Appearance

    This paper describes a method for imposing weathering upon modeled structures. Only weathering due to water is considered. In order to have realistic images, it is necessary to have some method of creating a weathered appearance on buildings. Weathering is a result of interaction between the building and the environment. Incident water is a major cause of weathering. The water can wash surfaces clean, and deposit dirt and other materials on other surfaces. The interaction between the water and the surfaces depends upon the geometry, type of material, and the weather.

    In order to faithfully recreate such effects, the water is modeled as separate particles, or water drops. This permits flow-like behavior to appear. The incident water particles are initialized to their locations and masses. Then a purely physics-based simulation is run, in which gravity, friction, absorption, and so on act upon the water and the surface materials. In particular, the simulation allows the water to pick up material from one surface and deposit it upon another. Similarly, streaking is possible, because once a water drop has been absorbed, all of the material the drop was carrying is deposited on the surface. Naturally, when and where other particles are absorbed depends upon how much water has already been absorbed.
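    The wash-and-deposit idea can be caricatured in a few lines. This is a toy sketch, not the paper's model; the strip-of-patches representation, function name, and probabilities are invented for illustration:

```python
import random

def weather(surface_dirt, drops=200, absorb_p=0.1, pickup=0.2, seed=0):
    """Toy sketch of the wash/deposit idea: each water drop starts at
    the top of a 1-D strip of patches, flows downward, picks up a
    fraction of the loose material it crosses, and dumps its whole
    load where it is absorbed (producing streak-like accumulations)."""
    rng = random.Random(seed)
    n = len(surface_dirt)
    for _ in range(drops):
        carried = 0.0
        for i in range(n):
            lifted = pickup * surface_dirt[i]   # wash this patch cleaner
            surface_dirt[i] -= lifted
            carried += lifted
            if rng.random() < absorb_p:         # constant absorption chance,
                surface_dirt[i] += carried      # for simplicity; deposit all
                break
        # drops that run off the bottom carry their load away
    return surface_dirt
```

    Even this caricature reproduces the qualitative effect: upper patches are washed clean while material accumulates lower down or runs off entirely.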

    This simulation is sufficient to create many of the desired weathering effects. The modeled structure is given as a series of patches, together with the connections between the patches, so that a continuous flow can be maintained across patch boundaries. This work allows weathering to be controlled semi-automatically, and significantly more quickly than would otherwise be possible. This seems like a very promising beginning. As the paper itself says, a comparison between the virtual and the real indicates that there is still much future work to be done.


    Timothy Frangioso

    Scott Harrison

    Leslie Kuczynski

    "An Image Synthesizer", by Ken Perlin

    This paper presents a "fast" method for creating images of textured solids. Additionally, the author introduces an intuitive editing language (the Pixel Stream Editing language, PSE) to be used in conjunction with the system.

    The main idea behind the texture composition is that a rich set of visual textures can be achieved through the composition of "well behaved", stochastic, non-linear functions. Each pixel in an image is operated on independently of the other pixels, in an iterative fashion, until all pixels in the image have been "visited". Because the pixels are independent, the algorithm is parallelizable.

    The author discusses the concept of the "solid texture". This seems to be a key concept in the system, whereby textures can be created without the use of traditional texture mapping techniques. The solid texture is created by applying the composited function (which varies over three dimensions) to each visible surface point (pixel) in the image, resulting in a surface texture for that object similar to what would appear if the object had been "sculpted" out of a material with that texture.

    Benefits of the proposed method over traditional texture mapping include (1) shape and texture become independent (i.e., the texture is not fitted to the surface, which means the surface can be changed without worrying about changes to the algorithm) and (2) the database is small, since only the functions corresponding to particular textures need to be stored rather than the textures themselves.

    Not much emphasis was given to how to compose the functions, or for that matter, what the appropriate functions were, except in the case of the "noise" function which was briefly introduced.

    "A Cellular Texture Basis Function", by Steven Worley

    Motivated by Perlin's noise basis function, the author of this paper proposes a set of related texture basis functions to be used in procedural texturing. The functions are based on "scattering" feature points throughout three dimensional space and building a scalar function based on the distribution of the points.

    A key point is how the feature points are distributed. The author suggests that the simplest distribution is a Poisson distribution, due to properties such as the independence of the feature points. The method is simple and consists of computing the distance to the nth-closest feature point from any point in three dimensional space. The result is a set of functions Fn(x), where F1(x) is the distance to the closest feature point, F2(x) the distance to the second closest, and so on. The Fn(x)'s return scalar values which can then be mapped to colors or bumps. Additionally, functions composed of linear combinations of the Fn(x)'s can be used to produce additional interesting results.

    The computation of the Fn(x)'s is rather straightforward. The author divides three dimensional space into cubes and then computes the number of feature points within a cube and their distances from the evaluation location x. Since a neighboring cube could potentially contain closer feature points, all neighboring cubes are checked as well.
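    A minimal version of this lookup might read as follows. It makes the simplifying assumption of exactly one feature point per unit cube (the paper draws per-cube counts from a Poisson distribution), and the function name and seeding scheme are invented:

```python
import math
import random

def worley(x, y, z, n=2, seed=0):
    """Sketch of the cellular basis: one feature point is placed
    deterministically in each unit cube, and the function returns the
    distances [F1, ..., Fn] to the n closest feature points.  The
    3x3x3 neighborhood of cubes around the query point is searched,
    since a neighboring cube may hold a closer point."""
    cx, cy, cz = math.floor(x), math.floor(y), math.floor(z)
    dists = []
    for i in range(cx - 1, cx + 2):
        for j in range(cy - 1, cy + 2):
            for k in range(cz - 1, cz + 2):
                # deterministic pseudo-random feature point for this cube
                rng = random.Random(hash((i, j, k, seed)))
                fx, fy, fz = i + rng.random(), j + rng.random(), k + rng.random()
                dists.append(math.dist((x, y, z), (fx, fy, fz)))
    dists.sort()
    return dists[:n]
```

    Mapping F1 to color gives the familiar cobblestone look; F2 - F1 highlights cell boundaries, one of the linear combinations mentioned above.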

    The results are impressive and the method elegant.

    "Generating Textures on Arbitrary Surfaces Using Reaction-Diffusion", by Greg Turk

    Turk presents a method for texture creation that is biologically based. The method relies on reaction-diffusion, the process by which two or more chemicals diffuse at different rates over a surface and react with each other to form stable patterns (such as spots, stripes, etc.). In addition to simulating this process to produce textures, Turk presents a method of producing more complex patterns from cascaded reaction-diffusion processes. This makes it possible to simulate complex patterns such as the spots of a leopard or a giraffe.

    In addition to defining textures, Turk also introduces a method by which reaction-diffusion textures are created to match the geometry of an object. This is done by enclosing the object in a mesh and then applying the reaction-diffusion technique directly to the mesh. Conceptually, the mesh can be visualized as a collection of cells, and the reaction-diffusion technique is applied to neighborhoods of cells (i.e., chemicals diffuse into neighboring cells).
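    A small simulation in this spirit can be written directly from Turing's two-chemical system, here on a plain square grid rather than Turk's surface mesh; the grid size, constants, and initial conditions below are illustrative guesses, not values from the paper:

```python
import random

def turing_rd(n=16, steps=100, s=1 / 128, da=0.125, db=0.03125, seed=1):
    """Illustrative two-chemical Turing reaction-diffusion on an n x n
    grid with wraparound neighbors:
        a += s*(16 - a*b)       + da * lap(a)
        b += s*(a*b - b - beta) + db * lap(b)
    beta varies slightly per cell; the instability this seeds is what
    eventually forms spot-like patterns in chemical b."""
    rng = random.Random(seed)
    a = [[4.0] * n for _ in range(n)]
    b = [[4.0] * n for _ in range(n)]
    beta = [[12.0 + rng.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(n)]

    def lap(g, i, j):  # discrete Laplacian, toroidal wraparound
        return (g[(i - 1) % n][j] + g[(i + 1) % n][j]
                + g[i][(j - 1) % n] + g[i][(j + 1) % n] - 4 * g[i][j])

    for _ in range(steps):
        a2 = [[a[i][j] + s * (16 - a[i][j] * b[i][j]) + da * lap(a, i, j)
               for j in range(n)] for i in range(n)]
        b2 = [[max(0.0, b[i][j] + s * (a[i][j] * b[i][j] - b[i][j] - beta[i][j])
                   + db * lap(b, i, j)) for j in range(n)] for i in range(n)]
        a, b = a2, b2
    return b  # thresholding b yields the spot pattern
```

    Turk's contribution is to run exactly this kind of update on an irregular mesh of cells fitted to the object, with diffusion between whichever cells happen to be neighbors.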

    Basically, what is presented is an alternative to texture mapping. The need exists due to flaws in traditional texture mapping techniques: for example, a texture defined in one domain can be distorted when mapped to another (texture coordinates (u,v) must be mapped to the polygon vertices of the object), and edges must be matched across multiple contiguous patches.

    A problem with this method is that it is computationally intensive and requires the use of a powerful computer to allow any type of texture experimentation.


    Geoffry Meek

    Romer Rosales

    An Image Synthesizer

    Ken Perlin
    (Article Review)

    This paper shows an approach to the design of "realistic" computer-generated imagery algorithms. All of this is done in a high-level programming environment the author created, which works at the pixel level.

    This technique basically builds natural visual complexity by composing non-linear functions which are enhanced with stochastic effects.

    They realized that combinations of well behaved stochastic functions produced a very rich set of visual textures.

    Trying different function combinations again and again by hand was impractical, so they developed the PSE language: a filter which converts input images to output images by running the same program on every pixel.
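    The filter idea itself is tiny; a sketch (with an invented name, and a grey-scale image as nested lists) might look like:

```python
def pse_run(kernel, image):
    """Sketch of the Pixel Stream Editor idea: the same small program
    (kernel) runs independently on every pixel, given the pixel value
    and its coordinates.  Since no pixel depends on another, execution
    order is irrelevant and the loop parallelizes trivially."""
    return [[kernel(px, x, y) for x, px in enumerate(row)]
            for y, row in enumerate(image)]

# example kernel: gamma-correct each grey value in [0, 1]
out = pse_run(lambda p, x, y: p ** 0.4545, [[0.0, 0.25], [0.5, 1.0]])
```

    Swapping in a different one-line kernel gives a different effect immediately, which is the rapid-iteration workflow the paper is after.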

    Earlier researchers had worked with procedural textures using functions on a 2D domain.

    This paper extends this to texture functions (functions that represent texture and can be assigned to surfaces) on a three-dimensional domain (what Perlin calls a space function).

    Such a function may represent a solid material (instead of using a surface to represent the texture).

    When this function is evaluated at the visible surface points of an object, we obtain the surface texture that would have occurred had we sculpted the object out of that material. Perlin calls this a solid texture.

    In this way the texture does not need to be fit onto the surface. If the shape of the surface changes, the appearance of the solid material changes accordingly.

    They mention that this approach is a superset of 2D texture mapping techniques; it is like projecting from 3D to 2D. They developed some primitive stochastic functions to improve the visual complexity, and perhaps the realism, of the objects. One such primitive is noise, which is invariant under rotation and translation; several different noise functions were developed.

    One implementation of these functions defines an integer lattice as the set of integer points in x, y, z space. Each lattice point defines a gradient (with respect to x, y, z) and a pseudo-random value. The function at points that are not on the integer lattice is interpolated using a cubic polynomial.
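    A sketch of such a lattice noise follows. The helper names and the hash-based gradient table are invented, and the smoothstep cubic 3t^2 - 2t^3 stands in for whichever interpolating polynomial the implementation actually used:

```python
import math
import random

def _grad(i, j, k):
    """Deterministic pseudo-random unit gradient at a lattice point."""
    rng = random.Random(hash((i, j, k)))
    g = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    m = math.sqrt(sum(c * c for c in g)) or 1.0
    return [c / m for c in g]

def _fade(t):
    # cubic ease curve 3t^2 - 2t^3: zero derivative at lattice points
    return t * t * (3 - 2 * t)

def noise(x, y, z):
    """Gradient lattice noise: dot each corner gradient with the offset
    to the query point, then blend the eight corners with the cubic."""
    xi, yi, zi = math.floor(x), math.floor(y), math.floor(z)
    xf, yf, zf = x - xi, y - yi, z - zi
    u, v, w = _fade(xf), _fade(yf), _fade(zf)

    def dot(i, j, k):
        g = _grad(xi + i, yi + j, zi + k)
        return g[0] * (xf - i) + g[1] * (yf - j) + g[2] * (zf - k)

    def lerp(a, b, t):
        return a + t * (b - a)

    return lerp(lerp(lerp(dot(0, 0, 0), dot(1, 0, 0), u),
                     lerp(dot(0, 1, 0), dot(1, 1, 0), u), v),
                lerp(lerp(dot(0, 0, 1), dot(1, 0, 1), u),
                     lerp(dot(0, 1, 1), dot(1, 1, 1), u), v), w)
```

    Because the gradient dotted with a zero offset vanishes, the noise is exactly zero at every lattice point, and the fade curve keeps the blend smooth across cell boundaries.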

    Dnoise provides a way of specifying normal perturbation: the normal at every point can be modified with a noisy function to obtain new effects, as shown in the paper's figures.

    In the examples they assume that the color and the normal at each point of the image are known.

    They give an example where noise is used to generate turbulence, which in turn is used to simulate marble; turbulence is also used to create fire.
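    Turbulence is just a sum of noise octaves. The sketch below substitutes a simple trilinear value noise for Perlin's gradient noise, and the names and constants in the marble formula are invented for illustration:

```python
import math
import random

def _noise(x, y, z):
    """Stand-in smooth noise (trilinear value noise); any band-limited
    noise basis would serve here."""
    xi, yi, zi = math.floor(x), math.floor(y), math.floor(z)
    xf, yf, zf = x - xi, y - yi, z - zi

    def val(i, j, k):
        # deterministic pseudo-random value in [-1, 1] per lattice point
        return random.Random(hash((i, j, k))).uniform(-1.0, 1.0)

    def lerp(a, b, t):
        return a + (b - a) * t

    return lerp(
        lerp(lerp(val(xi, yi, zi), val(xi + 1, yi, zi), xf),
             lerp(val(xi, yi + 1, zi), val(xi + 1, yi + 1, zi), xf), yf),
        lerp(lerp(val(xi, yi, zi + 1), val(xi + 1, yi, zi + 1), xf),
             lerp(val(xi, yi + 1, zi + 1), val(xi + 1, yi + 1, zi + 1), xf), yf),
        zf)

def turbulence(x, y, z, octaves=5):
    # sum of |noise| at doubling frequencies and halving amplitudes
    t, f = 0.0, 1.0
    for _ in range(octaves):
        t += abs(_noise(x * f, y * f, z * f)) / f
        f *= 2
    return t

def marble(x, y, z):
    # marble veins: a sine along x with turbulence perturbing the phase
    return math.sin(x + 4.0 * turbulence(x, y, z))
```

    The abs() in each octave introduces the sharp creases that make the summed noise read as turbulent rather than merely blurry.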

    This approach does not offer more efficient algorithms, but rather the ability to try approaches quickly. We normally want to know what the image will look like before doing any other work.

    They mention that it is possible to use the same idea of composition with stochastic functions for motion and shape modeling.

    They applied this technique to model basic stochastic motion, appropriate whenever a regular, well-defined macroscopic motion contains some stochastic component. They also created interesting stochastic shapes with this approach.

    It is notable how easily some of these effects were created; many of the effects shown would have been much more difficult to generate using other techniques. In general the idea is simple but useful, although I think many parameters must be tested before achieving the expected result. It seems easy to apply the same solid texture to different objects, but I am not sure how that would work in practice, specifically how similar the effects would be. The solid-texture method can be very useful in texturing algorithms.

    Generating Textures on Arbitrary Surfaces Using Reaction-Diffusion

    Greg Turk
    (Paper Review)

    This paper discusses procedural methods for texture generation. The method is biologically motivated: it is based on a chemical mechanism for pattern formation called reaction-diffusion. In general, this refers to how several chemicals that diffuse and react with each other can form stable patterns. The idea is to create complex patterns by taking an initial pattern produced by one chemical system and then modifying it with new chemical systems.

    The pattern is generated using a method for fitting texture to surfaces that is also introduced in this paper: rather than being generated on a square grid, the pattern is synthesized directly on the surface. In general, the paper shows how these textures are generated so as to match a given surface geometry.

    The technique generates a mesh over the surface, which is then used for the texture generation. This avoids the complex task of assigning texture coordinates to a complex surface.


    Lavanya Viswanathan

    1) K. Perlin. An image synthesizer. In Computer Graphics Proceedings, ACM SIGGRAPH, pages 287--296, 1985.

    In this paper, the author proposes a language and a platform for interactively synthesizing highly realistic computer generated imagery. Visual complexity is built up by composing non-linear functions, together with primitives for creating stochastic effects. The author goes on to claim that the algorithms in this image synthesizer are fast, realistic, and asynchronously parallelizable. These are mighty claims in the field of texture production, but the author substantiates them by noting that, in all the cases shown in the paper, the low resolution interactive design loop took between 15 seconds and 1 minute per iteration. This is a much better performance statistic than the one mentioned in the G. Turk paper, where each texture took several hours to generate. These statistics sound very impressive and, as the author says, give the user the freedom to try out new approaches quickly and painlessly.

    2) G. Turk. Generating textures on arbitrary surfaces using reaction-diffusion. In Computer Graphics Proceedings, ACM SIGGRAPH, pages 289--298, 1991.

    This paper describes a method of generating textures on a surface using an idea from developmental biology called reaction-diffusion. This method is used by developmental biologists to investigate how the cells of an embryo arrange themselves into particular patterns or segments. The author adapts this technique to the field of computer graphics very nicely to create a powerful alternative to texture mapping. Reaction-diffusion is a process in which two or more chemicals diffuse at unequal rates over a surface and react with one another to form stable patterns. Cascades of such reaction-diffusion systems can be used to generate more complex textures. One advantage of this technique is that the textures are developed directly on the surface of the object under consideration, obviating the need to map the texture onto the surface after generation. This results in greater efficiency and a reduction in the time required for texture production. Since the time scales involved in texture generation are large, such an improvement is significant.

    3) S. Worley. A Cellular Texture Basis Function. In Computer Graphics Proceedings, ACM SIGGRAPH, pages 291--294, 1996.

    In this paper, the author proposes a basis function approach to the problem of texture generation. This basis function complements the Perlin noise basis function used earlier and is based on a partitioning of space into a random array of cells. Further, this new basis function requires no precalculation or table storage, reducing both the time and space requirements of the algorithm and thereby increasing efficiency. The results the author shows for this method are very impressive.


    Stan Sclaroff
    Created: March 27, 1997
    Last Modified: March 27, 1997