BU GRS CS 680
Graduate Introduction to Computer Graphics


Readings for January 28, 1997


Participants


Commentary

Alia Atlas


About Feature-Based Volume Metamorphosis

It seems to me that this paper presents a solid advancement in 3D morphing, if a rather straightforward one. There appear to be two clear innovations. The first is the extension to 3D of simple types of influence fields, with the addition of scaling factors that allow a feature's extent to be captured in each dimension. The second innovation exploits "the exponential dependence of the color of a ray cast through the volume on the opacities of the voxels it encounters": instead of a linear interpolation weight function, an exponential weight function is used, to more accurately capture the appropriate colors. The final piece is an approximation algorithm which makes these calculations feasible in reasonable time.
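The exponential-weighting idea can be sketched in a few lines. This is a hedged illustration, not the paper's exact formula: here the non-linear dissolve interpolates optical depth (tau = -log(1 - alpha)) linearly, so accumulated transparency along a ray blends geometrically rather than arithmetically; the clamping constant is illustrative.

```python
import math

def linear_dissolve(a0, a1, t):
    """Naive cross-dissolve of two voxel opacities at time t in [0, 1]."""
    return (1.0 - t) * a0 + t * a1

def exponential_dissolve(a0, a1, t):
    """Interpolate optical depth tau = -log(1 - alpha) linearly instead,
    compensating for the exponential dependence of rendered color on
    opacity along a ray (the 1e-9 guards against log(0) at full opacity)."""
    tau0 = -math.log(max(1.0 - a0, 1e-9))
    tau1 = -math.log(max(1.0 - a1, 1e-9))
    return 1.0 - math.exp(-((1.0 - t) * tau0 + t * tau1))

# A nearly opaque voxel fading out: the exponential dissolve keeps it
# denser for longer than the linear one does.
print(linear_dissolve(0.99, 0.0, 0.5), exponential_dissolve(0.99, 0.0, 0.5))
```

With equal endpoints the two schemes agree; they diverge exactly where one volume is much denser than the other, which is where linear cross-dissolving washes out.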



About View Morphing

This paper takes advantage of the notion that all 2D pictures of 3D items can be considered projections, with a given camera view and focal length. Using this observation, the innovation is the ability to interpolate a new view from two images. Feature matching can be used to determine how to prewarp the images; of course, if the actual projection details are given, that isn't necessary. Because feature matching is adequate, a view can be interpolated and morphed between any two pictures. Surrounding this idea, the paper gives nice coverage of the basic concepts of morphing. The examples of image defects that this fixes are clear, as is the new problem of holes, which is suggested as a topic for future work.
Timothy Frangioso

Scott Harrison

Seitz and Dyer -- View Morphing

I am not too familiar with the subject of morphing, but from the examples Seitz and Dyer give, their technique certainly appears to be an improvement upon current morphing methods. That an intermediate image can be produced without knowing anything of the 3D shape of the morphing object still seems counterintuitive to me; I will need to study their mathematics a bit more to fully understand it. I also initially had some confusion concerning their use of control points in the postwarping stage of the view morphing algorithm: on the first read-through, it seemed as though Seitz and Dyer needed to interactively specify control points for many intermediate images, when they really only need to specify them for the I0, I1, and I0.5 images. The algorithm itself takes care of the rest; the process is not nearly as cumbersome as it first appeared.

Lerios, Garfinkle and Levoy -- Feature-Based Volume Metamorphosis

Unfortunately, I could not obtain a printout of this article, and so my reading of it was incomplete. (For some strange reason, AcroReader refused to pass it on to the printers.)


Leslie Kuczynski

View Morphing -- S. M. Seitz and C. R. Dyer

Theme: This paper describes an extension to existing image morphing techniques, view morphing, that ensures a natural transformation between the objects being morphed. The idea is to obtain a realistic transition without knowledge of 3D shape. Several examples of the technique are presented, as well as examples of morphs without the technique, where we can easily see distortions (or geometric bending) in the transitions.

Main Points: The focus of the paper is on creating natural transitions between images rather than on synthesizing arbitrary views of an object or scene. The algorithm presented begins with (1) prewarping two images, (2) computing a morph between the prewarped images using conventional techniques and (3) postwarping each in-between image produced from the warp.

In the prewarp phase the images are warped so that their image planes are parallel. That is, the views are brought into alignment without moving the optical centers of the cameras. The resulting images are then morphed; this results in an image with a new optical center. The postwarping then transforms the image plane of the new view to the desired position and orientation. Prewarping ensures that the morphed image undergoes a single-axis rotation, and this is what eliminates the geometric distortions found in images that are simply morphed.
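The three stages can be sketched for a single point correspondence. This is a minimal illustration under strong assumptions: the prewarp homographies H0 and H1 and the postwarp homography Hs are taken as given (computing them from the projection matrices or from feature matches is the substance of the paper), and identity matrices stand in for them in the usage example.

```python
def apply_homography(H, p):
    """Apply a 3x3 homography (list of rows) to an image point p = (x, y)."""
    x, y = p
    d = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / d,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / d)

def view_morph_point(p0, p1, H0, H1, Hs, s):
    """Prewarp both points, linearly interpolate (the conventional morph),
    then postwarp the in-between point to the desired view."""
    q0 = apply_homography(H0, p0)          # prewarp image 0
    q1 = apply_homography(H1, p1)          # prewarp image 1
    q = ((1 - s) * q0[0] + s * q1[0],      # linear morph between
         (1 - s) * q0[1] + s * q1[1])      # the parallel views
    return apply_homography(Hs, q)         # postwarp to the new view

# With identity pre/postwarps (already-parallel views), the halfway view
# is plain linear interpolation of the correspondence.
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(view_morph_point((0.0, 0.0), (10.0, 4.0), I, I, I, 0.5))  # -> (5.0, 2.0)
```

The point of the prewarp is visible in the structure: the linear interpolation in the middle is only shape-preserving because it is applied between parallel views.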

Assumptions: The authors assume that some level of pixel correspondence is provided. Their system allows this to be done interactively by a user.

Problems: A number of issues are discussed in relation to factors that will produce less than desired results. Among those discussed were cases in which components of the morphed image are visible in only one of the two original images; this results in holes or folds, depending on which image is missing the component. Additionally, they discussed the effects of processing the image a number of times (prewarp, morph, postwarp), which results in excess image sampling and degraded results.

Editorial: Interesting paper. The ideas were presented in a structured fashion and well defined. However, the authors claim their technique works well for unknown objects or unknown scenes, and I did not see evidence of this in the results. This claim also seems to conflict with the assumption that a correspondence between pixels is known. Additionally, it was mentioned (twice) that aggregation of the three steps would produce better results, but this was not explained or expanded upon. The authors do not discuss changes in lighting and illumination, but perhaps this issue is folded into the actual morphing, which is not the focus of the paper.

Possible Uses: If this could be done in real-time, some possible application areas that might find this technique useful could be in the use of multi-view video (interpolation of scenes between cameras) and in lost video packet client-side recovery. In both cases manual insertion of point correspondence would not be necessary because assumptions could be made due to the nature of video. Of course there will always be a market for this in the entertainment industry.

Feature-Based Volume Metamorphosis -- A. Lerios, C. D. Garfinkle and M. Levoy

Theme: This paper describes a technique to metamorphose 3D objects using a volume-based approach. The idea is, instead of morphing two images of 3D objects, to morph the objects themselves.

Main Points: The technique is structured into two separate steps, (1) warping the two input volumes and (2) blending the resulting warped volumes. The claim is that the approach frees one from the difficulties of dealing with lighting and illumination effects as well as visibility effects.

The authors choose to model the 3D object by its volume rather than geometric primitives because they state that volume is independent of geometry and that a geometric description can be converted to a volume representation.

The warping is done using a feature-based approach (they extend the 2D work of Beier and Neely) whereby a collection of element pairs (correspondences between the two volumes to be morphed) defines the overall volume correspondence. Additionally, their system depends on user interaction (identification of element pairs), and they provide a toolkit of shapes with which a user can identify the element pairs. They describe the shapes as acting like magnets which exert different forces, thus giving the user different tools with which to describe different types of shape transformations. The element pairs act together to determine the "form" of the morph: each pair of elements defines a field that extends throughout the volume, and the collection of elements defines a collection of fields. Warping is performed using interpolation and a weighted-average scheme over the element pairs.
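The weighted-average scheme can be sketched with point elements only. This is a hedged simplification: the paper's elements also include segments, rectangles, and boxes, its influence fields carry per-dimension scaling factors, and the falloff constants a and b below are illustrative, not the paper's.

```python
import math

def warp_point(p, element_pairs, a=0.5, b=2.0):
    """Move point p by the weighted average of the displacements suggested
    by each (source, destination) point-element pair; an element's weight
    falls off with distance from its source point, like a magnet's pull."""
    wx = wy = wz = wsum = 0.0
    for src, dst in element_pairs:
        w = (1.0 / (a + math.dist(p, src))) ** b
        wx += w * (dst[0] - src[0])
        wy += w * (dst[1] - src[1])
        wz += w * (dst[2] - src[2])
        wsum += w
    return (p[0] + wx / wsum, p[1] + wy / wsum, p[2] + wz / wsum)

# Two pairs: one pulls the origin one unit along x, the other anchors
# (10, 0, 0) in place; a point at the anchor barely moves.
pairs = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)),
         ((10.0, 0.0, 0.0), (10.0, 0.0, 0.0))]
print(warp_point((0.0, 0.0, 0.0), pairs))
print(warp_point((10.0, 0.0, 0.0), pairs))
```

Note that every element influences every point, which is exactly the cost the paper's adaptive-grid approximation attacks.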

Two approaches to blending are presented: (1) linear cross-dissolving and (2) non-linear cross-dissolving. The linear approach is not sensitive enough to dramatic changes in opacity, and though a non-linear approach (inverse exponential) solves this problem, it is inordinately slow. Their solution was to subdivide the warped volume into a coarse 3D rectangular adaptive grid, where the granularity of the grid depends on the linearity of the warping.

Assumptions: The assumption is made that feature points are pre-defined.

Editorial: Interesting paper. Not too many implementation details. I would not like to try to implement this from this paper. Although the authors claim that 3D morphing is independent of changes in visibility and illumination, I did not see any examples. Their example results were images that were in the same pose with similar shapes.

Ideas: Instead of morphing between two volumes, perhaps you could provide the user with a single volume in a sort of "spline suit" which could be tugged and manipulated to deform the original volume into a new one. Or, if you had two different volumes with one in a "spline suit" a certain amount of pushing or pulling would pull or push the volume into the domain of the other.


Shih-Jie Lin
This week we read two papers about morphing: View Morphing and Feature-Based Volume Metamorphosis.

(1) View Morphing

View morphing is a simple extension of image morphing that uses basic principles of projective geometry; no knowledge of 3D shape is required. View morphing requires two images of the same object, their respective projection matrices, and a correspondence between pixels. Bending, holes, and folds can arise with image morphing techniques in the in-between images. View morphing can avoid these types of distortion if visibility is constant; changes in visibility cause ghosting effects even with view morphing techniques. The results of the view morphing technique really surprised me, but if we can extend the technique to handle extreme changes in visibility, we can get more accurate rotations.

(2)Feature-based Volume Metamorphosis

This paper discusses how 3-dimensional metamorphosis is applied to volume-based representations of objects. The morphing method, volume morphing, described in the paper has two steps: first, warping the two input volumes; second, blending the resulting warped volumes. The warping is feature-based and allows fine user control, which ensures realistic-looking intermediate objects. The blending guarantees smooth transitions in the renderings. Feature elements are used to identify the features of an object; in this paper there are four types of elements: points, segments, rectangles and boxes. Using feature-based volume morphing, we get fine user control, smooth morphs, faster warping, and a correction of the ghosting problem of image metamorphosis. If we can improve the warping method, we can get finer user control and smooth interpolation of the warping function across the volume. We can also add more feature elements to get smoother and more accurate 3-dimensional morphs.
Geoffry Meek

Paper #1
View Morphing

The goal of the paper is to produce realistic view transformations using only 2D images as input. The paper accurately describes the problems with traditional 2D image morphing where serious perceptual 3D flaws can happen when the source and target are similar images. View morphing is very clever in the way the authors use two images of the same object to be morphed, but in different spatial orientations. From these two images, the authors essentially create 3D information that produce realistic view transformations. My biggest problem with this is the creation of information. Although the views will "look" realistic, they will not be realistic. For example, this may not be a good way for police to create a profile image from two full-face images, because distinguishing characteristics may be lost (scars, and other details hidden in a full-face shot). But if the integrity of the information in the image is not important, then this method accomplishes the intended goal.

Paper #2
Feature-Based Volume Metamorphosis

The goal of the paper is to find a good way of doing image morphing. The authors point out that the first decision is 2D vs. 3D. 3D wins easily because, simply, there is just more information about the image; in particular, 2D images contain no explicit spatial information, whereas 3D models do. The next step was to decide what type of 3D model to use, volumetric or geometric. I think that choosing the volumetric model is a well-suited solution for morphing because volumetric data is more-or-less raw data, stacked into a 3D grid, that can be easily processed for morphing. That said, I think a geometric model would be good at the scaling and spatial transformations required in morphing.

The authors' approach to morphing is a two-step process: warping and blending. Warping for the authors is a manually controlled step; it is basically an artistic process of choosing and mapping feature elements between the source and target models. The defined feature elements are points, segments, rectangles, and boxes. My feeling is that having a well-defined set of feature elements is necessary, but an interesting twist would be an arbitrary volumetric feature element for which the user could specify the mapping between source and target. This may be used more for art's sake than realism (I realize that realism is one of the authors' goals), and it may help in the "hard-to-morph" situations, but it would be an extremely tedious computation.

Before moving on to blending, the user interface for controlling the morphing seems very good, but my question would be, "What happens when we try to automate this stage?" Automatic selection of feature element pairs may not yield good results for extreme cases (a shoe and a human head), but for similar featured models (human - human) it may be possible.

Blending is an automatic stage where the source and the target models, already warped, are cross-dissolved in 3D space. This seems effective, but I wasn't clear on how the color mixture and remapping is implemented.

As a final step the authors introduced some nifty speedup techniques for rendering the morphs.


Romer Rosales

View Morphing

Article Review

This paper discusses a technique for creating morphed views based on two different views of an object. Projective geometry is used to produce image morphing which, according to its authors, correctly handles 3D projective camera and scene transformations, even though it only uses 2D image transformations.

This is basically an extension to current techniques that can handle changes in viewpoint. It can give the effect of interpolating view, color, and shape.

Basically, it works by first prewarping the two images prior to computing the morph and then postwarping the interpolated images; this step needs to be controlled by using user-defined points (features). When certain considerations are taken, the method can preserve 3D shape, giving the idea of a rotation and translation in 3D between the views in the two initial images.

In general I think that it can be very useful to compute views or other 3D effects from basic 2D information. It can reduce computational time, and formulation and implementation are simplified with respect to a 3D representation. One good property is that it can be applied to any 2D image, since it does not use 3D information. Also, a set of 3D effects or shape deformations can be achieved by defining in different ways what a view means.

I do not know how it handles the problem of lighting; I think that the interpolated views do not get the correct light effect. Changes in visibility and ghosting are other problems. In general the results were really good; a 3D effect is achieved, at least in the test cases.

Feature-Based Volume Metamorphosis

Article Review

This paper presents a technique for morphing objects using their 3D volume-based models. It also discusses some topics related to volume morphing in general.

It defines two main components: warping, an extension of a known feature-based technique that allows user control, and blending, used for smooth transitions in the rendering.

3D morphing (with respect to 2D) is independent of the viewing and lighting parameters. It can also handle changes in illumination and visibility. Volume morphing is independent of the model's geometric primitives and topology. Geometric descriptions can be converted to volume representations in an easy way; the opposite is not always efficient.
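The geometry-to-volume direction can be illustrated with a toy voxelization. This is a hedged sketch, not the paper's conversion method: a binary inside/outside test of an implicit sphere on a unit-spaced grid; a production system would band-limit the boundary to avoid aliasing.

```python
def voxelize_sphere(n, center, radius):
    """Sample an implicit sphere on an n x n x n grid of unit-spaced
    voxels: opacity 1.0 inside the surface, 0.0 outside."""
    cx, cy, cz = center
    vol = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for x in range(n):
        for y in range(n):
            for z in range(n):
                if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2:
                    vol[x][y][z] = 1.0
    return vol

# An 8^3 volume containing a sphere of radius 3 at the grid center.
vol = voxelize_sphere(8, (4, 4, 4), 3)
print(vol[4][4][4], vol[0][0][0])  # -> 1.0 0.0
```

The reverse direction (extracting geometry from a volume) is the harder problem the reviewer alludes to, which is one reason the authors stay in the volume domain.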

With respect to realism and smoothness in the transition or intermediate volumes, it is so difficult to match features from the source volume to the target volume that user interaction is necessary at this level.

The solution presented here is based on two steps: warping and blending. I could not follow them in detail, but in general: warping uses a feature-based approach based on previous 2D work with the same approach. It uses a pair of elements per feature; the one on the source volume should be transformed to the other on the target volume, and many features are matched to obtain a good morph. Each pair acts like a magnet, generating a field; the interacting fields shape the new volume. Blending works on the mismatches produced by the warping: in order to produce a smooth transformation, they have to be smoothly faded in/out in the sequence. A full 3D approach is used in which the volumes themselves are cross-dissolved. Linear and non-linear time functions for interpolation are discussed; the non-linear approach (using a sigmoid curve) is then used to compensate for the exponential dependence of rendered color on opacity.

Although it is computationally expensive, I think the approach can work very well on complicated surfaces. The number of feature pairs defined is going to influence the quality of the morphing, but it can create performance problems, due to the fact that each point in the warped volume is influenced by all the elements.

I could not see how the lighting and occlusion problems were solved in detail. I think that light is considered a separate, independent element in the model and is applied to the intermediate states in the process.

I think that the errors and problems found in previous 2D approaches are solved with this technique. I also think that it could be very useful to define an algorithm to identify correspondences between volumes and automatically generate the features that are going to be matched. This would directly deal with the problem of perception in the visual system.


Lavanya Viswanathan

1) S. Seitz and C. R. Dyer, View morphing. In Computer Graphics Proceedings, ACM SIGGRAPH, pages 21--30, 1996.

This paper describes a method of morphing called view morphing. Current image morphing techniques create effective image transitions between a source image and a target image, but they do not ensure that these transitions appear natural. The authors demonstrate this fact by considering the problem of producing a morph between two 2D perspective projections (i.e., plane images) of a clock image. Using standard techniques of image morphing, the intermediate morphs that are produced are distorted, curved images. Thus, the morph distorts the shape of the original image and the transition does not appear natural. To overcome this drawback, the authors suggest a morphing technique that operates in three stages: prewarping, creating the morph, and postwarping the given image.

One main advantage of this technique is that it needs no information about the 3D shape of the object in the image. It operates completely on the information contained in the two dimensional source and target images. However, the authors do say that the algorithm requires that the projection matrices of the source and target images be available; thus in a way, it requires that some information regarding the position of the camera for each of the two input images be known. For a general real world application, this information may not be readily available, and ideally one would like the morphing of two images to be possible even without this knowledge.

Since no knowledge of 3D shape is required, the algorithm can be applied to drawings and artificially rendered scenes as well and this is a very desirable feature.

However, view morphing is very sensitive to changes in visibility, which must remain constant across all the images presented as inputs to the algorithm for best performance.

2) A. Lerios, C. Garfinkle and M. Levoy. Feature-based volume metamorphosis. In Computer Graphics Proceedings, ACM SIGGRAPH, pages 449--456, 1995.

This paper describes a method of performing feature-based volume morphing. Volumetric data sets are more accurate than geometric representations because the conversion of volumetric spatial information into geometric primitives invariably involves some error, and if these primitives are used for morphing, the errors could propagate, which would be undesirable. Further, volume-based morphing requires no representation of object geometries, and hence no restrictions need to be imposed on the objects for successful morphing. The problem of volume morphing is one of producing a smooth and realistic transition between a source volume and a target volume such that all essential features of the source and target are preserved. Feature-based morphing ensures that certain features of the source volume are morphed onto corresponding features of the target volume. For instance, if the source volume is a dart and the target volume is a plane, it ensures that the nose of the dart is mapped onto the nose of the plane. Thus, feature-based morphing requires the user to specify the correspondences between features of the source and target volumes.

Although the technique described in this paper is very interesting and useful, it requires extensive input from the user. This could be considered both a feature and a bug. The advantage of this is that the user can control various aspects of the morph and thus more flexibility is ensured. On the other hand, it could be painstakingly long and tedious for a user to provide all the information required.

The algorithm proposed needs a long running time to produce the desired results. It took 24 hours to produce one of the morphs shown in the paper. But the authors say that this time can be reduced by a factor of 50 by using an effective and adaptive approximation.


Stan Sclaroff
Created: Jan 21, 1997
Last Modified: Jan 30, 1997