BU GRS CS 680
Graduate Introduction to Computer Graphics


Readings for January 28, 1997


Participants


Commentary

Alia Atlas

Progressive Meshes

This seemed like a very progressive paper, with a good idea carried to its logical conclusions. Basically, the authors have a simplification method which can transform any given mesh either into the lowest complexity desired, or into one just simple enough to retain the innate character of the mesh. The key is that each simplification step is invertible, and the inverse of each step is stored as a "vsplit" record. The simplified mesh can be refined with these vsplit records to form intermediate meshes, and ultimately to recover the original mesh. Moreover, applying a vsplit record depends only upon certain vertices already existing in the mesh; beyond that, the vsplit records can be applied in any order.
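
A minimal sketch of what a vsplit record might look like and how a stream of them refines a coarse mesh. The field names and the mesh operation apply_vsplit are hypothetical; the paper's actual record also stores attribute values:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class VSplit:
        vs: int                                  # existing vertex to split (must already be in the mesh)
        vl: int                                  # left neighbor vertex bounding the new faces
        vr: int                                  # right neighbor vertex bounding the new faces
        vt_position: Tuple[float, float, float]  # position of the vertex added by the split

    def refine(mesh, vsplits, n):
        """Apply the first n vsplit records to a coarse base mesh."""
        for rec in vsplits[:n]:
            mesh.apply_vsplit(rec)   # assumed mesh operation: adds one vertex and up to two faces
        return mesh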

These features provide the Progressive Mesh representation with many applications. Naturally, transmitting the simplified mesh and then the vsplit records allows the receiving computer to start with the simplified mesh and display more detail as it is received, in a smooth manner. Similarly, the mesh can be selectively enhanced in user-specified areas to provide more detail. The paper also provides a nice method of geomorphing from one mesh to another, and properly handling the scalar attributes when doing so.



Simplification Envelopes

This paper basically presents a method of creating an envelope around a given model. This is a tool for use with approximate simplification, where the approximate simplified model must stay within certain error bounds. The simplification envelopes enclose the region of space within which the error bounds are satisfied. Once the simplification envelope is calculated, it is easy to test approximate simplified models to see whether they lie within it; that is simply a familiar polygonal surface intersection problem.

Once the paper has developed the simplification envelopes, it presents two different algorithms which incorporate them and perform local or global approximations. It's hard to judge how useful the simplification envelopes actually are, since I'm not sure how hard the general problem of testing the error bounds on an approximate model is. It does seem that this method calculates the boundary between an acceptable region and a violating region and then stores that information in an easily accessible and manageable way for later use.


Timothy Frangioso

"Progressive Meshes" by Hugues Hoppe

This paper introduces a new representation, the Progressive Mesh (PM), which supports storing and transmitting arbitrary triangle meshes. The goal of this technique was to address five major problems: mesh simplification, level-of-detail approximation, progressive transmission, mesh compression, and selective refinement. The paper highlights two contributions: the progressive mesh representation and the simplification algorithm for constructing the progressive mesh from any given mesh.

The progressive mesh takes advantage of the fact that a single type of edge transformation, the edge collapse, is sufficient for simplifying meshes. The key observation is that an edge collapse transformation is invertible; its inverse is the vertex split. This is the basis of the progressive mesh, and with it the representation is straightforward: it is created from a series of edge collapses, whose inverses are stored as vertex splits. Progressive transmission is then accomplished quite easily by sending the detail records down one at a time and smoothing the transitions between successive meshes by geomorphing. One can also use this to control where detail is added, because all you have to do is specify which areas of the mesh need refinement.
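
As an illustration, a naive edge collapse on an indexed triangle list might look like the following sketch; a real implementation would also reposition the surviving vertex and record the inverse vertex split:

    def edge_collapse(vertices, faces, u, v):
        """Collapse edge (u, v) by merging vertex v into vertex u.

        vertices: dict of vertex id -> (x, y, z); faces: list of 3-tuples of ids.
        """
        new_faces = []
        for f in faces:
            g = tuple(u if w == v else w for w in f)
            if len(set(g)) == 3:        # drop the one or two faces that become degenerate
                new_faces.append(g)
        del vertices[v]                  # vertex v no longer appears in any face
        return vertices, new_faces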

This technique might also be usable for purposes other than the ones it was intended for. For example, it could be used to create very simple models of complex objects and then add detail where some set of user-defined constraints suggests it is needed.

"Simplification Envelopes"

This paper proposes a technique for creating various level-of-detail approximations for polygonal models. The amount of approximation can be set by the user or configured automatically. The technique is demonstrated with two algorithms: a local algorithm that maximizes speed, and a global algorithm that maximizes accuracy and avoids problems of local minima.

This problem is basically one of minimization: the technique seeks to find the minimum number of polygons required to satisfy the constraints that have been specified. There are five major advantages to such an approach: it is a general approach, it automates the simplification process and the selection of viewing distances, it prevents self-intersections, it preserves sharp edges, and it allows the approximation distance to vary across different portions of a model. The benefit of such an approach is that you can quantify the amount of approximation that is tolerable under given circumstances, and this allows for control over which regions need to be approximated more or less, as was demonstrated in the paper with the bunny model.


Scott Harrison

Leslie Kuczynski

"Progressive Meshes", by Hughes Hoppe

Hoppe presents a method for the representation of three-dimensional meshes supporting such things as progressive transmission, compression, and selective refinement. Additionally, he presents a method for constructing the progressive mesh, given an arbitrary mesh as a starting point.

A key feature of Hoppe's method is that the process is lossless; that is, the original mesh can be exactly reconstructed, given the correct information. The method consists of selecting edges within a mesh and collapsing them so that two vertices effectively become one. Simplification is achieved by applying a sequence of edge collapses and storing parameterized records of the transformations until an m-vertex mesh has been reduced to an n-vertex mesh. The n-vertex mesh can then be sent over a communication line along with the record of transformations and exactly reconstructed. Another key feature is that the edge collapse transformation is invertible; this is what allows exact reconstruction. A point emphasized by Hoppe was the critical nature of selecting which edges, and in what sequence, to collapse. He discussed approaches ranging from random selection to more complex methods involving energy minimization techniques. The approach used by Hoppe concerns itself with minimizing an energy function that will, in essence, attempt to preserve the attributes and geometry of the mesh. Hoppe's approach attempts to minimize both the distance of the approximating mesh from the original and the number of approximating vertices.
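
One way to picture that selection process is a greedy loop driven by a priority queue of estimated collapse costs. This is only a sketch: cost(), is_legal(), collapse(), and vertex_count() are stand-ins for the paper's energy estimates and mesh bookkeeping.

    import heapq

    def simplify(mesh, edges, cost, target_vertices):
        """Repeatedly collapse the cheapest legal edge until the target size is reached."""
        heap = [(cost(mesh, e), e) for e in edges]
        heapq.heapify(heap)
        vsplits = []
        while heap and mesh.vertex_count() > target_vertices:
            c, e = heapq.heappop(heap)
            if not mesh.is_legal(e):           # edge may have been invalidated by earlier collapses
                continue
            vsplits.append(mesh.collapse(e))   # assumed to return the inverse vsplit record
            # A full implementation would re-estimate the costs of nearby edges here.
        return vsplits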

The progressive mesh fits well into our current framework. One major domain that could take advantage of such a technology is the Internet. In such a heterogeneous environment, bandwidth constraints become a major concern. Progressive meshes can allow even those people at the end of a slow connection to receive 3D graphics (VRML). One can imagine multicasting the meshes, for example in an interactive, networked 3D game. Different levels of refinement could be multicast on different channels, and users could iteratively add channels until they reach their maximum bandwidth utilization. Basically, the idea is to use progressive meshes in layered multicast.

"Simplification Envelopes", by J.Cohen, A.Varshney, D.Manocha, G.Turk, H.Wever, P.Agarwal, F.Brooks and W.Wright

The authors present a method for the construction of multiple levels-of-detail models from a single polygonal model. The method works only with polygonal models where each edge is adjacent to two triangles and where each vertex is surrounded by a single ring of triangles; it does not support degenerate meshes. Additionally, the models in the resulting hierarchy are independent of one another, with no correspondence between levels. This is not a desirable feature, because it significantly decreases the usefulness of the models in applications such as progressive transmission over a network. However, this is noted by the authors and addressed in their section on future work.

The basic idea behind their method is to create envelopes (or surfaces), one inside and one outside the model, where both envelopes are guaranteed not to exceed a user-specified distance from the original model. The simplification is then performed within the volume bounded by the two envelopes. Two algorithms for achieving this are presented, one local and one global. The local algorithm is based on vertex removal: all triangles surrounding a given vertex are removed to create a hole, and the hole is then filled with a simpler triangulation. If the hole cannot be successfully filled, the step is deemed a failure and the vertex remains unchanged. The global algorithm works by generating all possible candidate triangles for an approximating surface to the model. Each triangle is tested to see how many vertices of the original model it covers. The triangles incident to those vertices are removed from the model, and the resulting hole is filled with the candidate triangles from the approximating surface.
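
A rough sketch of the local, vertex-removal pass described above, assuming hypothetical helpers for hole triangulation and for the envelope intersection tests:

    def local_simplify(mesh, inner_env, outer_env):
        """One pass of local vertex removal (all helper names are hypothetical)."""
        for v in list(mesh.vertices()):
            hole = mesh.triangles_around(v)                  # ring of triangles incident to v
            for fill in candidate_triangulations(mesh.hole_boundary(v)):
                # keep the fill only if it stays between the inner and outer envelopes
                if all(not intersects(t, inner_env) and not intersects(t, outer_env)
                       for t in fill):
                    mesh.replace(hole, fill)                 # vertex v is removed
                    break
            # if no valid triangulation is found, the vertex is left unchanged
        return mesh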

Future work (as mentioned above) includes establishing correspondence between the levels in a hierarchy, and relaxing the constraint that the vertices used in an approximation be a subset of the vertices in the original model. Relaxing that constraint may allow for further simplification of the original model.


Geoffry Meek
Simplification Envelopes

The authors present an effective way to simplify overly complex surfaces within a user-specified error margin. The basic idea is to create an envelope--two surfaces, one inside and another outside, around the original surface. The first problem with creating the envelope is preserving the topology of the original surface when creating the inner and outer surfaces. Commonly, triangles will overlap, which tends to distort the topology of the surface (I imagine that this happens more for the inner surface). The authors generated something that I don't know about called a Voronoi diagram, which is not easy (according to them). The result is a topologically correct inner and outer surface. Note the importance of this preprocessing.

Now, to start the simplification, all of the vertices are queued up for the local algorithm. This works by simply removing a vertex, thus creating a hole. All possible ways to triangulate the hole are calculated and tested to see whether the result intersects the envelope (pretty spiffy). The next stage is the global algorithm, which is not described very well here, but basically it generates large triangles, based on the approximation vertices, that don't intersect the envelope. Then it tries to fit them into the approximation surface, filling the remaining gaps if it can.
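
The core primitive behind any "does it intersect the envelope" test is a triangle intersection query. Below is a self-contained ray/triangle routine (Moller-Trumbore) of the kind such a test could be built on; it is only an illustrative building block, not the authors' actual code.

    def ray_triangle_intersect(orig, direction, v0, v1, v2, eps=1e-9):
        """Return the ray parameter t where the ray hits triangle (v0, v1, v2), or None."""
        def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
        def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
        def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

        e1, e2 = sub(v1, v0), sub(v2, v0)
        p = cross(direction, e2)
        det = dot(e1, p)
        if abs(det) < eps:
            return None                       # ray is parallel to the triangle's plane
        inv_det = 1.0 / det
        s = sub(orig, v0)
        u = dot(s, p) * inv_det
        if u < 0.0 or u > 1.0:
            return None
        q = cross(s, e1)
        v = dot(direction, q) * inv_det
        if v < 0.0 or u + v > 1.0:
            return None
        t = dot(e2, q) * inv_det
        return t if t > eps else None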

This all seems to work very well and is extremely useful. The authors claim that the global algorithm is slow and works best on small models. A nice property is that the user can specify the error percentage, and the algorithm guarantees not to exceed it. They go on to talk about adaptive simplification for even more control. Moving vertices would be the next step to explore.


Romer Rosales

Progressive Meshes

Hugues Hoppe
(Article Review)

This paper presents the progressive mesh representation, a scheme for storing and transmitting arbitrary triangle meshes.

This is a continuous-resolution representation, and it is also a lossless technique. Issues such as smooth geomorphing of level-of-detail approximations, progressive transmission, mesh compression, and selective refinement are considered in this paper.

They also present a mesh simplification procedure for constructing a PM from any mesh. With it, they try to preserve the geometry and the overall appearance as defined by the discrete and scalar appearance attributes associated with the surface: material identifiers, color values, normals, and texture coordinates.

In general, the complex meshes that result from modeling systems create some problems: they are expensive to store, transmit, and render. Some of the issues related to these problems are:

They are not normally optimized for rendering efficiency, and they can often be replaced by perceptually similar meshes with far fewer faces.

Level-of-detail approximations are very important and desirable: the closer the object, the more polygons are used. Some approaches abruptly change the level of detail, so it would be nice to have smooth visual transitions (geomorphs) between meshes at different resolutions.

For progressive transmission, it would be good to show progressively better approximations to the model as data is incrementally received.

Mesh compression, minimizing storage space.

Selective refinement has to do with techniques that allow an adaptive representation of objects: more detail where it is needed, or where attention is directed. This can save space and rendering time while giving the same perceptual result to the viewer.

The technique consists of storing a mesh M as a much coarser mesh M0, along with a sequence of detail records that define how to incrementally refine M0 back into the initial mesh M (lossless). Each of these records contains the information related to a 'vertex split', which adds one additional vertex to the mesh. Thus LOD approximations of any desired complexity can be retrieved efficiently, and geomorphs can be constructed between any two of these meshes.

They show how to construct the PM while preserving the overall appearance defined by the discrete attributes (associated with faces of the mesh, for example a material identifier) and the scalar attributes (diffuse color, normal, texture coordinates) associated with the surface.

In general, the technique applies a transformation that they call 'edge collapse' to effectively simplify the mesh (Figures 1-2). An initial mesh can be simplified into a mesh M0 by applying n successive edge collapse transformations. Choosing the right sequence of collapses is the most important part, so they developed a scheme for ordering them. Edge collapses are invertible, and this allows the incremental reconstruction of a more accurate representation using the 'vertex split' (the opposite operation).

Geomorphs, or smooth visual transitions between successive meshes, can be obtained easily by linearly interpolating the positions of corresponding vertices in the two consecutive meshes, using a blend parameter alpha to control the transition. This is done with the position attributes (polygon vertices), but it can also be applied to both discrete and scalar attributes (for the latter the interpolation does not need to be linear).
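
For the position attributes, the interpolation amounts to a simple per-vertex blend, along the lines of this sketch (assuming the two vertex lists are already in correspondence, as PM geomorphs guarantee):

    def geomorph(coarse_positions, fine_positions, alpha):
        """Blend vertex positions between the two meshes of a geomorph.

        alpha = 0 reproduces the coarser mesh's vertex placement, alpha = 1 the finer one.
        """
        return [tuple((1.0 - alpha) * c + alpha * f for c, f in zip(cv, fv))
                for cv, fv in zip(coarse_positions, fine_positions)]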

Progressive transmission is a natural property of this technique, and geomorphing can be used to achieve a smoother effect if necessary. They point out that, for example, over a slow communication line we can display the current mesh at the level of detail received so far whenever the input buffer is empty. With a fast line, it is better to display meshes whose complexity increases exponentially (there are similarities with JPEG).

Mesh compression: their scheme was not designed primarily for compression, but it lends itself to a concise encoding. The technique needs to encode M0 and the detail records, and by using delta encoding (because a vertex split has only a local effect, significant coherence in mesh attributes can be expected) it is possible to achieve a reduction in storage space.
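
As a toy illustration of why delta encoding pays off here, the new vertex introduced by a vsplit can be stored as a small (here quantized, hence approximate) offset from the vertex it was split from; the scheme below is hypothetical, not the paper's encoding.

    def delta_encode(vs_position, vt_position, quantum=1e-4):
        """Encode the split-off vertex as a quantized offset from its parent vertex."""
        return tuple(round((t - s) / quantum) for s, t in zip(vs_position, vt_position))

    def delta_decode(vs_position, delta, quantum=1e-4):
        """Recover (approximately) the split-off vertex position from the stored offset."""
        return tuple(s + d * quantum for s, d in zip(vs_position, delta))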

For the mesh construction, the intermediate approximations depend a lot on the algorithm for selecting which edge to collapse and on how to modify its neighborhood. They argue that, because PM construction can be done as a preprocess (off-line), it is worth designing a simplification procedure that invests some time in selecting better approximations.

This is an optimization problem that can be stated as finding a mesh that both accurately fits a set X of sample points and has a small number of vertices.

They build on a previous minimization procedure that uses an energy function E(M) = E_distance + E_representation + E_spring. The distance energy term measures the total distance of the points from the mesh, and the representation term penalizes the number of vertices in M.

In this paper they use another energy metric (Section 4.2): they added E_scalar, which measures the accuracy of the scalar attributes, and E_disc, which measures the accuracy of the discontinuity curves. They base the solution of this optimization problem on the fact that a mesh can be simplified using only edge collapse transformations.
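
Putting the terms from the last two paragraphs together, the metric minimized during PM construction has roughly the form below (written in LaTeX); the explicit representation term is omitted here since the vertex count is controlled by how many collapses are applied:

    E(M) = E_{\mathrm{dist}}(M) + E_{\mathrm{spring}}(M) + E_{\mathrm{scalar}}(M) + E_{\mathrm{disc}}(M)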

They discussed how to preserve surface geometry and scalar attributes, and how to preserve discontinuity curves; this last is done by detecting when a candidate edge collapse would modify the topology of the discontinuity curves. They presented some tests for this detection.

But some meshes contain many discontinuity curves that are too small to be visible from a distance. In these cases it is not necessary to maintain the restriction of preserving the discontinuities, so they used a scheme that allows these changes but penalizes them.

For the test objects, they ran the algorithm until no legal edge collapses were found. Figure 10 shows a selective refinement of a terrain, taking into consideration some parameters to choose where to refine. The only scalar attribute they optimized was color.

In general, it is very useful to have these properties in any polygonal representation of objects, since they solve in part some of the problems explained initially. Computing the progressive mesh seems a little slow, which restricts its use in certain cases. This alternative representation at different levels can also facilitate the application of complex transformations; likewise, geometric properties that do not need to be very accurate can be computed faster at the appropriate level.

For machine vision, these approximation techniques applied in 3D or 2D may (like the simplification envelopes) facilitate the image analysis of extracted objects, and may help simplify or reduce the complexity of matching or correspondence problems (we can use a simplified approximation of an object that is good according to some criteria).

Simplification Envelopes

Cohen, Varshney, Manocha, Turk, Weber, Agarwal, Brooks, Wright
(Article Review)

This paper proposes a technique for generating a hierarchy of level-of-detail approximations for a polygonal model. This allows the computation of various levels of detail for a given polygonal representation of an object. Here, the generation of minimal approximations preserves global topology. It works by surrounding the original polygonal surface with two envelopes, and performing the simplifications within this volume.

It is known that computing minimal-facet e-approximations is NP-hard, so polynomial-time approximation algorithms have been investigated. They concentrate on allowing adaptive, genus-preserving e-approximation of arbitrary polygonal objects.

According to them, it meets the constraint that all points of the approximation are within a distance (specified by the user) from the original model, which is a nice property for controlling the error in the approximation when that is necessary.

To compute an envelope surface numerically, the envelope is initially identical to the input model surface. In each iteration, they attempt to move each envelope vertex a fraction of the distance e in the direction of its normal vector for the outer envelope, and in the opposite direction for the inner envelope. This stretches or contracts all the triangles adjacent to the vertex. These adjacent triangles are then checked for intersection with each other and with the rest of the model. If there are no intersections, the step is accepted; otherwise the vertex is moved back to its previous position. This is repeated until all vertices have moved e or cannot move any further. Some details, such as the initial step size for the vertices, the computation of an adaptive step size, and considerations about the size of the object, are also discussed.
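
A sketch of that offsetting loop, assuming a hypothetical helper mesh.adjacent_ok(i, p) that performs the intersection test for the triangles adjacent to vertex i when it is moved to position p:

    def grow_envelope(vertices, normals, mesh, epsilon, step, outward=True):
        """Numerically offset an envelope surface by up to epsilon along vertex normals."""
        moved = [0.0] * len(vertices)
        sign = 1.0 if outward else -1.0
        progress = True
        while progress:
            progress = False
            for i, (v, n) in enumerate(zip(vertices, normals)):
                if moved[i] >= epsilon:
                    continue
                d = min(step, epsilon - moved[i])
                candidate = tuple(c + sign * d * nc for c, nc in zip(v, n))
                if mesh.adjacent_ok(i, candidate):   # no new intersections introduced
                    vertices[i] = candidate          # accept the step
                    moved[i] += d
                    progress = True
                # otherwise the vertex stays where it was for this iteration
        return vertices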

The approximation is generated by first creating a hole, by removing some connected set of triangles from the surface, and then filling the hole with a smaller set of triangles; this provides some reduction in the mesh complexity.

They show two algorithms: a local one, which provides a fast method for generating approximations to large inputs, and a global one, which provides the opportunity to avoid local minima and can possibly achieve better simplifications as a result. The local algorithm makes only local choices, creating small holes; the global one uses global information about the surface to create maximally sized holes.

Some properties are discussed, such as the preservation of sharp edges and normal discontinuities; they also considered bordered surfaces and created an improved method for them. The continuous adaptive approximation they used is very interesting and important: with it, different levels of detail are applied where they are necessary, and this could be driven by information about human perception.

The benefits were mentioned: error control, automation of the simplification and of the selection of appropriate viewing distances, prevention of self-intersection, and preservation of sharp features. The uses in graphics are many; performance and optimization are the most important. Also important is the level of self-regulation of their technique: only one user-specifiable parameter is needed, and the method is rotationally and translationally invariant. For machine vision, these approximation techniques in 3D or 2D may facilitate the image analysis of extracted objects and may help in matching or correspondence problems.


Lavanya Viswanathan

1) H. Hoppe. Progressive Meshes. In Computer Graphics Proceedings, ACM SIGGRAPH, pages 99--108, 1996.

This paper proposes a progressive mesh (PM) representation for efficiently storing and transmitting arbitrary triangle meshes. It also presents a mesh simplification procedure for constructing a PM representation from an arbitrary mesh. The method proposed in this paper has several advantages:

(a) it allows for the efficient retrieval of level-of-detail (LOD) approximations of any desired complexity,

(b) geomorphs can be efficiently constructed between any two LOD meshes constructed in this fashion,

(c) the PM representation naturally supports progressive transmission: the coarse base mesh is transmitted first, followed by the n detail records that supply increasing amounts of detail. As new detail records are received, the receiving process can incrementally rebuild the mesh and animate the changes, and the changes to the mesh can be geomorphed to avoid visual discontinuities. Since the PM representation is lossless, the original mesh is recovered exactly once all the records have arrived.

(d) it offers a concise encoding of the original mesh,

(e) it permits selective refinement of the original mesh.

However, some issues remain unclear about some methods described in the paper. The author describes a method for mesh optimization called edge collapse. It is unclear to me how this method could be used to optimize a mesh unless all the vertices of the mesh lie in the same plane. Unless this is true, there would be considerable loss of the geometry of the mesh especially at coarse resolutions. This point has not been discussed in the paper.

Apart from this, it is clear from the paper that the progressive mesh representation is an extremely useful representation for rendering of extremely complex objects. It would also be interesting to consider whether this method could be extended to arbitrary polygonal meshes, instead of just triangle meshes. Then again, this extension may not be necessary if all meshes used for complex objects are predominantly triangular.

2) J. Cohen, A. Varshney, D. Manocha, G. Turk, H. Weber, P. Agarwal, F. Brooks, and W. Wright. Simplification Envelopes. In Computer Graphics Proceedings, ACM SIGGRAPH, pages 119--128, 1996.

This paper proposes an alternate method for computing different LOD meshes, called simplification envelopes. The approach used here guarantees that all points of an approximation are within a distance eps of the original model, where eps can be arbitrarily small and defined by the user. Two different variations are considered here: local and global. Local methods are faster, especially for huge meshes, but global methods avoid the possibility of getting stuck in local minima, the bane of any optimization algorithm. An important advantage of this method is that the simplification process is now automated.


Stan Sclaroff
Created: Jan 21, 1997
Last Modified: Jan 30, 1997