Thursday, June 26, 2008

Trimmed NURBS Surfaces

I recently posted about some of the NURBS intersection methods I am working on. Of course, these all depend on having robust NURBS curves and surfaces. The third leg of this stool is trimmed NURBS surfaces. For those who don't know what these are, here is a quick backgrounder...

Take a NURBS surface and project an arbitrarily shaped closed curve profile onto the surface. Since the profile is closed, it divides the surface into inner and outer sections. This primary profile defines the outer edge of the trimmed surface. Everything outside of it is "removed" from the NURBS surface. Everything inside of it remains.

Now add additional closed profiles inside of the primary profile. You can use these to "punch holes" in the surface. None of these interior profiles should overlap or touch each other or the exterior profile. By combining the exterior profile with some number of interior profiles you can trim the surface into just about any shape.

So how do we store, render, and evaluate such objects? Storing them is simple: just capture the underlying NURBS surface (control points, knot points, degrees, etc.) along with the exterior and interior profiles. If you look at trimmed_nurbs_surface.h you can see this approach in action.
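For illustration only, here is a rough sketch of the kind of structure this implies - the type and field names are mine, not the ones in trimmed_nurbs_surface.h:

    // Hypothetical sketch of the data a trimmed NURBS surface has to carry.
    // See trimmed_nurbs_surface.h for the actual class.
    #include <vector>

    struct Point4 { float x, y, z, w; };             // weighted (homogeneous) control point

    struct NurbsCurveData {
        int degree;
        std::vector<Point4> controlPoints;
        std::vector<float>  knots;
    };

    struct NurbsSurfaceData {
        int degreeU, degreeV;                        // surface degrees
        int numCtrlU, numCtrlV;                      // control net dimensions
        std::vector<Point4> controlPoints;           // numCtrlU * numCtrlV points
        std::vector<float>  knotsU, knotsV;          // knot vectors
    };

    struct TrimProfile {
        std::vector<NurbsCurveData> curves;          // curves joined end to end into a closed loop
    };

    struct TrimmedNurbsSurfaceData {
        NurbsSurfaceData surface;                    // the untrimmed surface
        TrimProfile outerProfile;                    // primary (exterior) trim loop
        std::vector<TrimProfile> innerProfiles;      // zero or more holes
    };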

Here are the minimal steps I feel are required to accurately generate a trimmed NURBS surface - I will go into detail on each step:
  1. Generate underlying NURBS surface points and store in VBO
  2. Evaluate profile curves and store each profile in separate VBO
  3. Using point-inversion, project each point from each profile onto the NURBS surface
  4. Tessellate each profile separately
  5. Render all profiles into a single "trim" texture
Now how about generating the underlying NURBS surface?  Here are a few really good papers on using the GPU to work with NURBS surfaces:
  1. GPU-based Trimming and Tessellation of NURBS and T-Spline Surfaces
  2. GPU-based Appearance Preserving Trimmed NURBS Rendering
  3. Direct Evaluation of NURBS Curves and Surfaces on the GPU
  4. Performing Efficient NURBS Modeling Operations on the GPU
  5. Fragment-based Evaluation of Non-Uniform B-Spline Surfaces on GPUs
(Links go to PDFs where I could find them; otherwise the links at least give author and paper information)

All of these papers render NURBS curves and surfaces in pretty much the same way on the GPU. Control point arrays and knot point arrays are converted into float textures and passed into a fragment program that calculates the exact surface position and normal. These are written out into two separate textures that are then converted into VBOs. See ns_default_plM.fsh in the Wildcat SVN code to see how the fragment program works. My version of this works in a single pass and is quite flexible. The end result is four VBOs (a rough host-side sketch of this pass follows the list below):
  • Vertex data - X, Y, Z position for each vertex
  • Normal data - Normal vector for each vertex
  • Texture coordinate data - parametric [u,v] values for each vertex
  • Index data - vertex ordering for each triangle in the surface
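To make the pipeline concrete, here is a rough host-side sketch of that evaluation pass. It is a sketch under my own assumptions, not Wildcat code: drawFullScreenQuad, the uniform name, and the function signature are all hypothetical, and only the vertex VBO is shown (the real fragment program is ns_default_plM.fsh).

    #include <GL/glew.h>
    #include <vector>

    void drawFullScreenQuad();   // hypothetical helper: draws one quad covering the viewport

    // Sketch of the host side of the evaluation pass: upload the control net as a
    // float texture, run the evaluation fragment shader once per (u,v) sample, and
    // copy the resulting positions into the vertex VBO via a pixel-pack transfer.
    // The knot vectors, normals, and tex-coords follow the same pattern.
    void evaluateSurfaceToVBO(GLuint evalProgram, GLuint evalFBO, GLuint vertexVBO,
                              const std::vector<float>& controlPoints,   // xyzw per control point
                              int numCtrlU, int numCtrlV, int lodU, int lodV)
    {
        // 1. Control points become a small RGBA float texture the shader can index.
        GLuint ctrlTex;
        glGenTextures(1, &ctrlTex);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, ctrlTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, numCtrlU, numCtrlV, 0,
                     GL_RGBA, GL_FLOAT, controlPoints.data());

        // 2. One fragment per evaluated surface point, rendered into a float FBO
        //    (evalFBO is assumed to have a lodU x lodV RGBA32F color attachment).
        glBindFramebuffer(GL_FRAMEBUFFER, evalFBO);
        glViewport(0, 0, lodU, lodV);
        glUseProgram(evalProgram);
        glUniform1i(glGetUniformLocation(evalProgram, "controlPoints"), 0);
        drawFullScreenQuad();

        // 3. Read the evaluated positions straight into the vertex VBO (no CPU copy).
        glBindBuffer(GL_PIXEL_PACK_BUFFER, vertexVBO);
        glBufferData(GL_PIXEL_PACK_BUFFER, (GLsizeiptr)lodU * lodV * 4 * sizeof(float),
                     NULL, GL_STATIC_DRAW);
        glReadPixels(0, 0, lodU, lodV, GL_RGBA, GL_FLOAT, 0);

        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteTextures(1, &ctrlTex);
    }

The normal and texture-coordinate data come out the same way; in this sketch they are simply omitted.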
Next up is evaluating each curve in a profile and building an array for each profile. Curves are evaluated using the same method as surfaces (see above). Instead of generating four VBOs, curves only need one - vertex position (curves don't need normals, tex-coords, or indices). All of the curve point data for the entire profile is stored in one VBO in clockwise ordering. This ordering is important to remember!

Third step is projecting each point onto the surface. You have to do this because a profile curve may not lie directly on the surface. Plus, we want to get each point from 3D "real-world" space into 2D "parametric" space; that is, each point must be located in the [u,v] parametric space of the NURBS surface. This is important because when we render the trim profiles we render them into a texture that goes from [0,1] in both the u and v directions. Make sense? Ok, so to do this we again use a fragment shader with access to textures containing all of the NURBS surface control points and knot points. The shader takes a single input point (the profile curve point) and outputs its inverted [u,v] coordinates into another texture. This texture is then converted into a VBO. Now we have a VBO of profile points that are all in [u,v] space. Paper 4, section 4.2 goes into a little more detail about this step.
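To show what point inversion actually has to compute, here is a CPU-side sketch using coarse-to-fine sampling. The names, the sampling strategy, and the assumption that the knot vectors are normalized to [0,1] are mine; the real version of this step runs in a fragment shader, one profile point per fragment.

    #include <algorithm>
    #include <functional>

    struct UV   { double u, v; };
    struct Vec3 { double x, y, z; };

    static double dist2(const Vec3& a, const Vec3& b) {
        double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    // Point inversion by sampling: evaluate the surface on a (u,v) grid, keep the
    // parameters of the closest sample, shrink the search window, and repeat.
    // 'evalSurface' stands in for whatever evaluates S(u,v).
    UV invertPoint(const Vec3& p,
                   const std::function<Vec3(double, double)>& evalSurface,
                   int samples = 64, int refineLevels = 4)
    {
        UV best = {0.0, 0.0};
        double bestD = 1e300;
        double lowU = 0.0, highU = 1.0, lowV = 0.0, highV = 1.0;

        for (int level = 0; level <= refineLevels; ++level) {
            for (int i = 0; i <= samples; ++i) {
                for (int j = 0; j <= samples; ++j) {
                    double u = lowU + (highU - lowU) * i / samples;
                    double v = lowV + (highV - lowV) * j / samples;
                    double d = dist2(p, evalSurface(u, v));
                    if (d < bestD) { bestD = d; best.u = u; best.v = v; }
                }
            }
            // Narrow the window to one grid cell around the best sample and refine.
            double du = (highU - lowU) / samples, dv = (highV - lowV) / samples;
            lowU = std::max(0.0, best.u - du); highU = std::min(1.0, best.u + du);
            lowV = std::max(0.0, best.v - dv); highV = std::min(1.0, best.v + dv);
        }
        return best;   // parametric [u,v] of the closest surface point
    }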

Fourth step is tessellating the profile (also called polygon triangulation). Why do we have to tessellate? Fundamentally each profile is a simple closed polygon, but it may be either concave or convex. If every profile were convex a trivial triangle fan would do, but handling concave profiles requires real tessellation. I have not found a good parallelized (or GPU-based) tessellation routine. For now I am using a CPU-bound version of ear clipping, but I may move to using Triangle (by Jonathan Shewchuk). The input to this is the VBO of [u,v] points. The output is an index giving the triangle ordering. There really should be a good way to do this on the GPU, I just haven't gotten there yet.
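Here is a minimal CPU-side ear-clipping sketch of the kind I mean - roughly O(n^2), names illustrative, and it assumes a simple (non-self-intersecting) profile given in clockwise [u,v] order:

    #include <cstddef>
    #include <vector>

    struct UV { float u, v; };

    // Twice the signed area of (a, b, c); positive when the turn is counter-clockwise.
    static float cross2(const UV& a, const UV& b, const UV& c) {
        return (b.u - a.u) * (c.v - a.v) - (b.v - a.v) * (c.u - a.u);
    }

    static bool pointInTriangle(const UV& p, const UV& a, const UV& b, const UV& c) {
        float d1 = cross2(a, b, p), d2 = cross2(b, c, p), d3 = cross2(c, a, p);
        bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
        bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
        return !(hasNeg && hasPos);
    }

    // Ear-clip a clockwise simple polygon into a triangle index list.
    std::vector<unsigned int> earClip(const std::vector<UV>& poly)
    {
        std::vector<unsigned int> idx(poly.size()), tris;
        for (std::size_t i = 0; i < poly.size(); ++i) idx[i] = (unsigned int)i;

        while (idx.size() > 3) {
            bool clipped = false;
            for (std::size_t i = 0; i < idx.size(); ++i) {
                unsigned int ia = idx[(i + idx.size() - 1) % idx.size()];
                unsigned int ib = idx[i];
                unsigned int ic = idx[(i + 1) % idx.size()];
                // Clockwise input: a convex corner has negative signed area.
                if (cross2(poly[ia], poly[ib], poly[ic]) >= 0.0f) continue;
                // The corner is an ear only if no other vertex lies inside it.
                bool ear = true;
                for (std::size_t j = 0; j < idx.size(); ++j) {
                    unsigned int ij = idx[j];
                    if (ij == ia || ij == ib || ij == ic) continue;
                    if (pointInTriangle(poly[ij], poly[ia], poly[ib], poly[ic])) { ear = false; break; }
                }
                if (!ear) continue;
                tris.push_back(ia); tris.push_back(ib); tris.push_back(ic);
                idx.erase(idx.begin() + i);
                clipped = true;
                break;
            }
            if (!clipped) break;   // degenerate input; bail out rather than loop forever
        }
        if (idx.size() == 3) { tris.push_back(idx[0]); tris.push_back(idx[1]); tris.push_back(idx[2]); }
        return tris;
    }

The three indices pushed per clipped ear are exactly the triangle-ordering index the next step consumes.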

The last step is rendering all of these profiles into a trimming texture. This process is very simple. I set up an FBO that covers the [0,1] space along both the U and V axes. The FBO is cleared to all zeros. The outer profile is rendered into the FBO (using the tessellation index), filling its interior with ones. Each inner profile is then rendered, filling its interior with zeros again. In the end we have a texture that has ones where there is surface and zeros where there is not.
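A sketch of that pass, assuming the FBO already has a single-channel texture attached and that each profile carries its [u,v] VBO plus the index buffer from the tessellation step (the Profile struct and names are illustrative):

    #include <GL/glew.h>
    #include <cstddef>
    #include <vector>

    struct Profile { GLuint uvVBO; GLuint indexVBO; GLsizei indexCount; };

    // Sketch of building the trim texture: clear to 0, fill the outer profile with 1,
    // then punch the holes back out by filling each inner profile with 0.
    void renderTrimTexture(GLuint trimFBO, int texSize,
                           const Profile& outer, const std::vector<Profile>& inner)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, trimFBO);
        glViewport(0, 0, texSize, texSize);

        // Map the [0,1] x [0,1] parametric square onto the whole texture.
        glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(0, 1, 0, 1, -1, 1);
        glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glEnableClientState(GL_VERTEX_ARRAY);

        // Outer profile: write ones inside the surface boundary.
        glColor3f(1.0f, 1.0f, 1.0f);
        glBindBuffer(GL_ARRAY_BUFFER, outer.uvVBO);
        glVertexPointer(2, GL_FLOAT, 0, 0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, outer.indexVBO);
        glDrawElements(GL_TRIANGLES, outer.indexCount, GL_UNSIGNED_INT, 0);

        // Inner profiles: write zeros back over each hole.
        glColor3f(0.0f, 0.0f, 0.0f);
        for (std::size_t i = 0; i < inner.size(); ++i) {
            glBindBuffer(GL_ARRAY_BUFFER, inner[i].uvVBO);
            glVertexPointer(2, GL_FLOAT, 0, 0);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, inner[i].indexVBO);
            glDrawElements(GL_TRIANGLES, inner[i].indexCount, GL_UNSIGNED_INT, 0);
        }

        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

Because the inner profiles are drawn after the outer one, the holes always win - which is exactly the punch-through behavior described above.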

When it comes time to render the trimmed NURBS surface we start just like a regular NURBS surface. One extra step is added in the fragment shader: a quick lookup into the trim texture at the fragment's [u,v]. If the texture value is one, the fragment is rendered; if it is zero, the fragment is discarded. Simple and easy, right?
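The extra fragment-shader test amounts to something like this (illustrative GLSL held as a C string; the sampler and varying names are assumptions, and the real shader also does the normal surface shading):

    static const char* trimTestFragmentShader =
        "uniform sampler2D trimTexture;                                  \n"
        "varying vec2 uv;   // parametric [u,v] of this fragment         \n"
        "void main() {                                                   \n"
        "    // Outside the trim profiles the texture holds 0 -> discard.\n"
        "    if (texture2D(trimTexture, uv).r < 0.5)                     \n"
        "        discard;                                                \n"
        "    gl_FragColor = vec4(1.0);   // ...real shading goes here... \n"
        "}                                                               \n";

Testing against 0.5 rather than exactly 1.0 keeps the check robust to any filtering at the trim boundary.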

This method is very high-performance. With the exception of the tessellation step, the entire process runs on the GPU in just four passes. I have run tests where more than 6 million vertices are evaluated and trimmed in a second. Not too bad. If adaptive LOD scaling is added, this should be the final approach needed to make Wildcat very, very fast.

So where are we at?  Steps 1, 2, 4, and 5 are pretty much all done.  I will be spending the next couple of days working on finishing the GPU version of step 3 (point-inversion) and cleaning up the code to support LOD.  Hopefully by early next week trimmed surfaces will be back.

If you have some insight into how to either avoid the tessellation step or how to parallelize it on the GPU please let me know.  This would make a big difference.

Cheers,
   Graham
