Monday, June 30, 2008

Trim Texture Gen Working On GPU

I spent some time today getting point inversion working on the GPU.  Once I had the algorithm together on the CPU, porting it to the GPU was straightforward.  It runs pretty nicely, but is not quite as smooth as I would like.  If you are rapidly zooming in and out you can see a little stall in the zoom while the trim textures regenerate.  I think that this will begin to improve once the curve/surface generation cache scheme is in place, but that is a ways away.  I will continue to tweak until then.

So, now the only portion of trimmed surface generation that is not on the GPU is the triangulation of the inverted trimming curve points.  I am using a poorly implemented O(n^2) algorithm (ear clipping), so at some point I want to put in the effort to clean it up.  But again, that is a ways away.

With trim surfaces back working (and better than ever), Pad is completely put back together again.  We are still experiencing some problems when triangulating complex profiles; I have a sneaking suspicion that the real culprit is arc generation, though I have been known to be wrong.  I will probably spend some time over the rest of this week seeing what I can dig up.  Shafts (rotated profiles) are still very broken and will need a good amount of attention to fix up.

What's next?  There are two streams of work I want to attack.  First is implementing the topology model for Pad.  This will allow me to begin thinking about what will be needed for the boolean operations on solids.  Second is finishing up some more of the intersection routines.  These will also be needed for boolean operations.  In case you cannot tell, boolean operations are the next major piece of functionality that I want to tackle.  There is just a lot of groundwork that has to be done first.  Once BO's are working well, all sorts of interesting functionality can be explored.

I realized that I have not included any pictures recently.  So here are a couple to view.  The first is just a simple Pad.  The second is the result of zooming way, way in on a non-axis-aligned trim surface.  You can just begin to see the visual issues that come with using the trim texture approach.  Thank goodness for auto-LOD scaling to make things a bit better.




Friday, June 27, 2008

Quick Update

After yesterday's monster post I thought I would take today off.  But as luck would have it, I was able to finish up a few good chunks of code that got Trimmed NURBS Surfaces back into mostly working order.  If you check out the code you should be able to create nice 3D Pads.  There are a couple of gotchas though.

First, much of the trimming process described yesterday as being on the GPU is still on the CPU.  I want to make sure I have the general algorithms correct before I put it onto the GPU, mostly because it is much harder to debug on the graphics card.

Second, LOD scaling is still a bit wonky.  Zoom way in and the trim texture gets denser, as it should.  But zoom way out and you typically only get down to 50x50 or so.  This is because I am using a method similar to the LOD for regular surface vertices.  I will be able to adjust this over time to optimize the amount of memory that the trim textures soak up.  Right now they eat a lot.

Next week I hope to move more onto the GPU and to optimize the LOD stuff, but for now trimmed surfaces are back.

Have a great weekend.
Cheers,
   Graham

Thursday, June 26, 2008

Trimmed NURBS Surfaces

I recently posted about some of the NURBS intersection methods I am working on.  Of course these all depend on having robust NURBS curves and surfaces.  The third leg of this stool is trimmed NURBS surfaces.  For those that don't know what these are, here is a quick backgrounder...

Take a NURBS surface and project an arbitrarily shaped closed curve profile onto the surface.  Since the profile is closed, it divides the surface into inner and outer sections.  This primary profile defines the outer edge of the trimmed surface.  Everything outside of it is "removed" from the NURBS surface.  Everything inside of it remains.

Now add additional closed profiles inside of the primary profile.  You can use this to "punch holes" into the surface.  None of these interior profiles should overlap or touch either themselves or the exterior profile.  By combining the exterior profile and some number of interior profiles you can trim the surface into just about any shape.

So how do we store, render, and evaluate such objects?  Storing them is simple.  Just capture the underlying NURBS surface (control points, knot points, degrees, etc.) along with the exterior and interior profiles.  If you look at trimmed_nurbs_surface.h you can see this approach in action.
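
Just to make that concrete, here is a rough sketch of the kind of container such a class might hold.  The names below are hypothetical stand-ins; the real layout is in trimmed_nurbs_surface.h.

  #include <vector>

  // Hypothetical stand-ins for Wildcat's real classes
  struct Point4 { double x, y, z, w; };       // control point plus rational weight
  struct NurbsCurve {                         // one curve segment of a profile
      int degree;
      std::vector<Point4> controlPoints;
      std::vector<double> knots;
  };
  struct TrimProfile {                        // closed loop of curve segments
      std::vector<NurbsCurve> curves;
  };

  // A trimmed NURBS surface is the untrimmed surface definition plus one
  // outer profile and any number of inner (hole) profiles.
  struct TrimmedNurbsSurface {
      int degreeU, degreeV;
      int numControlPointsU, numControlPointsV;
      std::vector<Point4> controlPoints;      // row-major, numU * numV entries
      std::vector<double> knotsU, knotsV;
      TrimProfile outerProfile;               // defines the outer boundary
      std::vector<TrimProfile> innerProfiles; // each one punches a hole
  };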

Here are the minimal steps I feel are required to accurately generate a trimmed NURBS surface - I will go into detail on each step:
  1. Generate underlying NURBS surface points and store in VBO
  2. Evaluate profile curves and store each profile in separate VBO
  3. Using point-inversion, project each point from each profile onto the NURBS surface
  4. Tessellate each profile separately
  5. Render all profiles into a single "trim" texture
Now how about generating the underlying NURBS surface?  Here are a few really good papers on using the GPU to work with NURBS surfaces:
  1. GPU-based Trimming and Tessellation of NURBS and T-Spline Surfaces
  2. GPU-based Appearance Preserving Trimmed NURBS Rendering
  3. Direct Evaluation of NURBS Curves and Surfaces on the GPU
  4. Performing Efficient NURBS Modeling Operations on the GPU
  5. Fragment-based Evaluation of Non-Uniform B-Spline Surfaces on GPUs
(Links go to PDFs where I could find them; otherwise you can get author and paper information from the links.)

All of these papers go about rendering NURBS curves and surfaces on the GPU in pretty much the same way.  Control point arrays and knot point arrays are converted into float textures and passed into a fragment program that calculates the exact surface position and normal.  These are written out into two separate textures that are then converted into VBOs.  Look at ns_default_plM.fsh in the Wildcat SVN code to see how the fragment program works.  My version of this works in a single pass and is quite flexible.  (A CPU-side sketch of the evaluation math appears after the list below.)  The end result is four VBOs:
  • Vertex data - X, Y, Z position for each vertex
  • Normal data - Normal vector for each vertex
  • Texture coordinate data - parametric [u,v] values for each vertex
  • Index data - vertex ordering for each triangle in the surface
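
For reference, the math that the fragment program performs at each sample is standard Cox-de Boor evaluation.  Below is a CPU reference sketch of it (following the well-known Piegl & Tiller algorithms, with hypothetical names); the shader does the equivalent work per fragment and writes the positions out to a texture.

  #include <vector>

  struct Pt { double x, y, z, w; };   // Cartesian coordinates plus rational weight

  // Find the knot span index for parameter u (Piegl & Tiller, A2.1).
  int findSpan(int n, int p, double u, const std::vector<double>& U) {
      if (u >= U[n + 1]) return n;                    // clamp to the end of the domain
      int low = p, high = n + 1, mid = (low + high) / 2;
      while (u < U[mid] || u >= U[mid + 1]) {
          if (u < U[mid]) high = mid; else low = mid;
          mid = (low + high) / 2;
      }
      return mid;
  }

  // Compute the p+1 non-zero B-spline basis functions at u (A2.2).
  void basisFuns(int span, double u, int p, const std::vector<double>& U,
                 std::vector<double>& N) {
      std::vector<double> left(p + 1), right(p + 1);
      N.assign(p + 1, 0.0);
      N[0] = 1.0;
      for (int j = 1; j <= p; ++j) {
          left[j]  = u - U[span + 1 - j];
          right[j] = U[span + j] - u;
          double saved = 0.0;
          for (int r = 0; r < j; ++r) {
              double temp = N[r] / (right[r + 1] + left[j - r]);
              N[r] = saved + right[r + 1] * temp;
              saved = left[j - r] * temp;
          }
          N[j] = saved;
      }
  }

  // Evaluate one rational surface point S(u,v).  Control points are stored
  // row-major with the u index as the row: P[i * numV + j].
  Pt surfacePoint(int p, const std::vector<double>& U,
                  int q, const std::vector<double>& V,
                  const std::vector<Pt>& P, int numU, int numV,
                  double u, double v) {
      int spanU = findSpan(numU - 1, p, u, U);
      int spanV = findSpan(numV - 1, q, v, V);
      std::vector<double> Nu, Nv;
      basisFuns(spanU, u, p, U, Nu);
      basisFuns(spanV, v, q, V, Nv);
      double x = 0.0, y = 0.0, z = 0.0, w = 0.0;
      for (int i = 0; i <= p; ++i) {
          for (int j = 0; j <= q; ++j) {
              const Pt& cp = P[(spanU - p + i) * numV + (spanV - q + j)];
              double b = Nu[i] * Nv[j] * cp.w;        // weighted basis product
              x += b * cp.x;  y += b * cp.y;  z += b * cp.z;  w += b;
          }
      }
      Pt result;
      result.x = x / w;  result.y = y / w;  result.z = z / w;  result.w = 1.0;
      return result;
  }
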
Next up is evaluating each curve in a profile and building an array for each profile.  Curves are evaluated using the same method as surfaces (see above).  Instead of generating four VBOs, curves only need one - vertex position (curves don't need normals, tex-coords, or indices).  All of the curve point data for the entire profile is stored in one VBO in a clockwise ordering.  This ordering is important to remember!

Third step is projecting each point onto the surface.  You have to do this because a profile curve may not lie directly on the surface.  Plus we want to get each point from 3D "real-world" space to 2D "parametric" space.  Meaning, each point must be located in the [u,v] parametric space of the NURBS surface.  This is important because when we render the trim profiles we render them into a texture that goes from [0,1] in both the u and v directions.  Make sense?  Ok, so to do this we again use a fragment shader with access to textures containing all of the NURBS surface control points and knot points.  The shader takes a single point input (the profile curve point) and outputs the point-inverse into another texture.  This texture is then converted into a VBO.  Now we have a VBO for all points in a profile that are all in [u,v] space.  Paper 4, section 4.2 goes into a little more detail about this step.
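
As a rough illustration, here is a brute-force CPU sketch of the same idea: sample the surface, keep the closest [u,v], and tighten the search window a few times.  It reuses the Pt struct and surfacePoint() helper from the sketch above, and the real GPU version (paper 4, section 4.2) is considerably smarter than this.

  #include <algorithm>
  #include <cfloat>

  struct UV { double u, v; };

  // Find the (u,v) whose surface point is closest to 'target' by coarse
  // sampling plus a few window-shrinking refinement passes.
  UV invertPoint(int p, const std::vector<double>& U,
                 int q, const std::vector<double>& V,
                 const std::vector<Pt>& P, int numU, int numV,
                 const Pt& target, int samples = 64, int refinePasses = 4) {
      double uLo = U[p], uHi = U[numU], vLo = V[q], vHi = V[numV];  // valid domain
      UV best;  best.u = uLo;  best.v = vLo;
      double bestDist = DBL_MAX;
      for (int pass = 0; pass <= refinePasses; ++pass) {
          double du = (uHi - uLo) / samples, dv = (vHi - vLo) / samples;
          for (int i = 0; i <= samples; ++i) {
              for (int j = 0; j <= samples; ++j) {
                  double u = uLo + i * du, v = vLo + j * dv;
                  Pt s = surfacePoint(p, U, q, V, P, numU, numV, u, v);
                  double dx = s.x - target.x, dy = s.y - target.y, dz = s.z - target.z;
                  double d = dx * dx + dy * dy + dz * dz;
                  if (d < bestDist) { bestDist = d;  best.u = u;  best.v = v; }
              }
          }
          // Shrink the search window around the current best guess and repeat,
          // staying inside the valid parametric domain.
          uLo = std::max(U[p], best.u - du);  uHi = std::min(U[numU], best.u + du);
          vLo = std::max(V[q], best.v - dv);  vHi = std::min(V[numV], best.v + dv);
      }
      return best;   // later normalized into [0,1] x [0,1] for the trim texture
  }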

Fourth step is tessellating the profile (also called polygon triangulation).  Why do we have to tessellate?  Fundamentally each profile is a closed simple polygon, but it may be either convex or concave.  If every profile were convex no tessellation would be necessary, but in order to handle concave profiles we must tessellate.  I have not found a good parallelized (or GPU-based) tessellation routine.  For now I am using a CPU-bound version of ear-clipping, but I may move to using Triangle (by Jonathan Shewchuk).  The input to this is the VBO of [u,v] points.  The output is an index for triangle ordering.  There really should be a good way to do this on the GPU, I just haven't gotten there yet.
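
For the curious, the CPU-bound ear clipping looks roughly like the sketch below.  This is a simplified illustration rather than the Wildcat code, and it assumes the [u,v] points arrive in counter-clockwise order; a clockwise profile just needs the orientation test flipped or the points reversed.

  #include <vector>

  struct UV2 { double u, v; };

  // Twice the signed area of triangle (a,b,c); positive for counter-clockwise.
  double cross2(const UV2& a, const UV2& b, const UV2& c) {
      return (b.u - a.u) * (c.v - a.v) - (b.v - a.v) * (c.u - a.u);
  }

  bool pointInTriangle(const UV2& p, const UV2& a, const UV2& b, const UV2& c) {
      double d1 = cross2(a, b, p), d2 = cross2(b, c, p), d3 = cross2(c, a, p);
      bool hasNeg = (d1 < 0.0) || (d2 < 0.0) || (d3 < 0.0);
      bool hasPos = (d1 > 0.0) || (d2 > 0.0) || (d3 > 0.0);
      return !(hasNeg && hasPos);
  }

  // Triangulate a simple polygon; returns indices into 'pts', three per triangle.
  std::vector<int> earClip(const std::vector<UV2>& pts) {
      std::vector<int> remaining, triangles;
      for (size_t i = 0; i < pts.size(); ++i) remaining.push_back((int)i);
      while (remaining.size() > 3) {
          bool clipped = false;
          for (size_t i = 0; i < remaining.size(); ++i) {
              int prev = remaining[(i + remaining.size() - 1) % remaining.size()];
              int curr = remaining[i];
              int next = remaining[(i + 1) % remaining.size()];
              // The corner must be convex (a CCW turn)...
              if (cross2(pts[prev], pts[curr], pts[next]) <= 0.0) continue;
              // ...and no other remaining vertex may lie inside the candidate ear.
              bool isEar = true;
              for (size_t k = 0; k < remaining.size(); ++k) {
                  int other = remaining[k];
                  if (other == prev || other == curr || other == next) continue;
                  if (pointInTriangle(pts[other], pts[prev], pts[curr], pts[next])) {
                      isEar = false;  break;
                  }
              }
              if (!isEar) continue;
              triangles.push_back(prev);  triangles.push_back(curr);  triangles.push_back(next);
              remaining.erase(remaining.begin() + i);
              clipped = true;
              break;
          }
          if (!clipped) break;   // degenerate input; bail out rather than spin forever
      }
      if (remaining.size() == 3) {
          triangles.push_back(remaining[0]);
          triangles.push_back(remaining[1]);
          triangles.push_back(remaining[2]);
      }
      return triangles;
  }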

The last step is rendering all of these profiles into a trimming texture.  This process is very simple.  I set up an FBO that covers the [0,1] space for both the U and V axes.  The FBO is cleared to all zeros.  The outer profile is rendered into the FBO (using the tessellation index), filling its interior with ones.  Each inner profile is then rendered, filling its interior with zeros again.  In the end we have a texture that has ones where there is surface, and zeros where there is not.
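
In old-school OpenGL terms that pass looks something like the sketch below.  The function signature and buffer names are hypothetical, and the FBO and profile VBO/index buffers are assumed to already exist.

  #include <vector>
  #include <OpenGL/gl.h>      // <GL/gl.h> plus glext.h on Windows/Linux
  #include <OpenGL/glext.h>

  void renderTrimTexture(GLuint fbo, int texSize,
                         GLuint outerVBO, GLuint outerIBO, GLsizei outerIndexCount,
                         const std::vector<GLuint>& innerVBOs,
                         const std::vector<GLuint>& innerIBOs,
                         const std::vector<GLsizei>& innerIndexCounts) {
      glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
      glViewport(0, 0, texSize, texSize);
      glMatrixMode(GL_PROJECTION);  glLoadIdentity();
      glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);          // [u,v] maps straight onto the texture
      glMatrixMode(GL_MODELVIEW);   glLoadIdentity();

      glClearColor(0.0f, 0.0f, 0.0f, 0.0f);            // start with all zeros
      glClear(GL_COLOR_BUFFER_BIT);
      glEnableClientState(GL_VERTEX_ARRAY);

      // Outer profile: fill its interior with ones.
      glColor3f(1.0f, 1.0f, 1.0f);
      glBindBuffer(GL_ARRAY_BUFFER, outerVBO);
      glVertexPointer(2, GL_FLOAT, 0, 0);
      glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, outerIBO);
      glDrawElements(GL_TRIANGLES, outerIndexCount, GL_UNSIGNED_INT, 0);

      // Inner profiles: punch the holes back to zero.
      glColor3f(0.0f, 0.0f, 0.0f);
      for (size_t i = 0; i < innerVBOs.size(); ++i) {
          glBindBuffer(GL_ARRAY_BUFFER, innerVBOs[i]);
          glVertexPointer(2, GL_FLOAT, 0, 0);
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, innerIBOs[i]);
          glDrawElements(GL_TRIANGLES, innerIndexCounts[i], GL_UNSIGNED_INT, 0);
      }

      glDisableClientState(GL_VERTEX_ARRAY);
      glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);     // back to the normal framebuffer
  }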

When it comes time to render the trimmed NURBS surface we start just like a regular NURBS surface.  One extra step is added in the fragment shader.  A quick lookup is performed into the trim texture to get its value at the [u,v] of the fragment.  If the value is one, the fragment is rendered; if it is zero, the fragment is discarded.  Simple and easy, right?

This method is very high-performance.  With the exception of the tessellation step the entire process runs on the GPU in just four passes.  I have run tests where more than 6 million vertices are evaluated and trimmed per second.  Not too bad.  If adaptive LOD scaling is added, this should be the final approach needed to make Wildcat very very fast.

So where are we at?  Steps 1, 2, 4, and 5 are pretty much all done.  I will be spending the next couple of days working on finishing the GPU version of step 3 (point-inversion) and cleaning up the code to support LOD.  Hopefully by early next week trimmed surfaces will be back.

If you have some insight into how to either avoid the tessellation step or how to parallelize it on the GPU please let me know.  This would make a big difference.

Cheers,
   Graham

Tuesday, June 24, 2008

Platforms and Installers

Last night I decided for some crazy reason that I wanted to get the PPC/Tiger version up and running.  So I spent a couple of hours and figured out three things.  First, the Accelerate framework for Mac makes major use of the SSE2 instruction set, so I had to work around this with a couple of better-crafted #ifdefs.  Second, the Tiger version of glext.h is missing two of the extensions I make use of for some of the high-end generation routines (GL_EXT_transform_feedback and GL_EXT_geometry_shader4).  Again, a couple of good #ifdefs seem to have taken care of this.  Finally, there must have been a couple of 32-bit to 64-bit changes going from Tiger to Leopard that I was not aware of, since I still have a few issues left with the GUI calls.  I hope to clear this up tonight and get everything compiling for 32-bit, PPC, and Tiger.  Weeeee!
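
The guards in question look roughly like this - an illustrative sketch rather than the exact Wildcat code:

  // Only lean on Accelerate's SSE2-heavy routines when SSE2 is actually there;
  // otherwise fall back to the plain C implementations.
  #if defined(__APPLE__) && defined(__SSE2__)
      #include <Accelerate/Accelerate.h>
      #define WILDCAT_USE_ACCELERATE 1
  #else
      #define WILDCAT_USE_ACCELERATE 0
  #endif

  // Tiger's glext.h never declares these extensions, so only compile the
  // high-end generation routines when the declarations exist.
  #if defined(GL_EXT_transform_feedback) && defined(GL_EXT_geometry_shader4)
      #define WILDCAT_HIGH_GEN_PATH 1
  #else
      #define WILDCAT_HIGH_GEN_PATH 0
  #endif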

Also, I took a look at InnoSetup for creating a good Windows installer.  What a massive improvement over deployment projects within Visual Studio.  Way way way way way way better.  I have an installer mostly done and just need to wrap up two things.  First, I need a good stable version of Wildcat that is worthy of a binary distribution.  Second, I need to determine exactly what .dll's need to be distributed with the app.  Mostly these will be VS run-time files.  I should have this done later this week.

Tomorrow I hope to finish up with CCI and CLI and begin moving on to SLI, SCI, and SSI.  I also need to fix up trim surface generation.  That is still very broken.  Blah.

Cheers,
   Graham

Monday, June 23, 2008

GPU Curve-Curve Intersection

I was traveling this weekend so I printed out a couple of recent conference papers to read while in the airports.  The annual Solid Modeling conference was just held in early June.  Usually it has some very interesting results for those of us who dabble in solid modeling.  I came across a paper that was an extension of something I saw last year.

Sara McMains' research group out of UCB has been doing some great work on GPU-based NURBS generation and manipulation.  My research last summer that culminated in the genesis of Wildcat parallels much of what her group published in "Direct Evaluation of NURBS Curves and Surfaces on the GPU."  The approach you see in Wildcat for using the GPU to generate NURBS curves and surfaces is very similar to hers.

This year her group has followed that paper with "Performing Efficient NURBS Modeling Operations on the GPU."  I will let you read it because it is pretty good.  They tend to use too many passes on the GPU while I consolidate down to one or maybe two passes, but I like their approach.  I have not reviewed it in detail, but their stream-reduction algorithm seems very promising.

So, today I took some time and reworked my curve-curve intersection (CCI) algorithm to use the GPU akin to what you see in the McMains paper.  Overall it seems to work really well.  We are probably getting a 20-50x improvement in performance.  Not too bad.  There are still a couple of details to clean up, but this should be the way of the future for Wildcat.

Also, I got some good messages over the weekend about a broken Windows build.  I cleaned up the VS project (and moved all of the code to a VC9 project).  So you Windows folks, please try again.  Tomorrow I am going to take a shot at surface-surface intersection.  Should be pretty easy since I can pattern off of CCI.

Cheers,
   Graham

Thursday, June 19, 2008

PartPad is back from the dead - almost

When I set about restructuring the 3D feature code I quickly killed the Pad and Shaft features.  They both died in the aftermath of auto-LOD and topological correctness.  Between the checkins I made yesterday and those from today, I am happy to say that Pad is back!

Here is a quick review of the primary changes:
  1. Instead of having one surface for the top of the pad regardless of the number of separate bodies, there should be one top and one bottom per body.  This is almost done.  I am up to the last step, where the actual surfaces are generated.  All of the data necessary is now in place.  Should be done tomorrow.
  2. Topology model generation wasn't even a thought in the old version.  Now, as the extrusion points, curves, and surfaces are being generated they are recorded in a way that greatly eases topology model creation.  Again, I am just up to the point of creating the topology model.  I am going to spend tonight figuring out the algorithm details and tomorrow hope to implement and do some testing.
  3. Points!  An eagle-eyed user would have noticed that no points were ever generated with a Pad.  I only had curves and surfaces.  Now points are there and are correct.  Yahoo.
  4. Initial code is in place to eventually support pads that go "UpTo".  Here is the list of types I want to be able to support: Dimension, UpToNext, UpToLast, UpToPlane, UpToSurface.  Right now only Dimension works, but we shouldn't have to change the object interface much in the future.
Lastly, the code is much more efficient and logical now.  When I first wrote it I just wanted to make sure that it worked.  Now it not only works but works well.  I hope to finish up all of the little details tomorrow.  Next week I want to get shaft back working.

Once these two are back I can move on to boolean operations with the solids.  That will be the crazy cool stuff.

Cheers,
   Graham

Tuesday, June 17, 2008

Topology Models

Before I get Pad and Shaft back working I want to finish the initial topology implementation.  If I am going to spend a lot of time reworking the 3D features I want to make sure I can incorporate full topology support.  What is topology support you ask?  Great question...

You see, geometry is only half the picture.  Geometry itself doesn't understand the relationships between various curves and surfaces.  Take a cube for example.  The geometry of a face from the cube doesn't know (or need to know) all of the edges that bound it.  Instead there is a co-model that just captures these adjacency relationships.  This information becomes very useful (and is pretty necessary) for operations like Union, Subtract, Intersect, and Slice (these are the boolean operations).

I am using a topology model called the Radial Edge data structure.  It was developed by Kevin Weiler as part of his Ph.D. research.  The best source of information about it is in his dissertation, "Topological Structures for Geometric Modeling."  The radial edge structure is a variant of the venerable Winged Edge structure described by Bruce Baumgart.  The biggest improvement is the ability of the radial edge structure to model non-manifold topology (NMT).

Solid models should be 2-manifold.  This means that every edge is bordered by exactly 2 surfaces, and that all objects being represented are solids.  There would be no lines, points, or surfaces that are not part of a solid.  This isn't exactly what we want.  Take sheet metal for example.  Most CAD packages represent sheet metal with just a surface; there is no thickness.  Our topology model needs to be able to handle this.  So that is why we need NMT.
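
To give a flavor of what the data structures look like, here is a heavily simplified sketch of radial-edge style adjacency records.  The names are hypothetical, and Weiler's full structure has quite a few more entity types (regions, shells, loop uses, vertex uses, and so on).

  #include <vector>

  struct TopoVertex;  struct TopoEdge;  struct TopoFace;

  struct TopoVertex {
      // handle to the underlying geometric point would live here
      std::vector<TopoEdge*> edges;        // edges that start or end at this vertex
  };

  struct TopoEdgeUse {                     // one "use" of an edge by a face
      TopoEdge* edge;
      TopoFace* face;
      bool      sameDirection;             // does the face traverse the edge forward?
  };

  struct TopoEdge {
      TopoVertex* start;
      TopoVertex* end;
      // The radial list: every face use around this edge.  A 2-manifold edge has
      // exactly two entries; a free surface edge or non-manifold edge does not.
      std::vector<TopoEdgeUse*> radialUses;
  };

  struct TopoFace {
      // handle to the underlying geometric surface would live here
      std::vector<TopoEdgeUse*> boundary;  // ordered loop(s) of edge uses
  };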

Anyway, you can find a good bit of info about NMT and 2-manifolds on the web.  Most of the solid modeling kernels are able to handle NMT.  I am going to spend the next couple of days getting the data structures into place.  It will take way way longer to get the boolean operations working.  But with the data structures available I can then patch up some of the 3D features (Pad especially).

Cheers,
   Graham

P.S.  I really want to get to a usable version of the software before I create a Windows installer, so it could be a few days (or a week or two).  If you want to try out the app, you can download MS Visual Studio 2008 (there is a free version) and compile the code yourself.  Sorry for asking this of you.

Friday, June 13, 2008

Development Update

It has been a few days since my last progress update and there has indeed been progress.  Automatic level of detail scaling is pretty much done for NURBS curves and surfaces.  Trimmed NURBS surfaces are a whole different matter.  Let's talk about the first two...

Each NURBS curve and surface maintains lists of vectors for control points and knot points.  Using this data there are at least three methods of generating buffers of vertex data.  I call them "High", "Medium", and "Low".  This refers to the theoretical performance of each path.  The high path makes use of very recent GPU enhancements.  The medium path can be used on GPUs that may be a few years old.  The low path is purely CPU-bound.  There are also one or two specialty routines optimized for certain conditions (think surfaces with only four control points and such).
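
Picking a path boils down to checking what the GPU advertises.  Here is an illustrative sketch of that runtime check; the specific extensions tested for the Medium path are my guess, not Wildcat's actual criteria.

  #include <cstring>
  #include <OpenGL/gl.h>      // <GL/gl.h> on Windows/Linux

  enum GenerationPath { PATH_HIGH, PATH_MEDIUM, PATH_LOW };

  // Requires a current OpenGL context.
  GenerationPath pickGenerationPath() {
      const char* ext = (const char*)glGetString(GL_EXTENSIONS);
      if (ext && std::strstr(ext, "GL_EXT_transform_feedback") &&
                 std::strstr(ext, "GL_EXT_geometry_shader4"))
          return PATH_HIGH;            // newest GPU features available
      if (ext && std::strstr(ext, "GL_ARB_fragment_shader") &&
                 std::strstr(ext, "GL_EXT_framebuffer_object"))
          return PATH_MEDIUM;          // a few years old, still GPU-based
      return PATH_LOW;                 // fall back to the CPU
  }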

Each of these paths can now place their generated vertex output into either server side (on GPU memory) or client side (on CPU memory) buffers depending on which is requested.  This is done as a "service", in that vertex buffers can be generated for anything, not just rendering.  For example, the current curve-curve intersection algorithm generates a buffer of vertex data for each curve and then uses this data to test for intersection, all on the CPU.

In order to get LOD scaling I needed to re-implement the way that these buffers were being generated and maintained.  Previously each curve or surface could only manage generating one buffer, which then had to be reused for rendering, intersection, and whatever else.  Now buffer management is up to whoever asked for the buffer to be generated, and memory capacity is the only limit.

I was already passing the current camera zoom factor into all rendering methods but was not making use of it.  Now the zoom factor is used to determine an optimal level of detail.  If this LOD is close to the LOD of the render buffer nothing happens.  If the ratio of optimal LOD to current LOD gets too far off then a new render buffer is generated.  Overall a pretty good approach, I think.
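
The test itself is simple.  Here is a sketch of the kind of check described above; the names and the 1.5x threshold are illustrative, not Wildcat's actual values.

  #include <cmath>

  struct RenderBuffer {
      int lod;            // number of samples this buffer was generated with
      // ... VBO handles, etc.
  };

  // More samples as you zoom in, fewer as you zoom out, with a floor.
  int optimalLOD(double zoomFactor, int baseSamples) {
      int lod = (int)std::ceil(baseSamples * zoomFactor);
      return lod < 8 ? 8 : lod;
  }

  // Regenerate only when the optimal LOD drifts too far from the buffer's LOD.
  bool needsRegeneration(const RenderBuffer& buf, double zoomFactor, int baseSamples) {
      double ratio = (double)optimalLOD(zoomFactor, baseSamples) / (double)buf.lod;
      return ratio > 1.5 || ratio < 1.0 / 1.5;    // too coarse or wastefully dense
  }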

At some point I want to implement a vertex buffer caching scheme.  Right now if the rendering routine generates a buffer and then an intersection routine also generates a buffer with the same LOD, two identical buffers get created.  It would be great to better coordinate between them somehow.

When it comes to intersection routines my previous estimates of expected code completion were way way off.  I pretty much ended up scrapping all of the prior code and started over.  There was good reason for this but it required a lot more work.  I now have 6 of the 15 routines nearly done.  The remaining 9 are all of the ones for NURBS surfaces and trimmed NURBS surfaces.  I have an approach in mind for these, it will just take time.

While doing all this work I noticed how incredibly crappy my implementation of trimmed NURBS surfaces was.  It worked, just not all that well.  I have decided to rewrite most of it.  Right now it is mostly trashed.  This also led me to look at the code for Pad and Shaft (the two 3D features I had implemented).  They were pretty weak also, mostly just proof-of-concept code.  I have completely gutted them and it will take a couple of weeks to get them back in shape.  As a result the code I will check in this weekend will fix all of the stuff from above, but will kill off any 3D stuff for now.  It all compiles but will SEGFAULT quickly upon use.  Progress!?!?

Cheers,
   Graham

Thursday, June 12, 2008

Why not wxWidgets or Qt

I have had a number of people ask me why I have chosen not to use a cross-platform GUI toolkit like wxWidgets.  There are several reasons for this, so I wanted to explain my thinking.  There are a bunch of very good choices for GUI toolkits.  I spent a good amount of time evaluating a selection of them: GTK+, Qt, wxWidgets, Fox, and others.  I decided that I wanted to try something a bit different for two primary reasons.

As I see it, there are four primary GUI components that all contribute to the Wildcat user experience:
  1. Primary rendering window
  2. Toolbars
  3. Menus (both context and primary)
  4. Dialog windows
The primary render window is pretty straightforward to implement and get right.  I have it mostly up and running on Mac and Windows, and GTK+ for Linux should not be difficult.  The primary render window won't change much over time.  It renders using OpenGL.  It's a window.

Toolbars are a more complex issue.  What is a toolbar?  A toolbar looks really different on Mac vs. Windows.  Toolbars need to be very flexible and change constantly depending on the tasks the user is executing.  They need to be dynamic and configurable.  I have a first shot at the Mac version, and hope to incorporate the Windows ribbon interface down the road.

Menus are similar to toolbars.  They change a lot and are very dynamic.  But I don't feel they are as complex as toolbars since they tend not to be user configurable.  I have not done much with them yet on either Mac or Windows.  In time...

Lastly, dialog windows.  For CAD applications the dialog windows seem to define a lot of the user experience, but they are slowly giving way to better mouse interaction models.  I want traditional dialog boxes, but not in a traditional way.  I want them to be much more customizable.  Read on to see where I am going with this.

So, back to the two primary reasons I have decided not to go with a cross-platform toolkit.  Here we go...

Reason 1: Cross-platform does not mean platform-specific

As I mentioned above, toolbars on Mac are a very different concept than toolbars on Windows.  On Windows they are strips of icons that typically dock to a portion of a window.  The OS X interface is moving towards a "palette" concept.  The toolbars are placed together into a separate tabbed window that sits to the side of the primary window.  Can you imagine the Windows ribbon on a Mac?  Not going to happen.

For another example look at multi-document interface applications like MS Word.  If you open a bunch of Word docs on Windows you get one window with many child windows within it.  On Mac you get multiple windows.  Same concept, very different approach.

My feeling is that as users approach an application for the first time they tend to have all of the memes of the OS in mind and are expecting a similar experience.  That is what makes a Mac a Mac.  This philosophy is much stronger on Mac than Windows.  MS has taken a hands-off approach and you tend to see a wider range of look-and-feel on Windows.

When I looked at the cross-platform toolkits they all seemed to make your application look the same regardless of what OS it ran on.  In my opinion it should be the other way around: your app should look like the OS it is running on.  While this may increase developer effort, I feel that it will pay for itself with a better user experience.


Reason 2: I want traditional dialog windows, but not in a traditional way

In the intro to this post I spoke about dialog windows being fundamental to the CAD user experience.  If Wildcat doesn't use a cross-platform toolkit, doesn't that mean every dialog window will have to be re-coded for each platform?  The traditional answer would be yes.  I think that I may have a way around this...

I have embedded the open source WebKit HTML rendering engine into Wildcat.  All Wildcat dialog windows are actually simple platform windows with a web browser pointing to a local file.  Here are the advantages to this approach:
  • Cross-platform solution - WebKit runs on Mac, Win32, and has a GTK+ port
  • High-performance JavaScript engine for free - now we have a scripting engine for Wildcat operations too
  • Change the CSS, change the look-and-feel - this makes it possible to easily "skin" Wildcat to provide a new or different look
  • HTML is really easy to program - Who hasn't created a web page?
  • Dynamic generation - why not generate HTML on the fly?  Crazy idea, I know.
OK you say...interesting approach.  How do you tie the dialog window actions to the core app and vice versa?  WebKit has JavaScript hooks for all pages it renders.  I created similar hooks in Wildcat that allow data and events to be passed back and forth.

There are slight performance implications to this approach, but they are really minor and the user should never even know what is happening.  It should make it very easy to expand and change Wildcat.  There may still be hurdles I have not foreseen, but as of now it seems to be working quite well.

I have only implemented this for the Mac.  If you are using a Mac, when you first start Wildcat you are asked what type of document you want to work on.  That window is just a web browser pointing to docTypeSelector.html.  Give it a try.  Kind of fun.

Let me know what you think.
Cheers,
   Graham

Tuesday, June 10, 2008

Great Response

As many of you know I sent an article to upFront.eZine about Wildcat.  I wanted to introduce more people to the application and start to build an audience.  I am really overwhelmed by the response.  Throughout today I have been getting a great stream of emails from around the world asking about Wildcat.  Wow!!!  Thanks Ralph.

A lot of people are trying out Wildcat.  There are some mixed results.  Most of the users appear to be Windows folks, so they are having to compile the code themselves.  I hope to have a Windows binary installer out soon so please bear with me.  If you are having problems please post your question to the Wildcat mailing list here.  I am getting back to everyone pretty quickly.

I also wanted to let you know about the progress I have been making over the last few days.  I got automatic level-of-detail working for all curves and am very close to completing surfaces.  I also completely rewrote the chunk of code that handles all types of geometric intersection, but it still has a couple more days' worth of work to go.

I made the mistake of checking in part of this code yesterday so if you download from the SVN you may get some pretty funky behavior when trying to use either Pad or Shaft.  This should be fixed as soon as I get the intersection code working again.

Thanks for all the great feedback.  And please let me know if you want to help out, especially on getting the Windows port up to speed.  My MFC/Win32 skills are complete garbage.

Cheers,
   Graham

Saturday, June 7, 2008

Tasks Update

Yesterday I outlined my top development priorities.  It being a hot and lazy Saturday here in Nashville, not much is getting done, but I thought that I would update you on my progress from yesterday.  I was able to get the majority of the groundwork done for all of the various geometric intersection routines.  With 5 types of geometry (point, line, NurbsCurve, NurbsSurface, and TrimmedNurbsSurface), there are 25 possible ordered intersections.  That is easily reduced to 15 types when order is not a factor.  Of those 15, I pretty much wrapped up 3 of them.  But I also outlined the other 12.  Now I have to get the level-of-detail stuff working and then I can wrap up intersections.

For LOD I was able to lay out nearly all of what needs to be done.  It should take me only a couple of days this next week to implement and test.  So, I hope that by the end of next week you all will get to start experiencing some of the new code.

Have a great weekend.  Cheers,
   Graham

Friday, June 6, 2008

What I'm Working On

Today I will lay out some of the items I am working on.  Since starting up this blog and the code site, I haven't been able to spend quite as much time coding as I would have liked, but it has been fruitful to begin getting the application into others' hands.  I am still working through a couple of deployment bugs (most notably a .dll dependency issue for the Windows version), but hopefully I will have solid versions and installers for both platforms soon.

Also, I have had someone express an interest in starting some of the work for porting Wildcat to Linux and GTK+.  This should be really straightforward since Mac already uses GCC 4.2.  The GTK+ GUI work will be the main hurdle.  For the time being, the Mac version will continue to lead the pack for GUI and overall functionality.

Ok, here are my top development priorities right now:

1)  Automatic level-of-detail scaling for all geometry - this means that as you zoom in and out, the level of detail of each piece of geometry will react accordingly.  Right now geometry is generated once and you are stuck with it.  Zoom way in on a circle and it starts to get chunky.  Also, this will enable out-of-view culling, meaning that if a portion of the geometry is not visible on the screen, it just won't be drawn.  A nice optimization as we try to move towards more complex geometry and assemblies.

2)  Topologically correct Pad and Shaft operations - right now if you sketch two circles that don't overlap or touch and extrude them, the kernel will treat them as one surface.  Obviously this is not correct; they should be two surfaces.  This was done originally as an optimization, but now I see the folly of my ways.  All of the infrastructure needed for this is now in place; I just have to spend a lot of time massaging the relatively complex pad and shaft operations to be smarter.

3)  Manifold/Non-Manifold topology representation.  All solid geometry should be 2-manifold.  This just means that it is a valid solid.  This is necessary to provide boolean operations (union, subtract, and intersect).  Most of the data structure work is in place; now I just need to clean that up and add topological operations to all 3D features (pad and shaft right now).  Strangely, this work is completely separate from #2 above.  For more detail about this topic read "Topological Structures for Geometric Modeling" by Kevin Weiler.  This is a great dissertation about non-manifold topology and boolean operations.

4)  Geometric intersections.  There are five classes of geometry in Wildcat: point, line, NURBS curve, NURBS surface, and trimmed NURBS surface.  I started with no line primitive, but there are way too many optimizations for lines so they got added as a separate class.  All curves and surfaces are 64-bit NURBS representations.  This provides for extremely accurate geometry.  With 5 classes there are 15 possible types of intersection; here is their status:
  • Point-Point: 100% done
  • Point-Line: 90% done
  • Point-Curve: 75% done - need to clean up point-to-curve projection algorithm
  • Point-Surface: 50% done - need to clean up point-to-surface projection algorithm
  • Point-TrimSurface: 0% done
  • Line-Line: 75% done - need to handle overlap case, not just single-point intersection
  • Line-Curve: 75% done - need to handle overlap case, not just point intersections
  • Line-Surface: 0% done
  • Line-TrimSurface: 0% done
  • Curve-Curve: 75% done - need to handle overlap case, not just point intersections
  • Curve-Surface: 0% done
  • Curve-TrimSurface: 0% done
  • Surface-Surface: 0% done
  • Surface-TrimSurface: 0% done
  • TrimSurface-TrimSurface: 0% done
As you can see, anything Surface/TrimSurface related needs work.  This is a very complex space with lots of research having been done.  There are a ton of different Surface-Surface-Intersection (SSI) algorithms out there.  I am currently cleaning up some of the general intersection infrastructure and then plan on taking a shot at SSI.

5)  Boolean operations.  These functions will allow for realistic and complex 3D parts to be created.  They are absolutely fundamental to getting the Wildcat kernel where it needs to be.  They are also wildly complex and rely upon nearly all of the above work.  This is the #1 item I want to get working, but I have a ways to go.

6)  Geometric constraint solving.  2D sketches can be created and have basic constraints added to them, but right now these constraints do nothing but look good.  There is really only one GCM engine in commercial use (by D-Cubed - a division of UGS), and a handful of academic ones.  I have spent a great deal of time looking into this, and I think that with a few months of good work I could get one working.  This is important but a lower priority than Boolean Operations.

Wow, lots to do.  When these items are in place it should be possible to quickly add the tools to build nearly any type of 3D part.  This is when I see Wildcat really starting to take off.  I have no idea how long it will take but summers do tend to be my productive time.  This being said, I would really love to get help.  If you want to take a shot at any of these items, please feel free to contact me.

Cheers,
   Graham

Wednesday, June 4, 2008

System Requirements

I have been getting feedback from a few users and this is proving very useful.  I have not previously published what the system requirements are for running Wildcat, so here goes:

For Windows Systems
  • Windows XP
  • Pentium 4 class processor or better (1.5GHz+ preferred)
  • At least 50MB of disk space
  • At least 1GB of RAM
  • OpenGL 2.1 capable video card with 128MB of RAM
Note: The graphics card requirement is the most important.  All aspects of Wildcat presume that you have a relatively recent video card and up-to-date OpenGL drivers.

For Mac OS X
  • OS X 10.5 Leopard (Tiger may work, but I have not tried)
  • Intel 32/64-bit processor
  • At least 50MB of disk space
  • At least 1GB of RAM
  • OpenGL 2.1 capable video card with 128MB of RAM
Note: Apple provides a software fallback path for machines that don't have compatible video cards.  While I have not tested this, it should at least make it so that all Intel Macs can run Wildcat.

My Development and Testing Setup

Just to help folks out on what I am working with and to give you an idea of what should work here are the two systems I use on a daily basis:

Primary Development -
Apple MacBook Pro
OS X 10.5.3
2.4 GHz Intel Core 2 Duo processor
2GB RAM
40GB HD partition
nVidia GeForce 8600M GT w/ 256MB video RAM
XCode 3.1 development environment

Windows Development -
MS Windows XP SP 2
2.2 GHZ Intel Core 2 Duo processor
2GB RAM
160GB HD partition
ATI Radeon HD 2400 XT w/ 256MB video RAM
MS Visual Studio 2005 development environment

For just running Wildcat you should not need Visual Studio or XCode.  They are necessary only if you are planning to compile Wildcat yourself.  I am going to post this content into the Wiki also.  Please let me know if you have any questions.

Cheers,
   Graham

Setting Expectations

I want to set some expectations about Wildcat.  People are starting to look at the application and are starting to use it, all of which makes me very excited.  But this also means that people are pushing it in ways that I have not.  Again this is a good thing.

As of right now Wildcat is not capable of replacing any existing CAD application.  There are just too many gaps in its functionality.  What I hope it can do is show what is possible.  I am continuing to work on it daily, but one person does not a CAD app make.

In an effort to properly set expectations here is a quick list of major things that Wildcat currently can NOT do:
  • No boolean operations (union, subtract, intersect) of solids
  • No geometric constraint solving
  • No part assemblies
  • No drafting
  • No dress up features (fillets, drafts, chamfers, etc.)
  • No changing the name of any feature
  • No setting the color of any feature
  • No importing or exporting of non-native file formats
  • No analysis (FEA, CFD, etc.)
  • No a lot of other things
I have been concentrating on getting basic infrastructure in place and getting good 2D sketching and basic 3D part features (extrude and revolve) working.  I want to work on everything in the list above; it will just take time.

Over the last year Wildcat has evolved substantially.  There is still a really really really long way for it to go.  I would love and appreciate help and I have tried to make the code approachable and understandable.  If you would like to jump in and help turn Wildcat into the application we all want it to be, please contact me at graham.hemingway@gmail.com.

Cheers,
   Graham

Tuesday, June 3, 2008

Application Installers - Part Deux

There should now be automated installers for both Windows and Mac posted.  They should both work well, but let me know if you have any issues.  Due to platform differences the two installers do quite different things.

Windows

Since Windows does not have a "bundle" in the Mac sense, I am just locating everything inside one simple directory.  The installer should create \Wildcat wherever the app is installed (into Program Files by default).  Inside \Wildcat there should be six files and one sub-folder.  The files are the primary executable (Wildcat.exe) and the five dependency libraries.  The sub-folder should be named Resources.  It contains all of the .tiff images, configuration XML files, and OpenGL shaders (.vsh, .gsh, .fsh).  With this, you should be good to go.

Mac OS X

On Mac there is a "bundle" concept.  All of the *.app objects in the /Applications directory are not single objects; they are actually folders.  Inside these folders are most of the necessary files for the application.  On Mac the only external dependency library Wildcat requires is xerces, and the xerces dylib has to be installed into /usr/local/lib.  In addition, Mac is much more specific about the permissions on all of the files.

With all of this in mind the Mac installer should copy Wildcat.app to the selected location (/Applications by default).  It will also copy xerces-c.28.0.dylib into /usr/local/lib and create two symbolic links (xerces-c.dylib and xerces-c.28.dylib) in the same directory.  These both point to the actual library.

Down the road I really need to improve the Mac installer to conditionally install xerces only if it does not already exist.  Have a try with either the Windows or the Mac version.  I welcome feedback.

Cheers,
   Graham

Monday, June 2, 2008

Application Installers

Last week I began posting the source code for Wildcat CAD here.  I also want to provide binaries for both Mac and Windows.  I have never really distributed applications before, so I did not have all the details down.

I found out that there is quite a difference between developing the code and just trying to run it.  There has been a dependencies directory in the source code from the beginning, but I hit a couple of snags in distribution.  On OS X the application would not run and exited with an error about not being able to find the xerces-c library (I use xerces for easy XML parsing and generation).  I opened a bug on this and today set about trying to make a nice installer for Mac.

Mac has some really easy deployment tools included in the developer package.  So I have spent a bit of time working with them, and I think (still in testing) that I now have a basic OS X installer.  I should wrap up testing on the installer by EOD today and will close the bug.  One important point: right now the OS X version of Wildcat CAD only supports 10.5 and Intel.  If there is sufficient demand I will see about putting out a Tiger version and maybe a PPC version.

In the grand scheme of things having nice installers is probably pretty low on the functionality list, but I really want to encourage users to try out the software and having a drop-dead easy deployment is a really important part of this.  Maybe I will even work on some code today.  Tomorrow is Windows installer day!

Cheers,
   Graham