GImpact vs convex-decomposed mesh collision performance

dphil
Posts: 237
Joined: Tue Jun 29, 2010 10:27 pm

GImpact vs convex-decomposed mesh collision performance

Post by dphil »

Making use of the new HACD convex decomposition library, I am wondering if anyone can shed light on the performance and stability of using a raw mesh (btGImpactMeshShape or btBvhTriangleMeshShape) versus a convex decomposition of that mesh. I know in the past I have found that collisions with GImpact meshes sometimes seem unstable or strange. I am guessing a compound made up of convex hulls of the decomposed mesh would be more stable, given the presumably more robust convex hull collision algorithms. But then, what about performance? For example, if I break a 3600-triangle mesh into 134 convex hulls and put them in a btCompoundShape, will this still give better performance? Or is there some rough guide as to how many convex sub-hulls give performance equivalent to an n-triangle mesh? In the extreme cases, one convex hull encompassing the mesh would obviously give better performance, and one hull for every single triangle would give worse (I've tried this, accidentally :) ). So somewhere in between there must be a point where decomposition loses its benefits...
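
For concreteness, the compound I have in mind would be built roughly like this - a sketch against the HACD API as bundled with Bullet (buildCompoundFromHACD is my own illustrative helper, and the exact HACD method names may differ between versions):

#include <vector>
#include "btBulletDynamicsCommon.h"
#include "hacdHACD.h"

// Wrap each HACD cluster in a btConvexHullShape and collect the hulls
// in a single btCompoundShape.
btCompoundShape* buildCompoundFromHACD(HACD::HACD& hacd)
{
    btCompoundShape* compound = new btCompoundShape();
    btTransform identity;
    identity.setIdentity();

    for (size_t c = 0; c < hacd.GetNClusters(); ++c)
    {
        // fetch the vertices/triangles of cluster c's convex hull
        std::vector< HACD::Vec3<HACD::Real> > points(hacd.GetNPointsCH(c));
        std::vector< HACD::Vec3<long> > triangles(hacd.GetNTrianglesCH(c));
        hacd.GetCH(c, &points[0], &triangles[0]);

        btConvexHullShape* hull = new btConvexHullShape();
        for (size_t p = 0; p < points.size(); ++p)
            hull->addPoint(btVector3((btScalar)points[p].X(),
                                     (btScalar)points[p].Y(),
                                     (btScalar)points[p].Z()));
        compound->addChildShape(identity, hull);
    }
    return compound;
}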
Mako_energy02
Posts: 171
Joined: Sun Jan 17, 2010 4:47 am

Re: GImpact vs convex-decomposed mesh collision performance

Post by Mako_energy02 »

I did some performance testing with convex hulls, convex decomposition into compound shapes, and GImpact meshes when I first started using Bullet. In general, it depends mostly on how many simultaneous collisions you expect to have.

When I started making my first Hello World simulations there was no noticeable performance hit. I had a simple btBvhTriangleMeshShape terrain that was more or less just a flat plane and a GImpact body made from the Robot mesh provided in the Ogre SDK. But when I added another 9 robots, an angled back plane, and two massive falling spheres...the performance was crippling. I was getting <5 FPS until all the collisions were over and the objects had scattered. The number of vertices seems to be a major factor in this (understandably). The Robot mesh is only 297 vertices, and I also tried with another 3000-vertex mesh and was consistently getting <0.5 FPS. I even did a weak profile of Bullet where I put a simple timer around the call to StepSimulation(), and that function call alone was taking over a second during collisions. As far as stability goes...the simulation seemed accurate from what I could tell...but I wasn't exactly stress testing stability, so I can't comment too much there.
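
For what it's worth, the "timer around StepSimulation()" was nothing fancier than this (a sketch using Bullet's btClock from btQuickprof.h; our actual wrapper differs):

#include <cstdio>
#include "btBulletDynamicsCommon.h"
#include "LinearMath/btQuickprof.h"

// Time a single physics step in microseconds.
void profiledStep(btDynamicsWorld* world, btScalar dt)
{
    btClock clock;
    clock.reset();
    world->stepSimulation(dt, 10); // allow up to 10 substeps
    printf("stepSimulation: %lu us\n",
           (unsigned long)clock.getTimeMicroseconds());
}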

So next I downgraded a bit to convex decomposition. I haven't used the newer convex decomposition you are talking about, just the old one based on John Ratcliff's code provided in the extras folder. I had been testing two primary settings (see the sketch after them for how they plug into the library's decomposition descriptor). The more accurate setting of:

Depth = 7
CPercent = 5
PPercent = 10

And the more performant setting of:

Depth = 5
CPercent = 5
PPercent = 15
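
(For reference, those settings plug into the old library's DecompDesc roughly like this, following Bullet's ConvexDecompositionDemo; field names may differ slightly in your copy of the extras folder:)

#include "ConvexDecomposition/ConvexDecomposition.h"

// myCallback must implement ConvexDecompInterface; it receives each
// convex hull as it is produced.
unsigned int decompose(unsigned int vcount, const float* vertices,
                       unsigned int tcount, unsigned int* indices,
                       ConvexDecomposition::ConvexDecompInterface& myCallback)
{
    ConvexDecomposition::DecompDesc desc;
    desc.mVcount   = vcount;     // input vertices (float triplets)
    desc.mVertices = vertices;
    desc.mTcount   = tcount;     // input triangles (index triplets)
    desc.mIndices  = indices;
    desc.mDepth    = 5;          // "Depth" above: recursion depth of the splits
    desc.mCpercent = 5;          // "CPercent": concavity threshold, in percent
    desc.mPpercent = 15;         // "PPercent": split/merge volume threshold, in percent
    desc.mCallback = &myCallback;
    return ConvexDecomposition::performConvexDecomposition(desc);
}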

I have yet to test any other settings. The first one listed, the more accurate one...did provide much better performance. In my generic simulation with 10 robots, 2 planes, and two spheres I was back to getting 50+ FPS. So we kept it at that until I got around to making my gravity well class. Once I had that, everything was getting sucked into the center, forcing a constant 20+ collisions between all the objects every frame. This caused my framerates to drop again, to between 5 and 10 FPS. So then we used the second set of settings mentioned above, and it was better (around 25-30 FPS), but still bad (not 60+). We did some more weak profiling of StepSimulation() here, and it ranged from 4500 to 20000 microseconds to execute (best case on the performant setting to the common case on the accurate setting). This was still a simple simulation so I can't comment much on the stability, but everything looked stable in my tests.

So ultimately we ended up using convex hulls and hand-crafted (in Blender) compound shapes, which we haven't had any performance issues with, including in the gravity well simulation. StepSimulation() usually takes 250 to 350 microseconds to execute with the simple shapes. Sorry these numbers aren't more concrete or rigorously profiled. Hope this helps shed some insight.

Edit: I should clarify...with the exception of where I specified otherwise (the spheres and terrain planes), all these tests were done by applying these techniques to the 10 Robot Actors in the simulation.
Flix
Posts: 456
Joined: Tue Dec 25, 2007 1:06 pm

Re: GImpact vs convex-decomposed mesh collision performance

Post by Flix »

I've never really done any reliable performance tests myself, actually...

Anyway, what I usually do when I have a concave mesh is:
1) Use btBvhTriangleMeshShape for the static background environment (unless I experience some problem with it).
For dynamic meshes:
2) Use convex decomposition compound shapes for dynamic rigid bodies (I can tweak them so that I have more or fewer child shapes made of more or fewer vertices, trading precision against speed). [Edit:] To tell the truth, whenever possible, if the mesh is simple enough, I prefer making the compound shape manually, so I can optimize it better...
3) Use btGImpactMeshShape when I want better collision precision than point 2 (*), when I need to retrieve the triangle index (**), and when I need to deform the mesh at runtime.

(*) Sometimes convex child shapes can be rather small and might not work well with Bullet, or small colliding bodies can get stuck inside them in some cases.
(**) Usually I "clean" the mesh of duplicated vertices and degenerate triangles before feeding it to a btStridingMeshInterface, but this process "breaks" the original triangle indices, so it is better not to clean the mesh in those cases (or to set up some kind of index mapping...).
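
The "cleaning" I mean is essentially vertex welding, something like the sketch below (illustrative code, not from Bullet; note that oldToNew is exactly the index mapping that gets lost if you throw it away):

#include <map>
#include <vector>
#include "LinearMath/btVector3.h"

// Strict ordering on vertices so they can be used as map keys.
struct LessVec
{
    bool operator()(const btVector3& a, const btVector3& b) const
    {
        if (a.x() != b.x()) return a.x() < b.x();
        if (a.y() != b.y()) return a.y() < b.y();
        return a.z() < b.z();
    }
};

// Welds exactly-duplicated vertices; oldToNew[i] says where original
// vertex i ended up, i.e. the remapping that "breaks" triangle indices.
void weldVertices(const std::vector<btVector3>& in,
                  std::vector<btVector3>& out,
                  std::vector<int>& oldToNew)
{
    std::map<btVector3, int, LessVec> seen;
    oldToNew.resize(in.size());
    for (size_t i = 0; i < in.size(); ++i)
    {
        std::map<btVector3, int, LessVec>::iterator it = seen.find(in[i]);
        if (it == seen.end())
        {
            oldToNew[i] = (int)out.size();
            seen[in[i]] = (int)out.size();
            out.push_back(in[i]);
        }
        else
            oldToNew[i] = it->second;
    }
}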
dphil
Posts: 237
Joined: Tue Jun 29, 2010 10:27 pm

Re: GImpact vs convex-decomposed mesh collision performance

Post by dphil »

Thanks for both your inputs.

The concave meshes I use are typically complex anatomical models received from third parties, so custom-made compounds aren't really an option.

From what I can tell, Bullet's old decomposition algorithm (which I'll call ODA here) and the new HACD work in somewhat opposite ways. The old one seemed to create a single wrapping convex hull around the mesh and iteratively break it down (to the depth specified), whereas HACD works from the bottom up, building clusters starting from individual triangles. As such, ODA is much faster at producing small numbers of low-res convex hulls and deeper decomposition takes a while, whereas HACD is fast at a complete decomposition but slow to compute smaller numbers of aggregated convex hulls (i.e. to combine clusters). ODA and HACD also provide different controls over the process. For example, HACD doesn't specify a decomposition depth while ODA does, which I kind of miss. HACD works with more general "minimum" and "maximum" constraints (e.g. a minimum number of clusters), but there are no hard rules to tell it when to stop, as with ODA's decomposition depth.
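
For anyone curious, the HACD knobs I'm referring to look roughly like this in code (a sketch against the HACD version bundled with Bullet; method names may vary between versions):

#include <vector>
#include "hacdHACD.h"

// points/triangles are assumed to have been filled from the mesh already
void runHACD(std::vector< HACD::Vec3<HACD::Real> >& points,
             std::vector< HACD::Vec3<long> >& triangles)
{
    HACD::HACD hacd;
    hacd.SetPoints(&points[0]);
    hacd.SetNPoints(points.size());
    hacd.SetTriangles(&triangles[0]);
    hacd.SetNTriangles(triangles.size());
    hacd.SetNClusters(2);        // the "minimum # clusters" constraint
    hacd.SetConcavity(100.0);    // stop merging clusters past this concavity
    hacd.SetNVerticesPerCH(100); // cap on vertices per output hull
    hacd.Compute();
    // hacd.GetNClusters() hulls are now available via GetCH()
}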

In general, I think I find ODA a bit more intuitive in how it's used and the results it produces, while HACD is more accurate and handles complex cases better. For example, I still haven't figured out how to get HACD to do the simplest level of decomposition possible - i.e. wrapping the whole mesh in a single convex hull. With ODA I believe that was as simple as setting a decomposition depth of 0. Though perhaps this is just because of the opposite bottom-up/top-down approaches of the two algorithms. Still, one would think that a clustering algorithm would eventually be able to cluster all points together as one. For example, with one test mesh of a fish (not too complex, but with some concave features like fins, eyes and mouth), which had about 900 triangles, I couldn't get HACD to produce fewer than 20 clusters (the convex hulls, essentially), regardless of the parameters I set (I tried all sorts of extreme values). I mean, the result was nice...but it's almost as if it refused to reduce the resolution below a certain threshold by merging more clusters. Perhaps I am missing something.

Anyway, Flix, I find your 3 points to be a good rule of thumb for what to do with concave meshes. One other thing I would add is decomposition for the possibility of internal collision detection. Concave meshes only detect collisions with their surfaces, while their convex pieces after decomposition detect collisions inside as well. Of course, I guess this depends on the decomposition settings. Take a mesh tube, for example. At low decomposition depths, the tube would be wrapped in convex hulls, allowing detection of anything inside it. At high decomposition depths/resolutions, surface patches of the tube mesh would be turned into convex hulls, leaving the inside of the tube outside of the hulls, and thus unable to detect objects inside the tube.

As for general performance, I guess I'll just have to take it on a case-by-case basis. If I encounter much of a slowdown with a GImpact mesh, I can try different levels of decomposition and just take whatever's fastest (within reasonable collision accuracy).
khaled
Posts: 22
Joined: Mon May 02, 2011 3:26 pm

Re: GImpact vs convex-decomposed mesh collision performance

Post by khaled »

Thanks for the feedback.
In order to get a 1-convex-hull approximation you need to set the concavity parameter to a big value (e.g. 10000) and the number of clusters to 1.
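Using the same HACD calls as in the sketches above:

hacd.SetNClusters(1);       // ask for a single cluster...
hacd.SetConcavity(10000.0); // ...and allow arbitrarily concave merges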
Meshes with multiple connected components (CCs) were not well handled by the old version of HACD, since it did not merge CCs (cf. http://codesuppository.blogspot.com/201 ... onvex.html for details).
I have fixed this issue by adding the possibility to merge CCs. The current version makes it possible to produce any number of convex hulls specified by the user, even for multiple-CC meshes (cf. http://codesuppository.blogspot.com/?).
I am not sure whether the latest version of the HACD library has been integrated into Bullet or not...
dphil
Posts: 237
Joined: Tue Jun 29, 2010 10:27 pm

Re: GImpact vs convex-decomposed mesh collision performance

Post by dphil »

Ah, perfect. Seems John noticed the exact same issue. Apparently the code in the Bullet repository is not the latest, so I updated to the newest HACD, and using the connectDist parameter it works great; I can cluster everything into one convex hull. You say:

"it possible to produce any number of convex-hulls specified by the user"

Is it possible to specify the number of hulls desired? I don't see this option. Or, rather, do you mean that by setting the nClusters parameter to the desired number of hulls, and setting the connectDist parameter high, one will effectively get nClusters hulls?

The only other thing I wonder about is the connectDist value. I had a mesh whose dimensions were roughly 8x3x3, yet I had to set the connectDist value to 32 or higher to get it to cluster everything together. At a value of 31, for example, it would produce 2 clusters. Since every triangle is definitely closer than 31 units to every other, I wonder why this happens. Maybe connectDist does not correspond directly to spatial distance, but to some internal value, or perhaps there is some scaling going on? (Though I tried changing setScaleFactor, with no effect.)
khaled
Posts: 22
Joined: Mon May 02, 2011 3:26 pm

Re: GImpact vs convex-decomposed mesh collision performance

Post by khaled »

Glad to hear it is working.
Yes, the number of clusters = the number of hulls.
The mesh is centered and scaled:

const Real invDiag = static_cast<Real>(2.0 * m_scale / m_diag);
for (size_t v = 0; v < m_nPoints; v++)
{
    // center the points at the barycenter, then scale so that the
    // mesh diagonal m_diag maps to 2 * m_scale
    m_points[v] = (m_points[v] - m_barycenter) * invDiag;
}
The default value for m_scale is 1000.0, so the coordinates should lie in the interval [-2000, 2000]^3.
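
So, assuming connectDist is compared in these normalized coordinates and m_diag is the bounding-box diagonal, a rough back-of-the-envelope for your 8x3x3 mesh:

m_diag      = sqrt(8^2 + 3^2 + 3^2) ≈ 9.06
invDiag     = 2 * 1000 / 9.06       ≈ 220.7
connectDist = 32 (normalized)       ≈ 32 / 220.7 ≈ 0.145 units in the original mesh coordinates

which would explain why values much larger than the mesh dimensions are needed.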