Non-convex / non-convex collision: any hints?

tasora
Posts: 26
Joined: Mon Aug 15, 2005 8:57 am
Location: Italy

Non-convex / non-convex collision: any hints?

Post by tasora »

Hello!
I am new to this forum, and I really like the
idea of a forum about collision detection.
I've been developing my multibody simulation
software (CHRONO) for years, but only recently have I added
some reliable collision detection.

The following links are some DivX animations of my first
experiments dealing with contacts & collisions
(each web page has a link to the AVI and a short
description - feel free to add comments)

http://www.deltaknowledge.com/gallery/d ... hp?pid=136
http://www.deltaknowledge.com/gallery/d ... hp?pid=137
http://www.deltaknowledge.com/gallery/d ... hp?pid=139
http://www.deltaknowledge.com/gallery/d ... hp?pid=140
http://www.deltaknowledge.com/gallery/d ... hp?pid=141
http://www.deltaknowledge.com/gallery/d ... hp?pid=142
http://www.deltaknowledge.com/gallery/d ... hp?pid=143

Note that all the simulations were performed
using a custom simplex solver for the LCP problem.
About my simplex method: I developed some optimizations based
on the Dantzig scheme, and I was able to cut the
computation times of the simplex solver by exploiting
the sparsity of the system matrices (however, I am now
convinced that it isn't worthwhile to optimize the
simplex method any further, since for systems with
1000+ bodies the only practical way to go is a
fixed-point method like projected Gauss-Seidel,
though approximations must be accepted...).
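Since the post contrasts the simplex method with projected Gauss-Seidel, here is a minimal sketch of the latter for the LCP w = Az + b, z >= 0, w >= 0, z·w = 0. The dense-matrix form and fixed iteration count are illustrative only; a production solver would exploit sparsity and add friction bounds.

```python
import numpy as np

def projected_gauss_seidel(A, b, iterations=50):
    """Solve the LCP  w = A z + b,  z >= 0, w >= 0, z.w = 0
    by sweeping over the rows and clamping (projecting) each
    unknown onto z_i >= 0 after every relaxation step."""
    n = len(b)
    z = np.zeros(n)
    for _ in range(iterations):
        for i in range(n):
            # residual of row i using the latest values of z,
            # excluding the diagonal term we are about to solve for
            r = b[i] + A[i].dot(z) - A[i, i] * z[i]
            z[i] = max(0.0, -r / A[i, i])
    return z
```

Each sweep relaxes one row at a time and projects onto the non-negative orthant, which is why it scales to 1000+ bodies at the cost of only approximate convergence.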

My collision detection engine uses both OBB and
AABB BVHs, plus custom methods for primitive pairs
(I am using a box-box algorithm based on the nice
theory of Kenny E., but there is also box-sphere,
sphere-sphere, etc.)

I am planning to implement the GJK algorithm of the
Bullet library, in order to have more general and
versatile handling of all convex shapes. However, given
that GJK returns a single point of contact, I
see that your examples handle cases with _multiple_ contacts
(box-box, for example) by updating a persistent contact
manifold. I already did this in the past, but I felt
that it was a source of trouble with complex shapes
(instability, jumps, etc.). Maybe Kenny has already
pointed out this fact?
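For readers unfamiliar with the persistent-manifold idea under discussion, here is a toy sketch. The class name, thresholds, and eviction rule are all illustrative; this is not Bullet's actual btPersistentManifold logic.

```python
class PersistentManifold:
    """Toy persistent contact manifold (hypothetical API): keeps up
    to 4 contact points, stored in each body's local frame, and
    drops points that have drifted apart along the contact normal."""
    MAX_POINTS = 4
    BREAK_DIST = 0.02  # assumed separation threshold

    def __init__(self):
        self.points = []  # list of (local_a, local_b, normal) tuples

    def refresh(self, transform_a, transform_b):
        """Re-validate cached points against the current transforms."""
        kept = []
        for (pa, pb, n) in self.points:
            wa, wb = transform_a(pa), transform_b(pb)
            sep = sum((a - b) * c for a, b, c in zip(wa, wb, n))
            # drop separated points; a real engine also checks
            # tangential sliding distance
            if sep <= self.BREAK_DIST:
                kept.append((pa, pb, n))
        self.points = kept

    def add_point(self, pa, pb, n):
        if len(self.points) >= self.MAX_POINTS:
            # a real engine keeps the deepest / best-spread subset
            self.points.pop(0)
        self.points.append((pa, pb, n))
```

The instability the post mentions typically comes from the refresh step: if points are dropped and re-added too aggressively, the solver never sees a stable set of contacts.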

QUESTION...
Now, coming to my question: does anyone know a practical,
fast method for computing contacts between generic
dynamic non-convex shapes?

I mean, say you have moving objects made of triangle soups: is
it possible to compute a good set of penetration points
& directions?
Ok, I know: I could use signed distance maps! HOWEVER, they
cannot represent very detailed objects (I use simulations
for engineering, so my multibody software must also simulate
gears, Geneva wheels, and similarly 'detailed' shapes..)
I bet that there could be something as robust as signed
distance maps, but more precise.. Look at the Novodex demos:
the non-convex shape collision demos look very precise even
without precomputed maps.. Which trick do they use??

regards,

/\\ Alessandro Tasora
/__\\
tasora@mech.polimi.it
http://www.deltaknowledge.com
http://www.deltaknowledge.com/tasora
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine

Post by Erin Catto »

Have you looked at convex decomposition?

http://www.cs.unc.edu/~geom/SWIFT++
kenny
Posts: 32
Joined: Thu Jun 30, 2005 8:49 am
Location: Denmark

Re: Non-convex / non-convex collision: any hints?

Post by kenny »

tasora wrote: Following links are some DivX animations of my first
experiments dealing with contacts & collisions
(each web page has the link to the AVI and a short
description - feel free to add comments)

...
Hmm, maybe I was too slow to get around to reading your post, but I think the links are dead by now?

The major problem with signed distance maps is, in my opinion, not accuracy but storage.

In fact, if you consider their ability to represent real-world objects, I would say that they are just as accurate as a polygonal model. The difference is that one is an Eulerian approximation and the other a Lagrangian approximation. Both can be made as accurate as you please by decreasing the spacing between samples.

In fact, I would postulate that the signed distance map is a more faithful representation of the real world, because you do not find sharp edges in the real world (this is a scale argument); the signed distance map allows you to control the smoothness (i.e. how sharp edges are) by changing the inter-sample spacing.

The second major problem with signed distance maps, I believe, is edge-edge (E-E) kinds of contacts; these require fine sampling (leading back to the first major problem :-)

One resolution, which I haven't yet had time to fully explore myself, is sparse representations of signed distance maps. Examples are hierarchical tilings (J. A. Bærentzen), the sparse block grid (R. Bridson), the DT-grid (Nielsen and Museth) and RLE (Nielsen and Museth). There are a few tradeoffs, but basically all of these allow you to have out-of-box simulations (if you do something other than rigid bodies), they keep storage linearly proportional to the area of your surface (the same complexity as a polygonal model), and they are defined on regular fixed-size grids.

The drawbacks are that you only have a narrow band, but that is okay, since a double representation is often used anyway to quickly prune geometry far away from the collision envelope. As I recall, the DT-grid and RLE do not have constant-time lookups; they are logarithmic, but with very small constants. The first two tiling approaches need a two-stage lookup, perhaps with some hash table involved, but this is still really cheap.
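As an illustration of the two-stage lookup just mentioned, here is a toy sparse block grid for a narrow-band distance field. The block size and the "far away" default are arbitrary choices for the sketch, not details from any of the cited papers.

```python
class SparseBlockGrid:
    """Toy sparse block grid for a narrow-band signed distance field
    (in the spirit of Bridson's sparse block grid; details are
    illustrative). Only blocks touching the narrow band are allocated;
    a lookup is a two-stage hash + in-block index."""

    def __init__(self, block_size=4, default=float('inf')):
        self.bs = block_size
        self.default = default   # "far away" value outside the band
        self.blocks = {}         # (bi, bj, bk) -> flat list of values

    def _split(self, i, j, k):
        """Stage 1: which block. Stage 2: index inside that block."""
        bs = self.bs
        key = (i // bs, j // bs, k // bs)
        idx = ((i % bs) * bs + (j % bs)) * bs + (k % bs)
        return key, idx

    def set(self, i, j, k, value):
        key, idx = self._split(i, j, k)
        block = self.blocks.setdefault(key, [self.default] * self.bs ** 3)
        block[idx] = value

    def get(self, i, j, k):
        key, idx = self._split(i, j, k)
        block = self.blocks.get(key)
        return self.default if block is None else block[idx]
```

Storage grows only with the number of occupied blocks, i.e. roughly with the surface area, as the post argues.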

So in summary: yes, you can get fine accuracy with signed distance maps, and you can get around the n^3 storage problem. But having said this, I do not think this is applicable for games in the near future... at least not for large-scale configurations...

The best thing about using signed distance maps is that contact generation becomes so obvious and intuitive (overlooking that penetration depth is a global problem, not a local one). For Lagrangian models, contact generation is painful. It is no longer obvious what a good representation is. Most people tend to favor principal contacts (i.e. pairs of geometric features); from these you get an idea of the local neighborhood of a contact point. Personally, I favor a single point of contact, a measure of distance (penetration or separation, depending on sign) and a contact normal. This is mostly due to habit and the fact that I have a lot of generic simulator pieces which have been implemented using this representation.

The best principle I have encountered so far is spatial-temporal coherence. Contacts (and their normals) should move in a piecewise-smooth, continuous way when objects move. If contacts tend to flicker or teleport around, then your simulation results often jitter and shake too much.

Regarding your question on fast non-convex collision, well, there are some tradeoffs to consider. Simple geometry (= convex) is fast; complex geometry (= non-convex) is slower.

I think most people in interactive applications use in-place geometry, for instance some hierarchy of simpler geometries. I did some work in the past trying to find an algorithm for the automatic construction of approximating hybrid BVHs, but the damn problem was too painful. I couldn't beat the quality that an animator could produce with a hand-crafted BVH.

I did manage to create a bottom-up construction method for building the BVH; the really difficult problem was to create the hybrid volumes of the BVH leaves. Convex decomposition was not an option: too many faces on non-planar surfaces lead to a bundle of small convex pieces. Classical top-down splitting methods tend to be surface based rather than volume based, so they work rather badly for this problem. I tried a few other things as well, but nothing worked as well as having an animator place a few boxes, spheres and capsules on top of the original geometry.
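A bottom-up construction like the one mentioned can be sketched as a greedy agglomerative merge: repeatedly fuse the pair of nodes whose combined AABB has the smallest volume. This O(n^3) toy version only shows the idea; it is not the method from the post.

```python
def merge_aabb(a, b):
    """Smallest AABB enclosing two AABBs, each ((lo), (hi))."""
    (alo, ahi), (blo, bhi) = a, b
    lo = tuple(min(x, y) for x, y in zip(alo, blo))
    hi = tuple(max(x, y) for x, y in zip(ahi, bhi))
    return (lo, hi)

def volume(box):
    lo, hi = box
    v = 1.0
    for l, h in zip(lo, hi):
        v *= (h - l)
    return v

def build_bottom_up(leaves):
    """Greedy agglomerative BVH build (illustrative): nodes are
    (box, left, right), with None children marking leaves."""
    nodes = [(box, None, None) for box in leaves]
    while len(nodes) > 1:
        best = None
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                merged = merge_aabb(nodes[i][0], nodes[j][0])
                if best is None or volume(merged) < best[0]:
                    best = (volume(merged), i, j, merged)
        _, i, j, merged = best
        node = (merged, nodes[i], nodes[j])
        nodes.pop(j); nodes.pop(i)  # j > i, so pop j first
        nodes.append(node)
    return nodes[0]
```

Nearby boxes get merged before distant ones, so spatially coherent clusters end up in the same subtree; the hard part the post describes (choosing good hybrid leaf volumes) is deliberately left out.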
koz
Posts: 7
Joined: Thu Apr 20, 2006 5:50 pm

Post by koz »

Hi everyone,

First off, thanks for this great forum! Reading your posts has given me some great insights into collision detection.

So, I'm interested in intersections between arbitrary non-convex polygonal meshes also. The above discussion is great, thanks Kenny and Erin!

So it seems as though hand-crafted BVHs are the way to go for real-time simulations, since automatic convex decomposition of the polygon mesh isn't always a feasible option. Are hand-crafted BVHs what most video games are using today?

Furthermore, I'm wondering if anyone has any good pointers to additional literature on non-convex collision detection. Are there any good automatic techniques that come close in performance to hand-crafted BVHs?

Also, does anyone know of any good literature that specifically discusses the efficiency and performance of BVHs for rigid bodies?

Thanks to everyone in advance, and, again, for this forum and your posts in general.
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

Hi Koz,
Are hand-crafted BVHs what most video games are using today?
Most games don't use moving concave/triangle meshes. They just use static triangle meshes and moving (compound) convex shapes. The biggest issue is stable contact generation.
How would you create stable contacts using discrete collision detection between triangles, especially when there is no volume, conflicting contacts, and tunneling? A BVH doesn't solve such contact generation issues.

So most games use (compound) convexes with volume. See this forum topic: http://www.continuousphysics.com/Bullet ... .php?t=292
Unstable contact generation (and tunneling), memory consumption and/or performance are the issues. Both Havok and Ageia don't really recommend/support moving triangle meshes. Ageia PhysX has PMAPS, but they take up memory. As for in-house physics engines, I doubt many handle moving concave meshes, but it is perhaps good to ask developers.

There are two cases that make moving concave more doable:
1) Continuous Collision Detection
Doom 3 handles moving 'concave' geometry through advanced continuous collision detection.
2) Volumetric concave, using Signed Distance Maps
Signed Distance Maps take up a lot of memory (see this posting).
So if you use discrete collision detection, I think it's better to avoid moving concave triangle meshes. Both performance and contact generation are troublesome, unless you do continuous collision detection.

Would be good to get some other physics developers comment on this.
Anyone?

Erwin

PS: a promising convex decomposition is in the works here:
http://codesuppository.blogspot.com/200 ... ition.html
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine

Post by Erin Catto »

I agree with Erwin. It is tempting to do mesh vs. mesh because that's what artists create. That is a road I've never taken, due to the problems Erwin mentioned.
koz
Posts: 7
Joined: Thu Apr 20, 2006 5:50 pm

Post by koz »

Many thanks for the reply, Erwin.
How would you create stable contacts using discrete collision detection between triangles, especially when there is no volume, conflicting contacts, and tunneling? A BVH doesn't solve such contact generation issues.
So my particular interest is not in arbitrary triangle soups, but in fully connected concave meshes - in other words, closed meshes with volume. What would be the limitations of a BVH in this case?

Also, it seems to me that using compound convexes to approximate concave objects is very limiting. How would you properly detect collisions with a horseshoe, for example, or a banana?

Thanks again!
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

koz wrote:Many thanks for the reply, Erwin.
How would you create stable contacts using discrete collision detection between triangles, especially when there is no volume, conflicting contacts, and tunneling? A BVH doesn't solve such contact generation issues.
So my particular interest is not with arbitrary triangle soups, but with fully connected concave meshes. In other words, closed meshes with volume. What would be the limitations of a BVH in this case?
The BVH doesn't solve the actual problems; it is just an acceleration structure. The actual problems (in my opinion) are penetration depth calculation and stable contact manifold generation.

If you still want to do moving concave shapes with volume, then penetration depth can be calculated using signed distance maps, for example. But that takes a lot of memory. There are other approaches, especially if you perform a tetrahedralization of your concave volume.
See "Consistent Penetration Depth Estimation for Deformable Collision Response" by Matthias Muller, http://graphics.ethz.ch/~mattmuel
Also, it seems to me like using compound convexes to approximate concave objects is very limiting. How would you properly detect collisions with a horseshoe for example, or a banana?
You don't need that many parts to approximate a banana or horseshoe.
See the previous posted link at http://codesuppository.blogspot.com/200 ... ition.html


And in a game, usually you don't need that much accuracy. A car chassis is usually approximated by a box or a convex hull. Ragdoll limbs are like cylinders, boxes, or sets of spheres. Even approximating shapes with a collection of spheres is a crude but still useful way of doing rigid-body dynamics in games.
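As a concrete example of the sphere-collection approximation just mentioned, here is a sketch of contact generation between two sphere compounds. The tuple layout and the contact-point placement are illustrative choices, not any engine's actual convention.

```python
def compound_sphere_contacts(spheres_a, spheres_b):
    """Contacts between two bodies approximated as sphere compounds.
    Each sphere is ((x, y, z), radius); returns a list of
    (point, penetration_depth, normal) tuples (illustrative layout)."""
    contacts = []
    for (ca, ra) in spheres_a:
        for (cb, rb) in spheres_b:
            d = tuple(b - a for a, b in zip(ca, cb))
            dist = (d[0] ** 2 + d[1] ** 2 + d[2] ** 2) ** 0.5
            pen = ra + rb - dist
            if pen > 0.0 and dist > 1e-12:
                n = tuple(c / dist for c in d)  # normal from A towards B
                # place the contact point midway through the overlap
                mid = tuple(a + c * (ra - 0.5 * pen)
                            for a, c in zip(ca, n))
                contacts.append((mid, pen, n))
    return contacts
```

The per-pair test is trivial, which is why sphere compounds stay cheap even for fairly detailed approximations; a broad phase would normally prune the all-pairs loop.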

Erwin
kenny
Posts: 32
Joined: Thu Jun 30, 2005 8:49 am
Location: Denmark

Post by kenny »

Erwin asked if I would like to make a few comments on this topic, so here goes....

I completely agree that the most used collision detection approach in computer games today is compounds, that is, a collection of simpler geometries.

It is very easy to explain to an artist how this works, and furthermore it allows animators/modellers/game designers etc. to model and design for performance. The idea is basically: the more accurate the collision detection, the more simple geometries in each compound, and with them comes a performance degradation.

a) It is easy to understand
b) It is easy to use
c) It provides a sense of control over performance

Besides, the technique has been used for some time, and I would say that today it is well understood and an integrated part of the workflow in most game productions (at least in my experience).

Another important argument: it is far cheaper and faster to have somebody hand-craft specialized collision geometries than to hire a person to invent something clever. This probably has something to do with the short and tight deadlines in the computer game industry... and with ``playing it safe''...

In my opinion, the next natural extension of compounds is approximating heterogeneous bounding volume hierarchies. Here are a few details on the ideas and problems:

http://www.diku.dk/publikationer/teknis ... /02-04.pdf

The great thing about these is that they provide a graceful degradation of the shape (LOD) and can be extended to support time-critical collision queries.

I have in fact used these types of hierarchies in the past, although not in a computer game context, but in rule-based virtual prototyping. This company uses these ideas:

http://www.3dfacto.dk/

The hierarchies are to a certain extent still hand-crafted, but a lot of user-guided tools exist to assist the modeller. The advantage is that the modeller is still led to believe that traditional compounds are used; in fact, the workflow and tools have not changed one bit from his viewpoint.

In fact, the hierarchy construction can be fully automated, as we have done in this OpenTissue demo:

http://image.diku.dk/svn/OpenTissue/tru ... terceptor/

More elaborate techniques could be used, such as doing branch-and-bound using a tightness measure of the BVH. Even quite expensive solution methods would be okay: all computations can be done off-line as a pre-computation step, and besides, the number of simpler shapes is quite small (N < 20-30).

I also completely agree that one of the major problems is contact point generation. In my opinion, it is not well understood what makes a good contact point representation; although in many cases it is obvious how to approach it, there are no general solutions that will work in all cases.

I would also like to add to the problem pile. Geometry handling is in my opinion one of the major painstaking tasks. All too often one is confronted with artistic (or scanned) models. They have holes, non-triangular faces, self-intersections, and all kinds of topological degeneracies. For simulation we often desire watertight two-manifold meshes without self-intersections, but we never get them :-( Instead, several clean-up tools and even hand-crafted fixing of errors are needed. What I would really like are out-of-the-box collision detection methods that work on ugly mesh models. Something like this model:

http://image.diku.dk/svn/DataTissue/tru ... ermaid.obj

Trimeshes can be used for this, but there is a performance issue combined with a contact point generation problem.

Even worse, try to make an automatic algorithm for decomposing the mermaid mesh into simpler shapes, such as spheres, boxes, cylinders, etc. It is not even obvious what inside and outside are.

So let me summarize some of my viewpoints regarding collision detection in computer games

a) compounds are a standard tool
b) approximating heterogeneous BVHs are the next step
c) contact point generation is a major problem
d) geometry handling (in general) is a major problem

Of course, much of this discussion implicitly concerns rigid bodies. However, there are lots of other things to consider:

1) deformable objects
2) CCD of deformable objects as well as rigid objects
3) various hybrid simulations; think of combining water, cloth and rigid bodies...
4) the idea of a double representation can be taken even further: collision geometry, visualization geometry, and computational geometry (for instance a computational grid). The keyword here is ``coupling'' of geometries, maybe even at different resolutions...?

/Kenny
koz
Posts: 7
Joined: Thu Apr 20, 2006 5:50 pm

Post by koz »

Thanks for the replies, guys!

I've been getting more and more interested in this problem. I'm wondering if anyone knows of any other automated convex decomposition techniques besides Ratcliff's.

I'm also looking into continuous collision detection methods which incorporate solutions for non-convex rigid bodies. Unfortunately, I can't seem to find much literature on the subject, aside from Redon's paper "Fast Continuous Collision Detection for Rigid Bodies". Any suggestions would be greatly appreciated.

BTW, where can I find some info on how Bullet handles continuous collision detection? Does it work on non-convex objects, or does it require some convex decomposition?

As always, many thanks!
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

Brian Mirtich has some good ideas about continuous collision detection / time of impact based methods in his PhD thesis:
http://www.kuffner.org/james/software/dynamics/mirtich/

Especially page 31, estimating the time of impact.
He uses ballistic motion, which complicates calculations a bit. We use piecewise linear approximation.

Gino's and my ideas can be viewed as derivations of Mirtich's ideas:

Gino described continuous collision detection (purely linear, no rotation, so a convex cast / sweep) for convex objects. See the papers here:
http://www.dtecta.com/interesting/index.html

Bullet has implementations of Redon's algebraic method and Gino's GJK/simplex-based convex cast. It also includes a rotation extension, as described in this draft paper:

http://www.continuousphysics.com/Bullet ... ection.pdf

See http://www.continuousphysics.com/Bullet ... xCast.html for the 4 different implementations.

Apart from this, your physics loop needs to handle the time of impact results in some way: either step from time of impact to time of impact, just step to the first time of impact (and lose time), handle penetrations, use hybrid solutions, time warp (Mirtich), etc.
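The "step from time of impact to time of impact" option can be sketched as follows. All three callbacks are hypothetical stand-ins for engine internals, not Bullet APIs.

```python
def step_with_toi(dt, query_first_toi, integrate, resolve_collision,
                  max_substeps=8):
    """Advance one frame of length dt, substepping at each time of
    impact. `query_first_toi(h)` returns the earliest impact time in
    (0, h] or None; `integrate(h)` moves all bodies forward by h;
    `resolve_collision()` applies impulses at the current impact.
    Returns the unconsumed time (> 0 means we "lost time")."""
    remaining = dt
    for _ in range(max_substeps):
        if remaining <= 0.0:
            break
        toi = query_first_toi(remaining)
        if toi is None:
            integrate(remaining)   # no impact: take the full step
            remaining = 0.0
        else:
            integrate(toi)         # advance exactly to the impact
            resolve_collision()
            remaining -= toi
    return remaining
```

Capping the substep count is one way to bound the cost when many impacts cluster in a single frame, at the price of occasionally losing time, as the post notes.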

Erwin
koz
Posts: 7
Joined: Thu Apr 20, 2006 5:50 pm

Post by koz »

Thanks for the pointers, Erwin.

So I took a look at Mirtich's technique for estimating the TOI. If I understand correctly, he first estimates the ballistic motion of the rigid bodies using a quadratic, which requires him to compute an upper bound on the angular velocity. Then he solves for the smallest root to get a lower bound on the collision time. He recomputes the TOI between two objects more frequently as the objects approach one another. When the objects are determined to collide between two frames, he then uses an iterative one-sided root-finding technique to get the exact TOI.

Alternatively, Gino's technique solves for TOI by "shape casting," if I'm using that term correctly. It works by casting a ray towards the Minkowski difference (or CSO) of the two objects, assuming purely linear motion. First, he calculates the direction of the ray representing the relative motion of the two objects. Then, he iteratively clips the ray against support planes of the CSO until it gets below some tolerance.
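The iterative advance-until-contact idea behind such a linear cast can be sketched with a distance callback standing in for the GJK support-plane clipping; so this is the conservative-advancement flavor of the scheme, not Gino's exact algorithm.

```python
def linear_cast(dist_and_normal, rel_vel, max_t=1.0, tol=1e-4,
                max_iter=64):
    """Simplified conservative advancement for a linear shape cast.
    `dist_and_normal(t)` (a hypothetical callback) returns the closest
    distance between the two shapes at time t and the unit normal at
    the closest points, pointing from A towards B. `rel_vel` is the
    velocity of B relative to A. Returns a time of impact or None."""
    t = 0.0
    for _ in range(max_iter):
        dist, n = dist_and_normal(t)
        if dist < tol:
            return t  # close enough: report the time of impact
        # speed at which the gap closes along the contact normal
        closing = -(rel_vel[0] * n[0] + rel_vel[1] * n[1]
                    + rel_vel[2] * n[2])
        if closing <= 0.0:
            return None  # separating along the normal: no impact
        t += dist / closing  # safe (conservative) advance
        if t > max_t:
            return None
    return t
```

Because each advance is bounded by the current separation over the closing speed, the iteration can approach the surface but never tunnel through it, which is the property that makes this family of methods attractive for CCD.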

So I'm still not quite clear on your rotation extension from your draft. I'm sure I just have to think more about it.

As described in his 02 paper, Redon's technique solves for TOI based upon root finding techniques using interval arithmetic. He uses arbitrary screw motions to define the motion between frames. You mention in your draft that he switches from relative to global motion. I'm not quite clear on that, and I was hoping you could elaborate for me.

Anyway, so now I'm trying to compare these techniques. Here's what I've come up with so far:

The most obvious difference is that each technique uses a different in-between motion: ballistic, linear, linear+rotation, and screw. Screw and linear seem the least "accurate," but that's just based on my intuition for now.

In terms of performance, that's where my analysis falters, and I'm hoping you could share some insights.

Gino's/your technique integrates easily into existing GJK solvers because it uses the same support mapping function. So intuitively, since GJK is fast, I assume your method is fast too. (That kind of "proof by induction" won't win me any awards anytime soon. ;) )

Mirtich uses a root-finding technique that I've never implemented, nor really heard of, but he says it's better than the Newton-Raphson approach. I'm wondering where the biggest performance penalties are in this method?

Regarding Redon's technique, I would guess that performance varies heavily with how good a job the OBBs in the BVH do at culling away feature pairs. Unless I'm mistaken, all feature primitives are tested pairwise between overlapping OBBs. Overall, would I be correct in assuming that this technique is the slowest of those mentioned?

It's also worth mentioning that of all the techniques, Redon's is the only one which supports arbitrary non-convex polygon meshes without using some form of convex decomposition.

So, there's my newbie analysis of this stuff. I'm hoping you, Erwin, or some of the other experts can share their insights with me.

Thanks!
-Koz