help: collision: finding "first-contact" time and positions

bootstrap
Posts: 12
Joined: Tue Aug 09, 2011 1:08 am

help: collision: finding "first-contact" time and positions

Post by bootstrap »

I have finally arrived at that fateful and painful point in engine development when I need to make objects "collide correctly". Both broad phase and narrow phase collision detection are working fine now, for both "convex" and arbitrarily irregular "concave" objects. That was enough challenge and work by itself, but now another annoying obstacle presents itself.

Having detected that a collision has happened, I need to "back up" the object-pair to somewhere after the previous frame (where the object-pair were not in collision) but before the current frame (where the object-pair are in collision). This may need to be done iteratively to converge on reasonably precise "first-contact" time and positions, so a good technique must be both precise and fast.
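For the iterative part, the scheme I have in mind is a plain bisection on the frame fraction, something like the sketch below (collideAtFraction() is a placeholder for "transform both objects to the given fraction of the frame and re-run narrow-phase", which is the piece I describe next):

struct Object;   // the engine's object type (opaque here)

// Placeholder: transform both objects to 'fraction' of the way through the
// frame and re-run the narrow-phase collision test on them.
bool collideAtFraction(Object& a, Object& b, double fraction);

double findFirstContactFraction(Object& a, Object& b,
                                int maxIterations = 8, double tolerance = 0.001)
{
    double lo = 0.0;    // previous frame: known to be separated
    double hi = 1.0;    // current frame:  known to be in collision
    for (int i = 0; i < maxIterations && (hi - lo) > tolerance; ++i) {
        double mid = 0.5 * (lo + hi);
        if (collideAtFraction(a, b, mid))
            hi = mid;   // still penetrating: first contact happened earlier
        else
            lo = mid;   // still separated: first contact happens later
    }
    return hi;          // earliest fraction known to be in (or just barely in) contact
}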

Here is what happens now. The engine still has the "local-to-world transformation matrix" from the previous frame, as well as the equivalent for this frame. The engine also has an "intermediate transformation matrix" that accumulates all object rotations and translations since the previous frame. This "intermediate transformation matrix" was multiplied by the previous-frame "local-to-world transformation matrix" to generate the current-frame "local-to-world transformation matrix". That current-frame matrix was then applied to the local-coordinate vertex positions [and surface vectors] to rotate and move the object to the new location, where it was just found to be "in collision" with another object.

I assume what I need is a way to create a "fractionalized" (fractional-frame) version of the "intermediate transformation matrix" - some way to "multiply" (in some sense) the "intermediate transformation matrix" by a "fraction-of-frame" variable, to generate a new "intermediate transformation matrix" that contains only that fraction of the accumulated rotations and translations.

I could then multiply that fractional "intermediate transformation matrix" by the previous-frame "local-to-world transformation matrix" to generate a new "local-to-world transformation matrix" for the desired fractional frame time, which could then be applied to the array of local-coordinate vertices in the object to generate a new array of world-coordinate vertices.
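In code form, what I am imagining looks something like this (a sketch only: Matrix4 and Vec3 stand in for my engine's 4x4 rotation+translation matrix and vertex types, and fractionalize() is exactly the operation I am asking about):

#include <cstddef>

struct Vec3    { double x, y, z; };
struct Matrix4 { double m[16]; };                      // rotation + translation only

Matrix4 multiply(const Matrix4& a, const Matrix4& b);  // compose transforms (a applied after b)
Vec3    transformPoint(const Matrix4& m, const Vec3& v);
Matrix4 fractionalize(const Matrix4& intermediate, double fraction);   // ??? <- the question

// Transform an object's vertices to where they were at some fraction of the
// frame (0 = previous frame, 1 = current frame).
void transformToFraction(const Matrix4& worldPrev,     // previous-frame local-to-world
                         const Matrix4& intermediate,  // rotations/translations accumulated this frame
                         double fraction,
                         const Vec3* localVerts, Vec3* worldVerts, std::size_t count)
{
    Matrix4 partial      = fractionalize(intermediate, fraction);
    Matrix4 worldPartial = multiply(partial, worldPrev);
    for (std::size_t i = 0; i < count; ++i)
        worldVerts[i] = transformPoint(worldPartial, localVerts[i]);
}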

Once this is done to both objects, the engine could then run collision detection on that object-pair again, and hopefully find they are "very, very close to collision... or just barely in collision", at which point physics could begin (and generate a response for the rest of the frame time).

Now, my first question is... is this even possible? Is there a way to take that "intermediate transformation matrix" and "multiply" it (in some sense) by a fraction - to generate a matrix that would have the same effect as reducing the contributions of all individual rotates and translates applied to that matrix by a specific fraction?

If so, what kind of matrix operation does this?

Perhaps viable alternatives to this approach exist, but I don't see them. The non-viable alternatives (due to much slower speed) that I can identify are:

#1: To queue up all the individual rotate and translate operations applied to every object in every frame, just in case a collision happens. Then I could apply fractional versions of these operations to an identity matrix to accumulate a new fractionalized version of the "intermediate transformation matrix", which I could then apply to the previous-frame "local-to-world transformation matrix" to generate a fractional-frame "local-to-world transformation matrix" to apply to the vertices, producing new vertex coordinates with which to re-perform collision detection.

#2: To queue up all the individual forces applied to the object during the previous frame, (which then led the physics subsystem to update linear and rotational velocities and accelerations, which then resulted in those rotation and translation operations mentioned in #1 above).

My problem with both of these approaches is... they add a LOT more processing to a process that already threatens to become a "frame buster" (extend frame processing beyond the next vertical sync, and thereby lose a frame and stutter). The engine has already done all that work, so the "fractionalize the intermediate transformation matrix" approach seems a lot more practical - assuming there is such an operation, which intuitively it certainly seems there should be.

Also note these unfortunate compounding facts. It may be necessary to iterate to converge upon a precise [enough] first-contact time and position, which means whatever operations must be performed here might need to be executed several times, including re-transformation of all vertices and running narrow-phase collision detection on this object-pair again. I'd like to believe we can compute the correct frame-fraction and find a precise first-contact time and position immediately, on the first try, without iteration. But with objects both rotating and translating, I am very dubious this is possible. And even if it is possible, when one or both objects are arbitrarily irregular "concave" objects my engine must execute its much slower "concave collision routine" instead of its blazing fast GJK "convex collision routine". I just can't see how there's enough time to allow any waste in this process, even if the whole shebang is offloaded to another CPU core. And if a number of object-pairs are in collision on a given frame... just how many CPU cores do we have to spare? Anyway, I think I've convinced myself a fast technique is crucial.

One [hopefully] important note: My transformation matrices only contain rotations and translations. My engine does support skew, shear, non-linear scale operations, but those operations are performed independently, without modifying the transformation matrix. This effectively separates "operations that modify the object" (skew/shear/scale) from "operations that do not modify the object, but just rotate and move it around" (rotate/translate). This also assures the transformation matrices are as clean and regular as possible, assures transformations of surface vectors (normals, tangents, bitangents) always produce the correct results, and eliminates any need to compute-and-carry-around or compute-on-demand any inverse matrices. And maybe, just maybe (or hopefully), the fact that my transformation matrices are pristine clean and regular means creating "fractionalized" versions is easy (and all I need is to know how). What say the math wizards in this forum?

I assume everyone who ever faces collision detection and response encounters this same problem. Out of curiosity, how is this process usually done?
bone
Posts: 231
Joined: Tue Feb 20, 2007 4:56 pm

Re: help: collision: finding "first-contact" time and positions

Post by bone »

bootstrap wrote: I assume everyone who ever faces collision detection and response encounters this same problem.
Yup.

First off: a fractional rotation using matrices would usually involve converting the 3x3 rotation matrix into a quaternion or an axis-and-angle pair, applying the fraction there, and then converting back to a matrix. I hope that helps; I didn't read the entire problem description carefully due to time.
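Roughly like the sketch below (not Bullet code, just the idea: pull the axis and angle out of the rotation part, scale the angle by the fraction, rebuild the matrix; the translation part just gets scaled directly). The near-zero and near-180-degree cases need extra care in real code.

#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };
struct Mat3 { double m[3][3]; };   // row-major 3x3 rotation matrix

// Build a rotation that is 'fraction' of the rotation represented by r.
Mat3 fractionalRotation(const Mat3& r, double fraction)
{
    // Angle from the trace: trace(R) = 1 + 2*cos(angle)
    double trace = r.m[0][0] + r.m[1][1] + r.m[2][2];
    double angle = std::acos(std::max(-1.0, std::min(1.0, (trace - 1.0) * 0.5)));

    if (angle < 1e-8) {                              // (nearly) no rotation this frame
        Mat3 identity = {{{1,0,0},{0,1,0},{0,0,1}}};
        return identity;
    }

    // Axis from the skew-symmetric part: R - R^T = 2*sin(angle)*[axis]x
    // (degenerate near angle == pi, which needs special handling in real code)
    double s = 2.0 * std::sin(angle);
    Vec3 axis = { (r.m[2][1] - r.m[1][2]) / s,
                  (r.m[0][2] - r.m[2][0]) / s,
                  (r.m[1][0] - r.m[0][1]) / s };

    // Rebuild a rotation of (fraction * angle) about the same axis (Rodrigues' formula).
    double a = angle * fraction;
    double c = std::cos(a), sn = std::sin(a), t = 1.0 - c;
    double x = axis.x, y = axis.y, z = axis.z;
    Mat3 out = {{
        { t*x*x + c,    t*x*y - sn*z, t*x*z + sn*y },
        { t*x*y + sn*z, t*y*y + c,    t*y*z - sn*x },
        { t*x*z - sn*y, t*y*z + sn*x, t*z*z + c    }
    }};
    return out;
}

// The translation part of the intermediate matrix just gets scaled.
Vec3 fractionalTranslation(const Vec3& t, double fraction)
{
    Vec3 out = { t.x * fraction, t.y * fraction, t.z * fraction };
    return out;
}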

There are at least a couple of approaches to handling the problem once the collision has happened. You might search for TOI (time of impact). An alternate approach is to give up, allow temporary penetration, and use penalty forces or similar to eventually separate the objects. In other words, don't kill yourself trying to break up a normal time step - in some situations it can actually become impossible, or nearly so, to solve the continuous case. Not to mention that it screws up your integration routine.
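If you go the penalty route, the per-contact force is basically just a spring (plus a little damping) along the contact normal. A minimal sketch, with made-up names and gains:

struct Vec3 { double x, y, z; };

// Penalty force pushing object B away from object A.
//   normal            - unit contact normal, pointing from A toward B
//   depth             - penetration depth (how far the objects overlap)
//   separationSpeed   - relative velocity of B along the normal
//                       (negative while the objects are still approaching)
//   stiffness/damping - hand-tuned gains, e.g. 1.0e4 and 1.0e2
Vec3 penaltyForce(const Vec3& normal, double depth, double separationSpeed,
                  double stiffness, double damping)
{
    double magnitude = stiffness * depth - damping * separationSpeed;
    if (magnitude < 0.0)
        magnitude = 0.0;                       // never pull the objects together
    Vec3 force = { normal.x * magnitude,       // apply to B; apply the negation to A
                   normal.y * magnitude,
                   normal.z * magnitude };
    return force;
}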
qneill
Posts: 1
Joined: Mon Aug 22, 2011 9:48 pm

Re: help: collision: finding "first-contact" time and positions

Post by qneill »

I started googling around and ended up back here:

http://www.bulletphysics.org/Bullet/php ... =&f=4&t=20
bootstrap
Posts: 12
Joined: Tue Aug 09, 2011 1:08 am

Re: help: collision: finding "first-contact" time and positions

Post by bootstrap »

Well, it may indeed work to always compute the "axis and angle" and then back that up. I'm not quite sure whether I can interpolate the translation in the same way though. I'll have to think about that.

However, what might be better is to back up just a bit further (in the engine)... back beyond the "math level" (the rotation and translation) and slightly into the "physics level". What I'm saying is this.

On each frame any number of influences can perturb an object - gravitational attraction, magnetic attraction-or-repulsion, static attraction, thrust, lift, drag, friction, wind speeds, collisions and so forth. Having finished the "collision-detection" part of the engine (except for finding exact first-contact time and positions), I am just starting to get ready to write the actual "physics engine" part.

Up until now, I've just been testing the engine by directly rotating and translating objects. Clearly that will change when I start adding subsystems for each of these influences. What it looks like now (perhaps I'm being a bit naive) is that each influence subsystem will impose two force vectors on the object - a linear force-vector and a rotational force-vector (a torque), both relative to world-coordinates. As far as I can tell, all these force-vectors sum nicely, cleanly and without undue complication. Thus, after the engine sums all the forces from all the known influences, it ends up with one linear force vector and one rotational force vector to apply for the next frame time.
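To make that concrete, here is how I picture the accumulation working (a sketch with made-up names; the one wrinkle is that a force applied off-center also contributes a torque):

struct Vec3 { double x, y, z; };

static Vec3 add(const Vec3& a, const Vec3& b)
{
    Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z };
    return r;
}

static Vec3 cross(const Vec3& a, const Vec3& b)
{
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

// Every influence (gravity, thrust, drag, wind, ...) just adds into these two
// accumulators (zeroed at the start of each frame), so the object ends the
// frame with one net force and one net torque.
struct ForceAccumulator {
    Vec3 force;    // sum of all linear forces, world coordinates
    Vec3 torque;   // sum of all torques about the center of mass

    // A force applied at a world-space point off the center of mass also
    // produces a torque: torque += (point - centerOfMass) x force
    void addForceAtPoint(const Vec3& f, const Vec3& point, const Vec3& centerOfMass)
    {
        Vec3 lever = { point.x - centerOfMass.x,
                       point.y - centerOfMass.y,
                       point.z - centerOfMass.z };
        force  = add(force, f);
        torque = add(torque, cross(lever, f));
    }
};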

Now, if the engine finds itself in collision, it should be able to back up to the previous frame, multiply those two force-vectors by any fraction we wish, then refer to the object shape, size, material, mass, center-of-mass and whatever else is relevant to compute one [arbitrary-axis] rotation and one translation operation to generate a new "intermediate transformation matrix". That matrix can then be multiplied by the previous-frame "local-to-world transformation matrix" to get a transformation-matrix that will transform all the [necessary] vertices to the intermediate-frame positions, where collision-detection and separation/penetration can be performed again.
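A sketch of what that re-integration might look like (made-up names, semi-implicit Euler, and a scalar moment of inertia just to keep it short; note I scale the timestep by the fraction rather than the force vectors themselves, which amounts to the same backing-up idea):

struct Vec3 { double x, y, z; };

static Vec3 madd(const Vec3& a, const Vec3& b, double s)   // a + b*s
{
    Vec3 r = { a.x + b.x*s, a.y + b.y*s, a.z + b.z*s };
    return r;
}

struct RigidState {
    Vec3   position;          // center of mass, world coordinates
    Vec3   linearVelocity;
    Vec3   angularVelocity;   // axis * (radians per second)
    double mass;
    double inertia;           // scalar stand-in; a real engine uses the inertia tensor
};

// Starting from the previous-frame state, integrate the summed force and
// torque over only a fraction of the frame (semi-implicit Euler).  The
// resulting deltaPosition and deltaRotation (an axis*angle vector) are what
// the fractional "intermediate transformation matrix" gets built from.
void integrateFraction(RigidState& s, const Vec3& force, const Vec3& torque,
                       double dt, double fraction,
                       Vec3& deltaPosition, Vec3& deltaRotation)
{
    double h = dt * fraction;
    s.linearVelocity  = madd(s.linearVelocity,  force,  h / s.mass);
    s.angularVelocity = madd(s.angularVelocity, torque, h / s.inertia);

    deltaPosition = { s.linearVelocity.x * h,
                      s.linearVelocity.y * h,
                      s.linearVelocity.z * h };
    deltaRotation = { s.angularVelocity.x * h,
                      s.angularVelocity.y * h,
                      s.angularVelocity.z * h };
    s.position = madd(s.position, s.linearVelocity, h);
}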

This is a bit of extra overhead compared to just fractionalizing the previous rotation and translation operations, and it may well have exactly the same result. I guess the key point is this: if the algorithm that applies the force-vectors to the previous-frame velocity-vectors to generate new velocity-vectors produces a rotation around one arbitrary axis (as presumably it will do naturally), then maybe just fractionalizing those previous rotation/translation operations is accurate and sufficient. However, if that physics process generates a conventional series of three ordered rotation operations, then maybe going back to the force information from the previous frame is easier.
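For what it's worth, my current understanding (possibly naive) is that integrating the angular velocity over a timestep does naturally produce a single arbitrary-axis rotation, which is exactly the easy-to-fractionalize form:

#include <cmath>

struct Vec3 { double x, y, z; };

struct AxisAngle {
    Vec3   axis;    // unit length
    double angle;   // radians
};

// One timestep of angular velocity collapses to one rotation about one
// arbitrary axis: axis = omega / |omega|, angle = |omega| * dt.  A fractional
// frame is then just (angle * fraction) about the same axis.
AxisAngle rotationOverTimestep(const Vec3& omega, double dt)
{
    double speed = std::sqrt(omega.x*omega.x + omega.y*omega.y + omega.z*omega.z);
    AxisAngle result;
    if (speed < 1e-12) {                         // not rotating this frame
        result.axis  = Vec3{ 1.0, 0.0, 0.0 };    // arbitrary axis, zero angle
        result.angle = 0.0;
    } else {
        result.axis  = Vec3{ omega.x / speed, omega.y / speed, omega.z / speed };
        result.angle = speed * dt;
    }
    return result;
}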

Or so it seems at the moment. The devil is likely in the physics details. Or is it "fizzix details"?