New paper about impulse-based dynamic simulation

Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany
Contact:

New paper about impulse-based dynamic simulation

Post by Jan Bender »

Hi,

I just published my new paper with the title

"Fast Dynamic Simulation of Multi-Body Systems Using Impulses"

on my homepage

http://www.impulse-based.de

The paper shows how all kinds of joint constraints are realized in an impulse-based simulation. Furthermore, a new method is presented that computes all required impulses by solving a system of linear equations. This method is very fast: we compared it with the method of David Baraff presented in "Linear-Time Dynamics using Lagrange Multipliers", and the simulation of a tree with 255 joints was more than 5 times faster with my new method.

So maybe this is interesting for you.

Jan
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA
Contact:

Re: New paper about impulse-based dynamic simulation

Post by Erwin Coumans »

Hi Jan,

The picture on your website explains a lot.

Bullet also has several impulse-based constraints, including a Generic D6 Constraint that allows the user to specify the state of each axis independently: either totally Locked, totally Free, or Limited. This allows all kinds of constraints in one class. I am trying to understand the important differences from your approach (and whether it's worth adding them).
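
To give an idea, here is a minimal illustrative sketch of how the per-axis states can be configured. This is not taken from the Bullet samples, and setter names may differ between versions; it assumes the usual convention that lower == upper locks an axis, lower > upper frees it, and lower < upper limits it to [lower, upper].

Code:

#include "btBulletDynamicsCommon.h"

// Turn a generic 6-DOF constraint into a limited hinge:
// all linear axes locked, angular X/Y locked, angular Z limited.
void configureAsLimitedHinge(btGeneric6DofConstraint* dof6)
{
	// lock all three linear axes (lower == upper == 0)
	dof6->setLinearLowerLimit(btVector3(0, 0, 0));
	dof6->setLinearUpperLimit(btVector3(0, 0, 0));

	// lock angular X and Y, limit angular Z to +/- 45 degrees
	dof6->setAngularLowerLimit(btVector3(0, 0, -SIMD_PI * 0.25f));
	dof6->setAngularUpperLimit(btVector3(0, 0,  SIMD_PI * 0.25f));
}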

Did you compare the practical results of your 'look-ahead and integrate' with just doing all calculations based on current positions (without look-ahead and without step 2)? Does it converge much faster? Is there less penetration/joint (positional) error?

Thanks a lot,
Erwin


Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany
Contact:

Post by Jan Bender »

Hi Erwin,
Bullet also has several impulse-based constraints, including a Generic D6 Constraint that allows the user to specify the state of each axis independently: either totally Locked, totally Free, or Limited. This allows all kinds of constraints in one class. I am trying to understand the important differences from your approach (and whether it's worth adding them).
I should take a look at this class. How did you implement the joint limits?

Did you compare the practical results of your 'look-ahead and integrate' with just doing all calculations based on current positions (without look-ahead and without step 2)? Does it converge much faster? Is there less penetration/joint (positional) error?
If I understand you correctly, the error is only corrected after it has occurred, by iterating over time. So if a constraint is not satisfied, an impulse is applied and then the time step is made. This way it cannot be guaranteed that all constraints are satisfied, but it could speed up the simulation a lot. I will try this and tell you the result.

Jan
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA
Contact:

Post by Erwin Coumans »

Jan Bender wrote: I should take a look at this class. How did you implement the joint limits?
Joint limits (linear+angular) on the Generic 6 DOF constraint are handled exactly the same as contact constraints in Bullet. If an axis has a limited range of motion it becomes a unilateral constraint (positional + velocity).
Note that a fully locked axis (upper limit == lower limit) becomes a bilateral constraint. Please check the code snippet at the end of this posting; it shows the linear limit solving of one axis, taken from Bullet/src/BulletDynamics/btGeneric6DofConstraint.cpp.
If I understand you correctly, the error is only corrected after it has occurred, by iterating over time. So if a constraint is not satisfied, an impulse is applied and then the time step is made. This way it cannot be guaranteed that all constraints are satisfied, but it could speed up the simulation a lot. I will try this and tell you the result.
In my experience with several physics engines, it is not easy to say that an approach can guarantee constraint satisfaction by taking just one concept into account; there are too many details that all interact and matter.

If I understand correctly, your approach is still discrete, so it suffers from tunneling: for fast-moving objects, the 'predicted' position in the next timestep can totally fail to measure any error/overlap?
This is the discrete mode of Bullet, which indeed uses the very popular 'retroactive determination' of correcting the penetration/constraint drift error afterwards. I recognize there is a difference in your approach, because you seem to try to prevent the error by predicting a future error. However, this prediction itself might be inaccurate due to tunneling/discretization.

However, Bullet also has continuous collision detection, which allows time-of-impact calculation, so you can step up to that time and avoid generating the penetration error in the first place. This solves the actual problem of tunneling/missed collisions for fast-moving objects. The same idea can be applied to constraints, but this isn't implemented in Bullet yet (temporal constraints).
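
To sketch the idea (simplified; this is not Bullet's actual CCD code): for a sphere moving linearly against a static plane, the time of impact follows from one dot product, and the step can be clamped to it so the penetration never happens.

Code:

// Sphere: center c, radius r, constant velocity v over a step of length h.
// Plane: dot(n, x) = d with unit normal n. Returns true and the impact time
// in toi if the sphere touches the plane within the step.
struct Vec3 { double x, y, z; };
static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

bool sphereVsPlaneTOI(const Vec3& n, double d,
                      const Vec3& c, double r, const Vec3& v,
                      double h, double& toi)
{
	double gap   = dot(n, c) - d - r;  // signed distance to contact at t = 0
	double speed = dot(n, v);          // closing speed along the normal
	if (speed >= 0.0)                  // separating or sliding: no impact
		return false;
	toi = -gap / speed;                // solve gap + t * speed = 0
	return toi >= 0.0 && toi <= h;     // step up to toi instead of penetrating
}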

Have you considered predicting and stepping up to the actual time of the discontinuity? For collisions this discontinuity would be the time of impact, and for other joints it would be the time when the constraint hits the limit.

Thanks,
Erwin

Code:

//velocity error (first order error)
btScalar rel_vel = m_jacLinear[i].getRelativeVelocity(m_rbA.getLinearVelocity(), angvelA,
                                                      m_rbB.getLinearVelocity(), angvelB);

//positional error (zeroth order error) along this axis
btScalar depth = -(pivotAInW - pivotBInW).dot(normalWorld);
btScalar lo = -1e30f;
btScalar hi = 1e30f;

//handle the limits: a limited axis becomes a unilateral constraint, only
//pushing back from the violated side; inside the limits there is nothing
//to solve. A locked axis (lower == upper) skips this block and stays a
//bilateral constraint.
if (m_lowerLimit[i] < m_upperLimit[i])
{
	if (depth > m_upperLimit[i])
	{
		depth -= m_upperLimit[i];
		lo = 0.f;
	}
	else if (depth < m_lowerLimit[i])
	{
		depth -= m_lowerLimit[i];
		hi = 0.f;
	}
	else
	{
		continue;
	}
}

//impulse that removes a fraction (tau) of the positional error per step
//and damps the relative velocity
btScalar normalImpulse = (tau * depth / timeStep - damping * rel_vel) * jacDiagABInv;

//accumulate the impulse, resetting the total to zero when it leaves
//[lo, hi], and apply only the difference
btScalar oldNormalImpulse = m_accumulatedImpulse[i];
btScalar sum = oldNormalImpulse + normalImpulse;
m_accumulatedImpulse[i] = sum > hi ? 0.f : sum < lo ? 0.f : sum;
normalImpulse = m_accumulatedImpulse[i] - oldNormalImpulse;

btVector3 impulse_vector = normalWorld * normalImpulse;
m_rbA.applyImpulse( impulse_vector, rel_pos1);
m_rbB.applyImpulse(-impulse_vector, rel_pos2);
Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany
Contact:

Post by Jan Bender »

Hi,
Joint limits (linear+angular) on the Generic 6 DOF constraint are handled exactly the same as contact constraints in Bullet. If an axis has a limited range of motion it becomes a unilateral constraint (positional + velocity).
I plan to handle joint limits in the same way in my method. Thanks for the code snippet. I think I should take a closer look at the whole class to understand the meaning of all the parameters.
In my experience with several physics engines, it is not easy to say that an approach can guarantee constraint satisfaction by taking just one concept into account; there are too many details that all interact and matter.
This is one advantage of my method: since you correct the future error, you can guarantee that after the next time step the constraint is satisfied within a given tolerance.
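
To make this concrete, here is a hypothetical sketch (not my actual implementation) of the look-ahead correction for the simplest case: a ball joint between two point masses. For rigid bodies the scalar k becomes a 3x3 matrix built from the masses and inertia tensors, as described in the paper.

Code:

#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 mul(const Vec3& a, double s)      { return {a.x*s, a.y*s, a.z*s}; }
static double len(const Vec3& a) { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

struct PointMass { Vec3 pos, vel; double mass; };

// Ensure that after the next step of size h the two anchor points coincide
// within tol: compute the *predicted* joint error and cancel it with an
// impulse. A single joint converges in one pass; the loop matters when
// several coupled joints are corrected in turn.
void satisfyBallJoint(PointMass& a, PointMass& b, double h, double tol)
{
	for (int iter = 0; iter < 50; ++iter)
	{
		Vec3 pa = add(a.pos, mul(a.vel, h));  // predicted positions without the joint
		Vec3 pb = add(b.pos, mul(b.vel, h));
		Vec3 d  = sub(pa, pb);                // predicted error after the step
		if (len(d) < tol)
			return;                           // constraint will hold within tolerance
		double k = 1.0 / a.mass + 1.0 / b.mass;
		Vec3 p = mul(d, -1.0 / (h * k));      // impulse that cancels the error
		a.vel = add(a.vel, mul(p,  1.0 / a.mass));
		b.vel = sub(b.vel, mul(p,  1.0 / b.mass));
	}
}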
If I understand correctly, your approach is still discrete, so it suffers from tunneling: for fast-moving objects, the 'predicted' position in the next timestep can totally fail to measure any error/overlap?
Since I still use discrete collision detection, I have the tunneling problem. But this only happens for collisions; joint constraints do not fail. You can even take a model where all joint constraints are totally violated, and the simulation method repairs all of them. You just have to take care that the impulses don't get too large, otherwise you get numerical problems.

Because of this property of the method, the iteration can be aborted before the computation is finished. The joints will not drift apart even if only a small number of iterations is used. I tried this out using a tree with 255 joints (one level more than this one: http://www.impulse-based.de/videos/tree127.avi) and used just 10 iterations. The simulation was 5 times faster than real-time and there is no visible drift.
This is the discrete mode of Bullet, which indeed uses the very popular 'retroactive determination' of correcting the penetration/constraint drift error afterwards. I recognize there is a difference in your approach, because you seem to try to prevent the error by predicting a future error. However, this prediction itself might be inaccurate due to tunneling/discretization.
The prediction is just the motion the bodies would have without any constraint. This leads to a correct solution, as my professor has proven in http://i31www.ira.uka.de/docs/InternalReport2005.17.pdf. Only the tunneling effect of fast bodies is a problem. Anyway, I wanted to try the collision detection of Bullet, so the tunneling problem will probably soon be history :wink:

Have you considered predicting and stepping up to the actual time of the discontinuity? For collisions this discontinuity would be the time of impact, and for other joints it would be the time when the constraint hits the limit.
This only makes sense for collisions. The actual time of discontinuity of the joints is always "now", since the joints immediately start to drift apart. But this is no problem; the method corrects it accurately.

Jan
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA
Contact:

Post by Erwin Coumans »

Jan Bender wrote:
Have you considered predicting and stepping up to the actual time of the discontinuity? For collisions this discontinuity would be the time of impact, and for other joints it would be the time when the constraint hits the limit.
This only makes sense for collisions. The actual time of discontinuity of the joints is always "now", since the joints immediately start to drift apart. But this is no problem; the method corrects it accurately.
I think it makes sense for joints too, once their bodies are involved in high-impact collisions. For example: if a high-speed car hits a ragdoll in the middle of the timestep, you need to step up to that time of impact and solve the collision, but also _all_ the constraints for that ragdoll; otherwise the ragdoll's limbs will be completely violated. That is one of the reasons why we used 'Featherstone constraints' (reduced coordinates) for ragdolls in some of my previous games.

It also makes sense for joint limits: when you introduce joint limits, the discontinuities can happen in the middle of the timestep. And just like collision detection, I think you have to step the constraint up to the time that it hits the limit, because if the bodies of that constraint are involved in other constraints (either joints or collisions) it becomes a global problem, not just a local one.

I can add a picture, or is my attempt to explain clear?
Thanks,
Erwin
Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany
Contact:

Post by Jan Bender »

I think it makes sense for joints too, once their bodies are involved in high-impact collisions. For example: if a high-speed car hits a ragdoll in the middle of the timestep, you need to step up to that time of impact and solve the collision, but also _all_ the constraints for that ragdoll; otherwise the ragdoll's limbs will be completely violated. That is one of the reasons why we used 'Featherstone constraints' (reduced coordinates) for ragdolls in some of my previous games.
Okay, in this case it is clear. I always correct the constraints when I step up to the time of impact. I meant that it makes no sense if there is no collision or joint limit (which is a kind of collision).
I can add a picture, or is my attempt to explain clear?
No, it is totally clear, and I agree with you. In the case of collisions I do the same. I haven't implemented joint limits yet; at the moment, the only way to simulate them with my method is to define additional collision objects. So even with these joint limits, the constraints are corrected at the time of the discontinuity.

Jan
Antonio Martini
Posts: 126
Joined: Wed Jul 27, 2005 10:28 am
Location: SCEE London

Re: New paper about impulse-based dynamic simulation

Post by Antonio Martini »

Jan Bender wrote:
Furthermore, a new method is presented that computes all required impulses by solving a system of linear equations. This method is very fast: we compared it with the method of David Baraff presented in "Linear-Time Dynamics using Lagrange Multipliers", and the simulation of a tree with 255 joints was more than 5 times faster with my new method.
Both Baraff's method and the Featherstone algorithm have O(n) complexity for equality constraints. So you solve a system of linear equations in less than O(n), and this by doing an LU decomposition? Am I missing something? What is the computational complexity of your method, and what are the storage requirements?

cheers,
Antonio
Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany
Contact:

Post by Jan Bender »

Both Baraff's method and the Featherstone algorithm have O(n) complexity for equality constraints. So you solve a system of linear equations in less than O(n), and this by doing an LU decomposition? Am I missing something? What is the computational complexity of your method, and what are the storage requirements?
I use PARDISO to solve the system of linear equations. I can't tell you which complexity it has. It is optimized for sparse matrices, but I don't think it has linear complexity in the worst case; Baraff can guarantee this. But the complexity you get in practice for a specific model is something different.

For the simulation I used the model proposed by Baraff himself. If you simulate a specific model, it is not guaranteed that an O(n) algorithm is faster than an algorithm with a higher complexity; only as n increases will there be a point where the O(n) algorithm gets faster. Baraff has to solve a bigger matrix than I do, and he has to solve a system of differential equations with continuous forces. That's why his method is slower than mine when simulating the tree with 255 joints.

Jan
Antonio Martini
Posts: 126
Joined: Wed Jul 27, 2005 10:28 am
Location: SCEE London

Post by Antonio Martini »

Jan Bender wrote:
Both Baraff's method and the Featherstone algorithm have O(n) complexity for equality constraints. So you solve a system of linear equations in less than O(n), and this by doing an LU decomposition? Am I missing something? What is the computational complexity of your method, and what are the storage requirements?
I use PARDISO to solve the system of linear equations. I can't tell you which complexity it has. It is optimized for sparse matrices, but I don't think it has linear complexity in the worst case; Baraff can guarantee this. But the complexity you get in practice for a specific model is something different.

For the simulation I used the model proposed by Baraff himself. If you simulate a specific model, it is not guaranteed that an O(n) algorithm is faster than an algorithm with a higher complexity; only as n increases will there be a point where the O(n) algorithm gets faster. Baraff has to solve a bigger matrix than I do, and he has to solve a system of differential equations with continuous forces. That's why his method is slower than mine when simulating the tree with 255 joints.
Jan
I don't know the details of Baraff's method; however, Featherstone gets faster than any method when n > 7, if I'm not wrong, where n is the number of DOFs. Of course, Featherstone covers only branched chains, but that's what you used for your tests, isn't it?
It can also be extended in order to deal with contacts:
http://research.scea.com/research/pdfs/ ... DC2004.pdf

A method with a worse complexity can be faster simply because it has been implemented better. So you can take two algorithms, implement the better one in a very bad way, and show that the worse one is "much better". It depends on how much you are willing to optimize in the end; just replace PARDISO with something else and the total speed will change drastically.
I think that before stating that your algorithm is better, an analysis of the computational complexity, the storage requirements, and in general an analysis of why and when it is faster is the minimum that is required. Otherwise, in the best case, it would be reaching the right conclusions from the wrong premises.

I read your paper very quickly, so I apologise in advance if I have missed something.

cheers,
Antonio
Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany
Contact:

Post by Jan Bender »

I don't know the details of Baraff's method; however, Featherstone gets faster than any method when n > 7, if I'm not wrong, where n is the number of DOFs. Of course, Featherstone covers only branched chains, but that's what you used for your tests, isn't it?
Faster than ANY method? I would like to see a proof for that.
A method with a worse complexity can be faster simply because it has been implemented better. So you can take two algorithms, implement the better one in a very bad way, and show that the worse one is "much better". It depends on how much you are willing to optimize in the end; just replace PARDISO with something else and the total speed will change drastically.
That is clear. Since I have a very similar matrix structure to Baraff's, I am sure that I can solve my system of linear equations in the same way he does. In that case my algorithm would have linear complexity. But this will probably be the topic of my next paper :wink:
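
For intuition, an illustrative scalar sketch (not my solver): for a chain of joints the system matrix is block-tridiagonal, and such a system is solved in O(n) with one elimination sweep and one back-substitution sweep. The joint case works the same way with small blocks instead of scalars.

Code:

#include <vector>

// Solve sub[i-1]*x[i-1] + diag[i]*x[i] + sup[i]*x[i+1] = d[i] in O(n)
// (Thomas algorithm). sub and sup have size n-1, diag and d have size n.
// Overwrites d with the solution x.
void solveTridiagonal(std::vector<double> sub,
                      std::vector<double> diag,
                      std::vector<double> sup,
                      std::vector<double>& d)
{
	const size_t n = diag.size();
	// forward elimination: one O(1) step per row
	for (size_t i = 1; i < n; ++i)
	{
		double m = sub[i-1] / diag[i-1];
		diag[i] -= m * sup[i-1];
		d[i]    -= m * d[i-1];
	}
	// back substitution
	d[n-1] /= diag[n-1];
	for (size_t i = n - 1; i-- > 0; )
		d[i] = (d[i] - sup[i] * d[i+1]) / diag[i];
}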

I think that before stating that your algorithm is better, an analysis of the computational complexity, the storage requirements, and in general an analysis of why and when it is faster is the minimum that is required. Otherwise, in the best case, it would be reaching the right conclusions from the wrong premises.
I never said it is better; it was just faster for the tree model. Which method is best always depends on the application.

On the other hand, you can't say that a method with linear complexity is the fastest for every model. A complexity of O(n) means roughly k*n computations, where k is a constant. So if k is very big, the method can be very slow for small models: for example, a method that needs 1000*n operations is slower than one that needs n^2 operations for every n below 1000. But what counts as "small" depends on the method.

Jan
Antonio Martini
Posts: 126
Joined: Wed Jul 27, 2005 10:28 am
Location: SCEE London

Post by Antonio Martini »

Jan Bender wrote:
I don't know the details of Baraff's method; however, Featherstone gets faster than any method when n > 7, if I'm not wrong, where n is the number of DOFs. Of course, Featherstone covers only branched chains, but that's what you used for your tests, isn't it?
Faster than ANY method? I would like to see a proof for that.
I think it was n = 9. In any case, detailed analyses of the complexity of the Featherstone algorithm and other algorithms can be found in:

Robot Dynamics Algorithms
Roy Featherstone
Kluwer Academic Publishers

Efficient Dynamic Simulation of Robotic Mechanisms
Kathryn W. Lilly
Kluwer Academic Publishers

Of course, we are in the context of "exact" solvers.

cheers,
Antonio
Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany
Contact:

Post by Jan Bender »

I think it was n = 9. In any case, detailed analyses of the complexity of the Featherstone algorithm and other algorithms can be found in:

Robot Dynamics Algorithms
Roy Featherstone
Kluwer Academic Publishers
I know this one, but I don't think there is a proof that considers all existing methods.

Anyway, I am sure that the Featherstone algorithm is one of the best and fastest methods. But it also has some disadvantages, just like every other method.
Of course, we are in the context of "exact" solvers.
My method also converges to the exact solution. There is a proof of this in "On the Convergence and Correctness of Impulse-Based Dynamic Simulation".

Jan
Antonio Martini
Posts: 126
Joined: Wed Jul 27, 2005 10:28 am
Location: SCEE London

Post by Antonio Martini »

Jan Bender wrote:
I think it was n = 9. In any case, detailed analyses of the complexity of the Featherstone algorithm and other algorithms can be found in:

Robot Dynamics Algorithms
Roy Featherstone
Kluwer Academic Publishers
I know this one, but I don't think there is a proof that considers all existing methods.
Algorithms typically used in robotics and considered very efficient by the experts in the relevant field are covered, so the algorithms published at the time of writing were considered.
So at least there is some reasoning behind the claims. I'm not trying to state which algorithm is the best; I'm just stating that in order to show that a solver is more efficient than another, a proper analysis must be conducted, and those books are a good example of how such an analysis may be conducted. It is not sufficient to take two particular implementations and compare them like you did. If you compare your algorithm to Baraff's, I presume that you assumed a similar context of application and so assumed that they are comparable. Also, stating that a method is very simple to implement when it requires a sparse LU decomposition package seems a contradiction to me. If PARDISO is used, we should at least know how it works, as that may well be the reason why it is faster in that case.
I have the utmost respect for your work, and I will read it in more detail when I have more time available; for now I just don't feel very motivated to do so, for the reasons I have given.

cheers,
Antonio
Last edited by Antonio Martini on Thu Nov 16, 2006 10:59 am, edited 1 time in total.
Antonio Martini
Posts: 126
Joined: Wed Jul 27, 2005 10:28 am
Location: SCEE London

Post by Antonio Martini »

Jan Bender wrote:
I don't know the details of Baraff's method; however, Featherstone gets faster than any method when n > 7, if I'm not wrong, where n is the number of DOFs. Of course, Featherstone covers only branched chains, but that's what you used for your tests, isn't it?
On the other hand, you can't say that a method with linear complexity is the fastest for every model. A complexity of O(n) means roughly k*n computations, where k is a constant. So if k is very big, the method can be very slow for small models: for example, a method that needs 1000*n operations is slower than one that needs n^2 operations for every n below 1000. But what counts as "small" depends on the method.
Jan
Yes; however, if I see a sparse LU decomposition, I think that the solver is designed for large n.

cheers,
Antonio