Erwin Coumans wrote:
Not true in my opinion. The TOI can still be correctly calculated. However, more internal iterations are needed, so the calculation is more expensive.
Sure, you can take more internal iterations. However, you need to control how many iterations you are willing to take. You have the same problems and tradeoffs as in every iterative root-search algorithm.
So, yes, you can get an accurate TOI within some prescribed epsilon, but can you guarantee to always deliver the result within, say, 5 iterations?
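To make the tradeoff concrete, here is a minimal 1-D sketch of conservative advancement with a hard iteration budget. Everything in it (the function name, the point-vs-wall setup, the loose velocity bound `v_max`) is my own illustrative assumption, not anyone's engine code. The point is that with a conservative velocity bound the separation only shrinks by a constant factor per step, so a tight epsilon can easily exceed a small iteration budget:

```python
def conservative_toi(x0, v, v_max, wall=1.0, eps=1e-4, max_iters=5):
    """Conservatively advance a point at x0 moving with speed v toward a
    wall, using the (possibly loose) speed bound v_max >= v.  Each step
    advances time by separation / v_max, which can never tunnel past the
    wall.  Returns (time, iterations used, converged within eps)."""
    t, x = 0.0, x0
    iters = 0
    while iters < max_iters:
        d = wall - x          # current separation distance
        if d <= eps:
            return t, iters, True
        t += d / v_max        # safe advance: cannot overshoot the wall
        x = x0 + v * t
        iters += 1
    return t, iters, (wall - x) <= eps

# With v_max twice the true speed, separation halves per iteration:
# a budget of 5 iterations is not enough for eps = 1e-4, while about
# 14 iterations are.
```

With an exact bound (`v_max == v`) the loop converges immediately; the looser the bound, the more iterations the same epsilon costs, which is exactly the "accuracy within a prescribed epsilon vs. guaranteed iteration count" tension described above.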
Another problem with taking internal iterations is that this may become a source of parameter tuning. I am a strong believer in trying to create out-of-the-box simulators, in the sense that they work and deliver high performance no matter what you throw in their face. As an example, many physics game-engines are known for requiring people to manually tweak mass ratios and external forces in order not to cause blow-up, and do I need to say anything about penalty methods? :-)
Erwin Coumans wrote:
Don't agree. Basically the error of the solver has to be within the envelope/tolerance. A pivoting solver should bound the error properly. If your solver is an iterative solver, all you need to do for a higher stack is increase the number of iterations, or increase the quality another way (preconditioning). So it just becomes a bit slower, but hey, you wanted a higher stack. Just add a multiprocessor machine to the task. Another solution, which DOOM3 uses, is to never integrate the position into penetration. This leads to visible delays, but doesn't cause a halt.
It seems to me that you are assuming that we are talking about constraint-based simulation. I am not; I was talking generally about how TOIs behave.
In a high stack, boxes at the bottom tend to be squeezed more closely together (smaller separation) than those at the top. This means separation distances go to zero while the frequency of the impulse trains goes up. Of course, in the limit such contacts are "resting" (static) contacts, and it makes sense to deal with these using constraint-based simulation.
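The classic textbook illustration of this impulse-train behaviour is a ball bouncing on the floor with restitution e < 1 (my example, chosen for simplicity): the flight time between impacts shrinks geometrically, so the impact frequency blows up while the total elapsed time stays bounded, and a TOI-driven stepper would grind to a halt unless it hands the contact over to a resting-contact constraint:

```python
def impact_times(v0, e=0.5, g=9.81, n=20):
    """Times of successive floor impacts for a ball rebounding with
    speed v0, restitution e < 1, gravity g.  Gaps between impacts form
    a geometric series, so the times converge to a finite Zeno point."""
    times, t, v = [], 0.0, v0
    for _ in range(n):
        dt = 2.0 * v / g    # flight time of one parabolic arc
        t += dt
        times.append(t)
        v *= e              # speed lost at impact (restitution)
    return times

# The accumulation (Zeno) time is 2*v0 / (g*(1 - e)); every impact
# happens before it, yet there are infinitely many impacts.
```

Each gap is exactly e times the previous one, which is why a pure TOI scheme generates ever more events per unit of simulated time as the contact settles.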
So, I guess the question here is how to combine a constraint-based simulation with a conservative advancement scheme based on TOIs. I assume that you would ignore TOIs from static contacts and only consider dynamic (colliding) contacts.
The problem with this, as I see it, is how do you determine which contacts are static? Of course you can exploit the results from your constraint solver to do this, but please elaborate on this :-)
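For what it's worth, one common heuristic (my assumption, not a claim about any particular engine) is to classify a contact by the relative velocity along the contact normal, with a threshold tied to the speed gravity can build up in one time step:

```python
def classify_contact(v_rel_normal, g=9.81, dt=1.0 / 60.0, factor=2.0):
    """Classify a contact from the relative velocity along its normal
    (negative = approaching).  The threshold is a few multiples of the
    speed gained free-falling for one time step; 'factor' is a tunable
    fudge factor, which is exactly the kind of parameter the text above
    worries about."""
    threshold = factor * g * dt
    if v_rel_normal < -threshold:
        return "colliding"    # fast approach: handle with TOI / impulse
    elif v_rel_normal > threshold:
        return "separating"   # drifting apart: no event needed
    else:
        return "resting"      # hand over to the constraint solver
```

Note that this sidesteps rather than answers the question: the threshold is itself a tuning parameter, so a more principled answer would indeed have to come out of the constraint solver's own results.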
Regarding your comments on performance and quality: iterative methods based on matrix splitting all have linear convergence. This sucks big time; moreover, you cannot predict the actual convergence speed, since it varies depending on your configuration. Besides, finding a preconditioner for a matrix-splitting scheme is difficult (I have tried for some time :-), especially since methods such as Gauss-Seidel and Jacobi are themselves preconditioners (look up residual error correction methods). To summarize, using something like 10e6 iterations will not be good enough, and no preconditioner really exists.
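The linear convergence complaint is easy to see on a toy example. Below is a bare-bones Gauss-Seidel sweep on a small diagonally dominant system (my own illustration, not an LCP solver): the error shrinks by a roughly constant factor per sweep, and that factor depends entirely on the matrix, so the iteration count needed for a given accuracy cannot be predicted up front:

```python
def gauss_seidel(A, b, x, sweeps):
    """Plain Gauss-Seidel: sweep through the unknowns, updating each
    one in place from the latest values of the others."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# A small symmetric, diagonally dominant test system.
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
# Exact solution (solved by hand for reference):
exact = [2.5 / 14.0, 4.0 / 14.0, 0.5 + 2.5 / 14.0]
```

For this matrix the error shrinks by roughly a factor of 8 per sweep; change the matrix (e.g. weaken the diagonal dominance, as happens in a tall stack) and that factor degrades, which is precisely the unpredictability complained about above.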
Direct methods (i.e. pivoting methods) tend to have quadratic convergence, and as such they are very attractive. However, the cost of each iteration is often similar to that of solving a linear system.
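For contrast with the linear convergence above, here is a generic illustration of quadratic convergence, using Newton's iteration on x^2 = 2. This is only an analogy I am supplying for the convergence-rate comparison, not a pivoting LCP solver: with quadratic convergence the number of correct digits roughly doubles per step, instead of gaining a fixed factor per step:

```python
import math

def newton_sqrt2(x0, steps):
    """Newton's iteration for f(x) = x^2 - 2; returns the error after
    each step.  Errors satisfy err_{k+1} ~ C * err_k^2 (quadratic)."""
    errs, x = [], x0
    for _ in range(steps):
        x = 0.5 * (x + 2.0 / x)   # Newton step: x - f(x)/f'(x)
        errs.append(abs(x - math.sqrt(2.0)))
    return errs
```

Four steps from x0 = 2 already reach roughly machine precision, whereas a linearly convergent scheme with the same per-step cost would need dozens of iterations; the catch, as noted above, is that for pivoting methods each "step" is itself about as expensive as a linear solve.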
Erwin Coumans wrote:
This can be solved: Time warp splits the groups, so it is not a global dependency problem. Then again, for the small time steps, there is the envelope/tolerance which can be tweaked.
Yeah, you can break dependencies between independent groups. And yes, time-warp seems to be a very good idea for conservative advancement. However, I want to do simulations with 10,000 objects all in mutually dependent contact. In such a context, time-warp will just give you a big book-keeping overhead.
Conservative advancement based on TOIs combined with constraint-based simulation may solve the problems Mirtich had with impulse-based simulation in this context. However, I would like some details on such a scheme :-)
I have a hard time seeing how accurate TOIs can be computed while at the same time keeping a bound on performance. If an epsilon solution is used, I have some difficulty seeing how this would not end up in endless parameter tuning.