Interesting, but a bit outdated, Physics Engine comparison paper

Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Interesting, but a bit outdated, Physics Engine comparison paper

Post by Erwin Coumans »

http://www.cs.umu.se/education/examina/ ... gRolin.pdf

Most of the information about the Bullet engine is outdated; over the last year many features and improvements have been made, thanks to all contributors.

Bullet Physics has:
- Multiplatform support: Win32, Mac OS X, Linux, PlayStation 3, Irix, FreeBSD, etc.
- Support for Compound objects
- Hinge, ball socket, generic D6 constraint with limits to simulate slider, fixed joint, ragdoll constraint etc.
- Vehicles
- some GPU physics experiments
- improved EPA penetration depth implementation
- No dependency on any ODE part, but optional comparison integration with quickstep & SOLID EPA/GJK
- Collision shapes include capsule, cylinder, convex mesh & static and moving (GIMPACT) concave mesh, box, sphere and custom
- COLLADA Physics support/import
- Documentation in Bullet_User_Manual.pdf, although much more is needed
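
As a rough illustration of the compound shape and constraint support, here is a minimal sketch using today's class names (btCompoundShape, btHingeConstraint); the dynamics world and the two rigid bodies are assumed to be created elsewhere:

```cpp
#include <btBulletDynamicsCommon.h>

// Sketch only: give bodyA a compound shape and link it to bodyB with a
// limited hinge. World/broadphase/dispatcher/solver setup is omitted.
void addCompoundAndHinge(btDiscreteDynamicsWorld* world,
                         btRigidBody* bodyA, btRigidBody* bodyB)
{
    // A compound shape: one box child and one sphere child with local offsets.
    btCompoundShape* compound = new btCompoundShape();
    btTransform localTrans;
    localTrans.setIdentity();
    localTrans.setOrigin(btVector3(0, 1, 0));
    compound->addChildShape(localTrans, new btBoxShape(btVector3(0.5f, 0.5f, 0.5f)));
    localTrans.setOrigin(btVector3(0, -1, 0));
    compound->addChildShape(localTrans, new btSphereShape(0.5f));
    bodyA->setCollisionShape(compound);

    // Hinge between the two bodies; pivots and axes are in each body's local frame.
    btHingeConstraint* hinge = new btHingeConstraint(
        *bodyA, *bodyB,
        btVector3(0, -1, 0), btVector3(0, 1, 0),   // pivot in A, pivot in B
        btVector3(0, 0, 1),  btVector3(0, 0, 1));  // hinge axis in A and B
    hinge->setLimit(-SIMD_HALF_PI, SIMD_HALF_PI);  // optional angular limits
    world->addConstraint(hinge, true);             // true: no collision between linked bodies
}
```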

Planned for the future:
- improved generic 6DOF joint with motors
- better documentation
- COLLADA Physics snapshot export/import at any stage during simulation (debugging)
- cloth demo
- improved performance
- much more :)
Last edited by Erwin Coumans on Tue Dec 05, 2006 4:11 pm, edited 1 time in total.
KenB
Posts: 49
Joined: Sun Dec 03, 2006 12:40 am

Post by KenB »

Yes, it is important to note that much of the work was done in 2005 and much has happened since.

The important result of this thesis is not the ranking, which was made for a specific purpose, namely selecting a physics engine to integrate with Virtools for use in VR/game projects at the Interactive Institute.

Instead, it is really the methodology that is important, with simulation experiments designed to analyze the physical validity of the engine, rather than subjective "plausibility" (hate that word...).

The results of the thesis are just a first step though, and the plan is to design a set of such examples in e.g. Collada format and to specify exactly what to measure, so that a real comparison can be made between engines and, more importantly, between different flavours of solvers, friction models, etc.
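
To make that concrete, the kind of per-step measurement such a test battery would log looks roughly like this (a generic sketch, not code from the thesis; the math types and accessors stand in for whatever engine is being tested):

```cpp
#include <cstdio>

// Generic per-step validity metrics for a single rigid body: kinetic energy
// and angular momentum. 'Vec3' is a stand-in for the engine's own math type.
struct Vec3 { double x, y, z; };

static Vec3 mul(const double I[3][3], const Vec3& w) {          // I * omega
    return { I[0][0]*w.x + I[0][1]*w.y + I[0][2]*w.z,
             I[1][0]*w.x + I[1][1]*w.y + I[1][2]*w.z,
             I[2][0]*w.x + I[2][1]*w.y + I[2][2]*w.z };
}
static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Log E = 1/2 m v.v + 1/2 w.(I w) and L = I*w each step; for a torque-free
// body both should stay constant, so systematic drift exposes hidden damping.
void logStep(int step, double mass, const Vec3& v, const Vec3& w,
             const double inertiaWorld[3][3])
{
    Vec3 L = mul(inertiaWorld, w);
    double E = 0.5 * mass * dot(v, v) + 0.5 * dot(w, L);
    std::printf("%d, E=%g, L=(%g, %g, %g)\n", step, E, L.x, L.y, L.z);
}
```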


/Kenneth
Julio Jerez
Posts: 26
Joined: Sat Jul 23, 2005 12:56 am
Location: LA

Post by Julio Jerez »

This is a very interesting comparison indeed.
Since the engine that comes out the worst in that benchmark is Newton, I hope that at least you will give me the opportunity to explain a few points.
These are some quotes from the document.
Page 35
reveals that Newton looses energy and therefore the angular momentum is also lost.
This is also visible in the simulation, the block will eventually stop moving.

Page 36
Figure 4.5 shows that with ODE, the rotational energy increases in the same way as the
angular momentum. With Newton the rotational energy oscillates and slowly decreases.
Novodex keeps the rotational energy at the initial level. When observing that and the
fact that the error in angular momentum is oscillating it is likely that Novodex only
preserves the value of the angular momentum vector and not the correct values in the
vector. This will ignore the gyroscopic effect.

Page 38
• Newton also gives the object velocity in the opposite direction. One thing that
differ from the simulation with ODE is that the energy and velocity is smaller.
The simulation also shows that the velocity is reduced in each time step which
means it must be damped. When observing the velocity values for each time step
it is found that the velocity is reduced to a factor of 0.9999 every time step. This
behavior is independent of the restitution.

Page 43
In figure 4.12 the position of the pendulum is plotted. The graph shows the position
in the x-dimension which is the interesting dimension since the pendulum is rotating
around the y -axis. The graph shows what was suspected when looking at the energy in
figure 4.11, both Novodex and Newton decrease the pendulum movement. Newton will
eventually stop the pendulum from moving. Compared to the other ODE only looses
little of it's movement.

....
And so on.
I only want to say that maybe the settings for angular and linear damping were not set to zero on creation of the bodies.
I would say the tests are measuring the default value of the damping coefficient K, not conservation of momentum.
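
To be clear about what I mean, roughly (a sketch from memory against the Newton 1.x C API; exact signatures may differ between SDK versions):

```cpp
#include <Newton.h>

// Sketch of the point being made: unless the damping is explicitly zeroed
// after the body is created, the default coefficients bleed a little energy
// every step, and a "conservation" test then measures that default.
void createUndampedBody(NewtonWorld* world, NewtonCollision* collision)
{
    NewtonBody* body = NewtonCreateBody(world, collision);

    dFloat angularDamp[3] = { 0.0f, 0.0f, 0.0f };
    NewtonBodySetLinearDamping(body, 0.0f);        // default is small but non-zero
    NewtonBodySetAngularDamping(body, angularDamp);
}
```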
..................

The other point is this, and I quote:
The tests were performed with up to 20 joints in a chain. Novodex behaved well in
the test. When the number of joints were close to 20 some elastic effects could be
seen. Newton behaved very bad in this test, when there were more than three joints in
the scene the simulation wasn't stable and the spheres moved around in a chaotic way.
It says that Newton cannot simulate three joints in a stable manner.
This is a very big surprise to me. I am not claiming Newton is infallible, as I have stated the conditions that make the engine unstable, and I never claimed that it was faster than any other engine.
However, I am fairly sure that the engine can simulate more than three joints, so I checked some demos that are still downloadable from their websites, and that use joints with the same version of Newton that was used for that benchmark.

http://www.dave.serveusers.com/oxNewton.html
http://newton.delphigl.de/playground_buggy_hinges.wmv
http://www.delphigl.de/misc/npg_skinned ... tapult.wmv
http://www.gametrailers.com/player.php? ... ov&pl=game
http://www.youtube.com/watch?v=kpJNiEnpCrM
http://www.youtube.com/watch?v=3Z_zxZlgtps
http://www.lri.fr/~devert/videos

You would think some of those demos do show more than three stable joints, with different mass ratios. Many of the videos and demos have executables and also runtime code that can be downloaded from their respective websites. Obviously these demos show a big discrepancy with the findings of the academic research.
I could go on, but it suffices to say that, considering that several thousand people are using Newton,
I would think that a constraint solver formulation as bad as the paper claims would show up more often with other users, regardless of the amount of tweaking regular users are doing.
Don't you agree, or am I wrong?


..............................
KenB wrote: The important result of this thesis is not the ranking, which was made for a special reason, i.e. user integration of a physics engine for Virtools, to be used in VR/Game projects at the Interactive Institute.

Instead, it is really the methodology that is important, with simulation experiments designed to analyze the physical validity of the engine, rather than subjective "plausibility" (hate that word...).
/Kenneth
You said that the tests try to prove the validity of the engine. Would you agree with me that the experiment is flawed, since it does not really measure conservation of momentum but rather the damping coefficient added to the integrator, which in the case of Newton can be set to zero but was not?

Finally:
Novodex has the highest grade and is the first choice for further tests. One of the
few negative things about this engine is that it is only free for non-commercial use.
ODE is also a very capable engine and completely free since it is open-source. Therefore
ODE was also chosen for the runtime tests. The third engine is Newton Game Dynamics.
The reason that it was chosen is mainly that it is free for commercial use and it
also got a higher grade in the evaluation than True Axis.
The three engines chosen for further testing are:
1. Novodex
2. ODE
3. Newton Game Dynamics
It is almost as if it is apologizing to the public for giving me third place, and as if the only thing of value in Newton is that it is free.

I really think tests like this could be very useful if they were impartial.
Unfortunately, I do not think they are impartial at all.

Maybe now that there are so many open source engines, all far and beyond superior to Newton according to the popular opinion of the experts of the establishment, the pool to select from will be bigger and better for the next test.

So I am going to ask, very respectfully, that unless the conditions of these tests are made public, Newton be left out of the circus.

So please feel free to prove me wrong on this.
Eternl Knight
Posts: 44
Joined: Sun Jan 22, 2006 4:31 am

Post by Eternl Knight »

Having read the document, Julio, I think you are a little off base in your analysis of the document. Whether this is deliberate or not, I cannot say, but your comments are somewhat misleading.

First of all, the "rating" you refer to, where Newton comes third, was primarily in reference to features, ease of use, and documentation. At that point in the document, no testing had been performed to check the correctness, speed, or stability of the engines. The way I read your comment, it implies this was a "final" rating of the engines evaluated - which is not the case.

Secondly, the paper mentions that there are factors that COULD be tweaked to improve the performance / accuracy of the engines tested, BUT they were only used for the joint constraint test. And when this was done, the original "non-tweaked" values were still presented for comparison. Again, reading your post, one would assume that the author tweaked the values to get the results he wanted without presenting the results of the non-tweaked engine.

While I agree that publishing the code used to evaluate the engines would make the document more valuable to the reader, I do not see the bias you seem to perceive. If there were an "open source" bias as you posit, I am left to wonder why two of the three engines run through the tests were non-open-source, with the best performing engine being Novodex (AGEIA's engine).

--EK
Antonio Martini
Posts: 126
Joined: Wed Jul 27, 2005 10:28 am
Location: SCEE London

Post by Antonio Martini »

Julio Jerez wrote:This is a very interesting comparison indeed.
Since the engine that comes out the worse on that benchmark is Newton, I hope that at least you give me the opportunity to explain a few points.
These are some quotes from the document.
Actually, the most interesting part I found about the Newton evaluation was the complexity plot in figure 4.17, where it seems that the Newton solver has at least O(n^2) complexity. However, I didn't do any regression of the plotted data.
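
Something like the following would do it; just a sketch, and the sample numbers below are invented placeholders, not values read from the plot:

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Least-squares slope of log(t) versus log(n). A slope near 1 means linear
// scaling, near 2 means quadratic. Substitute (n, t) pairs read off figure 4.17.
double growthExponent(const std::vector<std::pair<double, double>>& samples)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (const auto& s : samples) {
        double x = std::log(s.first);   // log n
        double y = std::log(s.second);  // log t
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    double nPts = static_cast<double>(samples.size());
    return (nPts * sxy - sx * sy) / (nPts * sxx - sx * sx);
}

int main()
{
    std::vector<std::pair<double, double>> samples =
        { {10, 0.4}, {20, 1.7}, {40, 6.9}, {80, 27.5} };  // placeholder data
    std::printf("estimated exponent: %.2f\n", growthExponent(samples));
}
```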

cheers,
Antonio
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

It is very hard to make a truly 'objective' test, especially regarding what you are actually measuring and how you combine your 'rating'.
At the time of the test, 2005, Bullet was still in its early stages, and my focus was on collision detection. So let's add some more viewpoints to the discussion:

Collision detection features, like proper convex hull and cylinder support, seem not to be rated highly. Another aspect is how easy it is to extend the collision detection with new custom shapes. With some engines (like Novodex/PhysX and ODE), extending collision detection with one new shape type is harder, because almost every entry in the N*N collision matrix is handled as a special case. A generic, reusable/extensible convex collision detection, not limited to just polyhedra, like GJK, has some benefits in my opinion. It can be useful and entertaining to come up with another 'support mapping' to define, for example, a convex wheel shape that matches better than a cylinder. But I admit I am biased towards a generic, flexible solution.
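
To illustrate what I mean by a support mapping (just a sketch, not Bullet code): GJK only ever asks a shape for its furthest point in a given direction, so a 'rounded wheel' can be described directly as a flat disc swept by a small sphere instead of being approximated by a cylinder:

```cpp
#include <cmath>

// Support mapping for a wheel-like convex shape: a flat disc of radius
// 'rimRadius' in the local x/z plane, Minkowski-summed with a sphere of
// radius 'tireRadius'. The support of a sum is the sum of the supports.
struct Vec3 { float x, y, z; };

Vec3 supportWheel(const Vec3& dir, float rimRadius, float tireRadius)
{
    // Furthest point of the flat disc in direction 'dir'.
    float planarLen = std::sqrt(dir.x * dir.x + dir.z * dir.z);
    Vec3 p = { 0.0f, 0.0f, 0.0f };
    if (planarLen > 1e-6f) {
        p.x = rimRadius * dir.x / planarLen;
        p.z = rimRadius * dir.z / planarLen;
    }
    // Add the rounding sphere: offset along the normalized query direction.
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    if (len > 1e-6f) {
        p.x += tireRadius * dir.x / len;
        p.y += tireRadius * dir.y / len;
        p.z += tireRadius * dir.z / len;
    }
    return p;
}
```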

Another aspect that matters in my opinion is the quality of the source code itself. I can see this is not a topic for many of us, but when you tightly integrate the engine with your game, it might become an issue. For example, although it has interesting deformable physics features, OpenTissue would be a no-go for me, just because of the extreme usage of template classes. Novodex/PhysX is mostly a black box, which makes finding certain bugs very tricky, so you rely on their support. I remember from my Havok days that a lot of the sources were exposed, and some users really benefited from that by extending, optimizing and customizing the physics engine around their game engine.
From that perspective, Bullet is very similar to Havok, and ODE is more similar to PhysX. This aspect tends to be very subjective too, perhaps some coding style conventions are a matter of taste.

It is easier to measure performance, and right now the Bullet constraint solver is less optimized than, for example, ODE quickstep. ODE has some smart cache-friendly optimizations, whereas Bullet has a lot of random memory access and redundant calculations inside the inner loop. For example, ODE incrementally updates the lambda and other data, whereas in Bullet's sequential impulse, a full impulse is applied to the rigid body, including updating its velocity. In my measurements, ODE quickstep in isolation is around 2 to 3 times faster than Bullet at the moment, which I obviously will sort out. Hopefully before another such comparison paper comes out ;-)
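
To make the difference concrete, here is a 1-D toy sketch of a sequential-impulse style inner loop (illustration only, not the actual Bullet or ODE source):

```cpp
// Toy 1-D sequential impulse: each constraint row computes an impulse and
// writes it straight back into the body velocity. Quickstep-style solvers
// instead keep lambda plus precomputed J*M^-1 rows in tightly packed
// solver-local arrays, which is where the cache friendliness comes from.
struct Body { float invMass; float velocity; };   // dynamic bodies only (invMass > 0)

struct ContactRow {
    Body* body;
    float targetVelocity;     // rhs: desired velocity along the row
    float accumulatedImpulse; // clamped accumulator for the contact
};

void solveSequentialImpulse(ContactRow* rows, int numRows, int iterations)
{
    for (int it = 0; it < iterations; ++it) {
        for (int i = 0; i < numRows; ++i) {
            ContactRow& r = rows[i];
            // Impulse that removes the residual velocity error on this row.
            float impulse = (r.targetVelocity - r.body->velocity) / r.body->invMass;
            // Clamp the accumulated impulse so contacts only push, never pull.
            float oldAccum = r.accumulatedImpulse;
            float newAccum = oldAccum + impulse;
            r.accumulatedImpulse = (newAccum > 0.0f) ? newAccum : 0.0f;
            impulse = r.accumulatedImpulse - oldAccum;
            // Full write-back into the body state on every row (the random
            // access mentioned above, since real rows touch two 6-DOF bodies).
            r.body->velocity += r.body->invMass * impulse;
        }
    }
}
```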

It's best not to flame-war, but just put things into perspective and laugh about it.

Cheers,
Erwin

Julio Jerez wrote: [full post quoted above]
Last edited by Erwin Coumans on Tue Dec 05, 2006 5:10 pm, edited 1 time in total.
Julio Jerez
Posts: 26
Joined: Sat Jul 23, 2005 12:56 am
Location: LA

Post by Julio Jerez »

Eternl Knight wrote:Having read the document, Julio, I think you are a little off base in your analysis of the document. Whether this is deliberate or not, I cannot say, but your comments are somewhat misleading.

First of all, the "rating" you refer to where Newton comes third was in reference primarily to features, ease of use, and documentation. At that part of the document, there had been no testing performed to check the correctness, speed, or stability of the engines. The way I read your comment implies this was a "final" rating on the engines evaluated - which is not the case.

Secondly, the paper mentions that there are factors that COULD be tweaked to improve the performance / accuracy of the engines tested BUT they were only used for the joint constraint test. And when this was done, the original "non-tweaked" values were still presented for comparison. Again, reading your post, one would assume that the author tweaked the values to get what the results he wanted without presenting the results of the non-tweaked engine.
--EK
Actually, the tests were tweaked in the cases where the two other candidates performed worse than Newton. Furthermore, in the cases where Newton performed better than the other two candidates, the experimenter ended the test with a disclaimer:
Page 41
This test doesn't show the whole truth. There are many more things that are important
when running a simulation with constraints so these results should be read with
the other constraint tests in mind.

A factor to bear in mind is that there are ways to tune the parameters to make the
simulation better, e.g., in ODE you can set an ERP parameter which might make the
simulation more close to a horizontal line.
Here Newton shows better tolerance to mass ratios, and the experimenter chooses to disregard the test, minimizing its importance. He even suggests that some magical ERP value might make the 45-degree ramp and the random energy oscillation typical of any iterative linear system solver disappear, when in fact these oscillations are an intrinsic property of the algorithm, as any eigenvector analysis will demonstrate.
But he insists that the reason is some kludge that is not set properly.
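
For reference, this is the standard eigenvector argument I am referring to (a sketch, not taken from the paper):

For a stationary iterative solver the error obeys
$$
e_{k+1} = G\,e_k, \qquad e_k = \sum_i c_i \,\lambda_i^k\, v_i,
$$
where $G$ is the iteration matrix with eigenpairs $(\lambda_i, v_i)$. Whenever some $\lambda_i$ are negative or complex, which can easily happen for constraint systems, the terms $\lambda_i^k$ alternate in sign or rotate in phase as $k$ grows, so the residual, and with it the measured energy, decays in an oscillatory way rather than monotonically. And at least in the usual formulation, an ERP-style error-reduction term enters through the constant vector $c$ in $x_{k+1} = G x_k + c$, not through $G$ itself.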
Page 49
All the simulations look good. There is no simulation that behaves like any other. The
pile of boxes may fall in different directions depending on which engine is used.
There are a lot of tricks that can be used to make the pile be more stable, e.g., freeze
different parts of the pile that is not affected by any external forces.
With Novodex the pile falls after 15 boxes, Newton after 18 boxes and ODE after
13 boxes.
Again, another test where Newton performs better, and again the same disclaimer attributes the behavior to some kind of trick.
The point is that there are no tricks; in fact, all of the demos in the Newton SDK demonstrate stacking with mass ratios of 20 to 1 and with auto-freeze off.

Now, if you compare those tests to this one on
Page 48
Both Novodex and ODE increases linearly when boxes are added. Novodex is the fastest
of them all, ODE is second best. Newton is slower than the other two and also has a
steeper slope. This means that the computation time per colliding pair increases faster
resulting in a more expensive simulation. The oscillations in Newtons graph could be
explained by instability in the pile resulting in less contact points.
He is happy to jump immediately to the erroneous conclusion that Newton has a steeper slope than the others; furthermore, he goes on to say that the oscillation in the graph could be explained by instability. But if he knew about optimization methods, he would know that what the curve is showing is the extra time spent moving variables in and out of the active and inactive sets.
He ignored completely the tests where Newton showed perfect stability, and he totally confuses the plot of a quadratic curve with the slope of a linear algorithm.

Any fairly seasoned engineer or scientist looking at that plot will immediately deduce that the curve is the telltale sign of an algorithm with quadratic time complexity.
Case in point:
AntonioMartini wrote: actually the most interesting part i found about the Newton evaluation was the complexity plot in figure 4.17 where it seems that the Newton solver has at least O(n^2) complexity. However i didn't do any regression of the plotted data.

cheers,
Antonio
The point is that this is not a secret, as I have stated on several occasions. In fact, every time a person registers on the forum asking for high performance, I am the very first one to discourage them and send them to try other open source solutions.





You say the rating was based on the features of each engine, ease of use, and documentation.

Features: do you really believe that ODE and Novodex, at the time of the test, had more features than Newton? Not counting concave mesh-to-mesh collision, I do not think that Newton is missing any feature required by a rigid body simulator.
If you think I am wrong, please list what those features are, because I do not see them in the document.

Ease of use: I think it is safe to say that this is a very subjective appraisal. However, when you read the document you can see how everything is explained in terms of ODE settings and documentation; you tell me who would win that round.

Documentation: you speak about documentation, and you are probably right that Newton is not documented very well. But here is the point: I do not think that he had any problem using Newton at all; he just chose not to set the initial conditions of the rigid bodies to have zero damping coefficients on initialization. Notice I am not talking about tweaking Newton; I am speaking about setting the initial parameters on creation of the bodies, which is no different from creating a body with a specified mass value.


Publications about Newton and me have been made before; I have seen others.
I do not think I have seen one that claims Newton cannot simulate three joints, though. Have you?
You tell me who is off base.
Eternl Knight
Posts: 44
Joined: Sun Jan 22, 2006 4:31 am

Post by Eternl Knight »

As Erwin has suggested/requested that this thread not devolve into a flamewar, and given the (I feel) "confrontational" tone of Julio's last post, I am bowing out of this thread in an attempt to keep the peace.

I do not agree/concede to all that has been said, but I don't feel I can pursue the debate without emotions rising on both sides (I react strongly when pushed).

Apologies,
EK
Julio Jerez
Posts: 26
Joined: Sat Jul 23, 2005 12:56 am
Location: LA

Post by Julio Jerez »

Oh, but it is not confrontational at all; you and perhaps some others around here may take it like that for reasons that I do not know.
I am trying to contest, or to find an explanation for, the incorrect assessments stated against my work.
This publication is also on the net, and that's fine with me; anybody is entitled to publish any kind of misinformation, however erroneous it may be. However, for some strange reason the owners of this forum have decided to endorse it, knowing very well that what is said there is very incorrect, misleading, or at the very least was written by individuals with very poor knowledge of what they were doing.

This is a summary of the more blatant false statements in the paper (there are many more):
1. Newton does not conserve momentum, when in fact the objects have a damping coefficient that can be set to a minimum value (and I believe the value was zero at the time of the test).
2. Newton is so wrong that it cannot simulate three stable joints.
3. Newton is the least feature-complete physics engine.
4. Any test where Newton did better is dismissed as unimportant or attributed to a lack of good calibration of the competitors.
I do not think you are the person who wrote the paper, and I know you have high contempt for non-open-source technologies, in particular for Newton, so maybe you are not a user of it, or let us just say you are not in a position to answer my questions. So I really appreciate that you decided to stay out of this and let KenB or the real author of the paper answer the points.

After all, the paper is saying things that I consider biased, grossly incorrect, and carefully written so as not to upset the establishment and the status quo.
Thank you very much for staying out.
KenB
Posts: 49
Joined: Sun Dec 03, 2006 12:40 am

Regarding the conclusions in the physics engine comparison

Post by KenB »

Hi,
I haven't read this thread for some time.

The conclusions from the report should certainly not be taken as the final truth, not from the time the tests were conducted and certainly not now.
They show an attempt at implementing test cases in a number of engines for comparison, and I would expect any test like this to contain mistakes in implementation or errors in parameters, and therefore the tests should be open, and iterated on until everyone is happy (or not...).

The obvious way to do all this is to design these (and other) scenes using e.g. Collada, specify exactly what should be simulated and then measured, plotted and tabulated, and then let people work on getting the most out of their respective engines. This would all have been done in Collada in the project, had it not been that the task from the researchers at the Interactive Institute was to evaluate a physics engine for integration with Virtools. The students integrated the engines with Virtools and conducted all the testing inside that environment, and therefore the code is rather difficult to reuse. The original task was to evaluate the physics engines without doing any systematic testing, so we suggested a number of cases where we know that e.g. linear solvers can have problems, where oversimplified friction models show up, and where constraint violations become obvious, etc.

I was involved in designing these test cases, but certainly not even close to looking at the actual implementation or code, which I will not defend.
The only thing I defend is the principle of using a well thought out test battery for testing a physics library.

Most examples are easy enough to implement though, so if anyone feels like doing it in e.g. Collada, that would be great! In addition, I expect that we will release a battery of at least some 10-12 test cases during this year, for stress testing physics libraries beyond the "plausibility" (still hate that term) of blow-em-up and debris.
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

I am happy to assist with the COLLADA Physics side of things, and collaborate on this with you. I'll get in touch with you privately to discuss some details.

Thanks for getting back on this!
Erwin
Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Post by Dirk Gregorius »

for stress testing physics libraries beyond "plausibility" (still hate that term) of blow-em-up and debris.
I don't get this point. The main purpose of AGEIA, Havok, and also Bullet is physical simulation in games. You will not sell more licenses of your engine because it is more physically correct than another. The most important things are plausibility, robustness, speed, and ease of use. If it is also physically correct, that is nice; if not, it really doesn't matter.

I suggest reading: M. Blum: Using Dynamics in Disney's Production Environment

A quote from the slides:
"Don't want "real" physics but "animation" physics."

You can find it in the Baraff rigid body Siggraph presentation from 1997.


I agree that well-defined testbeds are a nice thing to have, but physical correctness is not the major criterion. This totally misses the point.
Antonio Martini
Posts: 126
Joined: Wed Jul 27, 2005 10:28 am
Location: SCEE London

Post by Antonio Martini »

Dirk Gregorius wrote:
for stress testing physics libraries beyond "plausibility" (still hate that term) of blow-em-up and debris.
I agree that well-defined testbeds are a nice thing to have, but physical correctness is not the major criterion. This totally misses the point.
to me "physical correctness" is one among the various criteria that should be tested.

if a stress test is meant to help people to choose the most suitable physics engine for their purposes, the more information is available the better.

It can actually save a lot of time and surprises to know in advance the major weak points of each engine. Everybody is still free to discard the information that isn't important to them.

Given that physics engines are in continuous development, a single static stress test would not have very long-lasting value.

cheers,
Antonio
Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Post by Dirk Gregorius »

I agree that it should be tested among various other criteria, but I wouldn't consider a less physically correct physics engine the weaker one. Actually, in my experience, correct physics often doesn't look good or doesn't give the expected results (to artists or designers). So a physics engine that creates the expected visual results cannot be considered the weaker one.

Cheers,
-Dirk
Etherton
Posts: 34
Joined: Thu Aug 18, 2005 6:27 pm

Post by Etherton »

I think that as long as all criteria are included in the test scenes, it is irrelevant whether they test correctness or not, because that will be embedded in the input data. A particular API will get a score that can be low or high on scenes that emphasize the correctness aspect.
It will be like a SPEC for physics, if you will; it will be up to the end users to go for whatever engine they like.
In addition, the score can include things like open source, documentation, hardware support, native Collada support, feature-set completeness, and so on.
That way, for example, end users for whom access to source is the more important aspect can select among the open source engines with the highest score, while users who need better physical correctness can go for the engine with the better score in the tests focusing on correctness.
Dirk Gregorius wrote:I agree that it should be tested among other various criteria, but I wouldn't consider a less physical correct physic engine the weaker one. ..Cheers,
-Dirk
Um, that does sound rather strange; what would you consider the strongest point then?