[ODE] Future of physics processors and IM API
Kenneth Bodin Holmlund
holmlund at hpc2n.umu.se
Sun May 15 09:45:53 MST 2005
Hi,
I keep thinking how different interactive physics is from what 3D
graphics has ever been, except perhaps lately, with shaders.
Interactive physics is a moving target, and its algorithms are far
from mature. Game physics engines handle massive stacking in a highly
approximate way, and have problems with even the simplest machinery.
Systems with kinematic loops, realistic friction and jamming cannot be
handled at all yet. Convergence is very poorly understood, and the
algorithms are not time-deterministic if errors are constrained:
Gauss-Seidel and Rattle/Shake methods, for example, converge fairly
well for a while, but rarely give you the correct answer - in fact,
they typically converge very slowly to the wrong answer!
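To make the Gauss-Seidel point concrete, here is a minimal sketch of a
projected Gauss-Seidel sweep on a toy linear complementarity problem
(LCP), the kind of problem iterative contact solvers pose. The 2x2
system is invented for illustration; being small and well conditioned
it converges quickly, whereas the large, ill-conditioned systems that
arise from stacking are exactly where the slow or wrong convergence
shows up.

```python
# Projected Gauss-Seidel on a tiny LCP: find x >= 0 such that
# A x - b >= 0 and x . (A x - b) = 0.  Iterative game-physics contact
# solvers sweep their constraints in essentially this fashion.
# The matrix, right-hand side, and iteration count are invented here.

def projected_gauss_seidel(A, b, iters):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Gauss-Seidel update using the freshest values of x[j]
            r = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = max(0.0, r / A[i][i])   # projection onto x >= 0
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = projected_gauss_seidel(A, b, 50)
print(x)   # converges toward (1/11, 7/11) for this well-conditioned system
```

Note that nothing in the sweep bounds the error after a fixed number of
iterations; the iteration count is a budget, not an accuracy guarantee.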
Such algorithmic problems never really existed for 3D graphics, which
was algorithmically well defined early on (at least since the Z-buffer
was established) and stable for a decade.
Interactive physics is extremely state dependent, whereas 3D graphics
mainly depended on the number of triangles and the fill rate, and not
much on connectivity or dynamic state.
This difference certainly also complicates scalability, and it isn't
quite clear how one should prepare an app to exploit 50 times the
physics simulation performance, especially when that mainly just gives
you larger systems with even larger and uncontrollable errors! I think
this is a major problem!
Though I don't know much about the PhysX chipset from Ageia, it is
quite obvious that it must be a vector chipset exploiting the leap
such processors are taking this year. The Xbox 360 and PS3 crunch
numbers at 1 teraflop with fantastic price/performance, as do related
technologies for GPUs, and e.g. MDGRAPE-3 for molecular dynamics. I
question whether PhysX will really give much better performance than
e.g. Cell technology, which is much more general purpose. If not, you
will be left with a technology that provides a highly approximate and
ill-defined solution for large systems, and methods you cannot
influence, because they are proprietary and because the hardware is
limited.
Parallelism and vectorization are now also arriving broadly in
general-purpose consumer computing, and with graphics becoming more
and more general purpose (shaders), it is not entirely obvious that
even graphics cards have a bright future on a 5-7 year horizon,
particularly not in mobile devices. Time-to-market and business model,
including stable developer and consumer platforms, are absolutely
everything though, and hard to predict, so I don't dare make any
predictions from technology arguments alone.
I think the most important steps are still to be taken at the
algorithmic level, while there will be a plethora of roughly
equivalent technologies that give you 50-100 times the performance you
get from an ordinary Pentium today - whatever type of number crunching
you are into.
/Ken
>In my opinion a better analogy is between NovodeX and Performer (or some
>other retained-mode graphics API). What I would like to see is an
>equivalent to Glide or OpenGL for physics, i.e. dealing with buffers full
>of physics state and issuing immediate-mode commands to the hardware.
>
>I am not so concerned with whether this is a proprietary API (like
>Glide) or an open standard such as OpenGL.
>
>Presumably the Novodex guys have some sort of internal API (however
>poorly defined) for communicating with the hardware; this is what I
>would really like to see exposed. I expect it is similar to Glide, i.e.
>very much tied to the particular hardware architecture, but it would be
>a start, and a more solid/general API could be built up from that.
>
>As for Ageia being in an uncomfortable/difficult position, no argument
>here. The mind boggles at how tough it must be to debug a hardware
>physics simulator; tracking down errors in software is difficult
>enough :-(
>
>David
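The buffer-plus-commands model David describes could be sketched
roughly as follows. Every name here is invented for illustration - it
is not NovodeX's internal API - and the step routine is a trivial
integrator standing in for whatever the hardware would do with the
submitted buffers.

```python
# Hypothetical immediate-mode physics interface in the spirit of
# Glide / OpenGL: the app owns flat buffers of body state, submits
# them, issues step commands, and reads the results back.
# All class and method names below are invented for this sketch.

class ImmediatePhysics:
    def __init__(self):
        self.pos = []   # flat per-body state buffers
        self.vel = []

    def submit_bodies(self, positions, velocities):
        # Hand the raw state buffers to the "device"
        self.pos = list(positions)
        self.vel = list(velocities)

    def step(self, dt, gravity=-9.81):
        # Stand-in for the hardware consuming the buffers:
        # a trivial semi-implicit Euler step under gravity
        for i in range(len(self.pos)):
            self.vel[i] += gravity * dt
            self.pos[i] += self.vel[i] * dt

    def read_back(self):
        # Copy the state buffer back to the app
        return list(self.pos)

api = ImmediatePhysics()
api.submit_bodies([10.0, 20.0], [0.0, 0.0])
for _ in range(10):
    api.step(0.01)
print(api.read_back())
```

The point of the shape, as with Glide, is that the app controls the
data layout and the command stream, so a more general retained-mode
API could be layered on top rather than baked into the hardware.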
More information about the ODE
mailing list