[ODE] CULLIDE vs OPCODE

Sam Hale thestormrunner at yahoo.com
Tue Aug 17 23:22:23 MST 2004


First of all, I'd like to say that I appreciate you stating your opinions, 
as they help me gauge the extremes of the group.

BeginRant(){
         The problem is this: Even though most of the equations for 
computational physics have been around for years, the only ones to 
implement them at the hardware level are academics and a few government 
contractors. The unwashed masses have yet to see a mass-marketed 
product. These same masses regularly pay $500+ for graphics chips and 
other individual components, yet no one has made a physics chip for 
that market. In lieu of such a device, a GPU is the closest 
alternative. I do not believe using a GPU as a collision detection 
processor is in any way elegant, but it may get the job done until 
dedicated hardware is available.
}
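
For anyone curious what "GPU as collision processor" means in practice: 
as I understand it, CULLIDE (the paper in the subject line) prunes its 
potentially-colliding set with image-space occlusion queries. Below is a 
rough sketch of that core test, not the paper's exact algorithm. It 
assumes an active OpenGL 1.5+ context, and drawObject() is a 
hypothetical placeholder; the real method repeats this in both render 
orders and along multiple view axes.

// Sketch of a CULLIDE-style occlusion-query pruning test (my reading
// of the idea, not the paper's exact algorithm).  Assumes an active
// OpenGL 1.5+ context; drawObject() is a hypothetical placeholder
// that issues one object's geometry.
#include <GL/gl.h>
#include <vector>

extern void drawObject(int id);   // hypothetical: render object's triangles

// Returns true if object 'id' may overlap the union of 'others' along
// the current view axis (i.e., it cannot be pruned this pass).
bool mayCollide(int id, const std::vector<int>& others)
{
    glClear(GL_DEPTH_BUFFER_BIT);

    // Pass 1: lay down the depth of everything else.
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    for (size_t i = 0; i < others.size(); ++i)
        drawObject(others[i]);

    // Pass 2: count fragments of 'id' that lie at or behind the stored
    // depths.  Zero such fragments means the object is entirely in
    // front of the rest, so it cannot be colliding in this view.
    GLuint query, samples;
    glGenQueries(1, &query);
    glDepthMask(GL_FALSE);            // query only, don't touch the buffer
    glDepthFunc(GL_GEQUAL);           // reversed test: "behind or touching"
    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawObject(id);
    glEndQuery(GL_SAMPLES_PASSED);
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
    glDeleteQueries(1, &query);

    return samples > 0;               // some depth overlap: keep it
}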

The bandwidth question is the hardest for me to predict. Theoretically, 
a system with two PCI Express x16 slots could handle two GPUs. That 
would allow one to paint the screen while the other takes the load of 
the geometric transforms for collision detection off the main CPU. 
That's my plan, anyway. :)
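
On the CPU side, that split might look like two submission threads 
feeding two cards. A minimal sketch of just that orchestration follows; 
the two submit functions are stand-ins for whatever API actually feeds 
each GPU, so only the threading structure is the point.

// Minimal sketch of the two-GPU split at the CPU orchestration level.
// The two submit functions are hypothetical stubs, not a real driver
// API; the point is the parallel structure.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<bool> running{true};

void submitRenderFrame()    { std::puts("GPU 0: draw frame"); }       // stub
void submitCollisionBatch() { std::puts("GPU 1: transform batch"); }  // stub

int main()
{
    // One thread paints the screen on the first card...
    std::thread render([] {
        while (running) submitRenderFrame();
    });
    // ...while the other feeds collision transforms to the second
    // card, keeping that work off the main simulation loop.
    std::thread collide([] {
        while (running) submitCollisionBatch();
    });

    // The simulation proper would run here; stop after a moment.
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    running = false;
    render.join();
    collide.join();
}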

-Sam H

At 11:23 PM 8/16/2004, you wrote:
> >>I think using an ATI X800 to do collision detection would be sweet, 
> don't you?
>
>Since you solicited opinions, I guess I will offer one.  No, I don't think 
>it would be 'sweet'.  I think it stinks of a grotesque and horrible 
>hack.  Let GPUs continue to do what they are good at, which is streaming 
>vertices and pixels.  Warping them to solve problems they were not 
>designed to solve, by placing collision data into texture assets and frame 
>buffers, seems horribly inappropriate to me.  A Frankenstein approach to 
>problem solving.
>
>"Look Ma, isn't it cool I figured out how to use my toaster oven as a snow 
>blower?"
>
>Once you start consuming bandwidth on your GPU to execute algorithms it 
>wasn't designed to do, stealing precious resources needed for high poly 
>density meshes and pixel processing, I find it hard to believe it is a 
>'winning' scenario.  Instead, focus on optimizing your cache-coherent 
>multi-threaded CPU-based algorithms to work in perfect parallel concert 
>with the GPU to gain maximum throughput with minimum stalls.
>
>What you think sounds 'cool' sounds like a horrific hack to me.
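
For what it's worth, a classic instance of the cache-coherent CPU 
broadphase the quoted post advocates is sweep-and-prune over a flat 
array of AABBs. A minimal single-axis sketch, with all names 
illustrative:

// Minimal sweep-and-prune broadphase along the x axis: sort boxes by
// min-x, then sweep, so each box is only tested against the few boxes
// whose x intervals actually reach it.  The flat array keeps the
// sweep cache-coherent.  Names are illustrative.
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

struct Aabb { float min[3], max[3]; int id; };

bool overlap(const Aabb& a, const Aabb& b)
{
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i]) return false;
    return true;
}

// Appends every potentially colliding (id, id) pair to 'pairs'.
void sweepAndPrune(std::vector<Aabb>& boxes,
                   std::vector<std::pair<int, int> >& pairs)
{
    std::sort(boxes.begin(), boxes.end(),
              [](const Aabb& a, const Aabb& b) { return a.min[0] < b.min[0]; });

    for (size_t i = 0; i < boxes.size(); ++i)
        for (size_t j = i + 1; j < boxes.size(); ++j) {
            if (boxes[j].min[0] > boxes[i].max[0]) break; // past the sweep window
            if (overlap(boxes[i], boxes[j]))
                pairs.push_back(std::make_pair(boxes[i].id, boxes[j].id));
        }
}

int main()
{
    std::vector<Aabb> boxes = {
        {{0, 0, 0}, {2, 2, 2}, 0},
        {{1, 1, 1}, {3, 3, 3}, 1},   // overlaps box 0
        {{5, 0, 0}, {6, 1, 1}, 2},   // overlaps nothing
    };
    std::vector<std::pair<int, int> > pairs;
    sweepAndPrune(boxes, pairs);
    for (size_t i = 0; i < pairs.size(); ++i)
        std::printf("potential collision: %d %d\n",
                    pairs[i].first, pairs[i].second);
}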
