[ODE] Floating point error propagation

Jeff Shim necromax at kebi.com
Wed May 21 21:53:02 2003


In the case of a recursive neural network, signals are recursively multiplied or summed more than a thousand times, so rounding errors eventually propagate into the result.
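For illustration (this example is mine, not from the original message), even a plain repeated addition in double precision drifts away from the exact value once the operand is not exactly representable in binary:

#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    int i;

    for (i = 0; i < 100000; ++i)
        sum += 0.1;   /* 0.1 has no exact binary representation */

    /* The exact answer is 10000, but the accumulated sum drifts. */
    printf("sum   = %.15f\n", sum);
    printf("error = %g\n", sum - 10000.0);
    return 0;
}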

Although I used double precision, I still could not get an exact result.

Are there any methods or options to avoid this?

Maybe it is an inherent limitation of the FPU.
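
One general technique for reducing accumulated rounding error in long sums is compensated (Kahan) summation. The sketch below is a generic illustration, not something specific to this thread or to any particular network code:

#include <stddef.h>

/* Kahan (compensated) summation: keep a running correction term for the
 * low-order bits that each addition would otherwise discard.
 * Note: aggressive optimizations such as -ffast-math can defeat the
 * compensation, so compile with strict floating-point semantics. */
double kahan_sum(const double *x, size_t n)
{
    double sum = 0.0;   /* running total        */
    double c   = 0.0;   /* running compensation */
    size_t i;

    for (i = 0; i < n; ++i) {
        double y = x[i] - c;   /* apply the correction                 */
        double t = sum + y;    /* low-order bits of y may be lost here */
        c = (t - sum) - y;     /* recover what was lost                */
        sum = t;
    }
    return sum;
}

This does not make the result exact, but it keeps the accumulated error roughly independent of the number of terms instead of growing with it.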

