[ODE] Cross-platform woes

Keith Johnston keithj_pinoli at yahoo.com
Sun Aug 21 16:18:56 MST 2005


I have made some progress that I thought I would report.  By using double
precision in ODE, and then rounding back to float before each time step,
the simulations on the Mac and the PC are much, much closer.

The problem now seems to be the OPCODE library - it uses single precision.
If I could get it to use double precision and then do the same rounding
back to float, I think I might have a "good enough" situation.
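
Roughly, the trick looks like this - a sketch only, with placeholder names
rather than ODE's actual internals:

#include <stdio.h>

/* Placeholder sketch, not ODE's real API: integrate in double precision,
   then snap each state value back to float precision before the next
   step, so both machines start every step from exactly the same bits. */
static double snap_to_float(double x)
{
    return (double)(float)x;    /* throw away the extra mantissa bits */
}

int main(void)
{
    double pos = 0.0, vel = 0.00004762411117553711;
    const double dt = 0.01;
    int i;

    for (i = 0; i < 1000; i++) {
        pos += vel * dt;            /* step in double precision    */
        pos = snap_to_float(pos);   /* ...then round back to float */
        vel = snap_to_float(vel);
    }
    printf("pos = %.17g\n", pos);
    return 0;
}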

BTW, I found a simple test program that produces different results on the
Mac and the PC - NOTHING I tried would make it print the same result on
both machines!  I think there is a fundamental difference between the
processors.

If someone can find a way to make this program produce the same answer on
the PC and the Mac, that might help me as well.

#include <stdio.h>
#include <math.h>

int main(void)
{
   float b[3], c[3];

   b[0] = 0.0000476837158203125;
   b[1] = 0.00004762411117553711;
   b[2] = -0.5605297088623047;

   c[0] = 0.000034332275390625;
   c[1] = -0.5604821443557739;
   c[2] = -0.5604686737060547;

   /* one component of a cross product, all in single precision */
   float f1 = b[1]*c[2] - b[2]*c[1];

   /* print the raw bit pattern of the result */
   printf("0x%08X\n", *(unsigned int *)&f1);
   return 0;
}

On the PC, the result is  0xBEA0DDFB
On the Mac, the result is 0xBEA0DDFC

If I change b and c to doubles and then round f1 to a float, the result is
0xBEA0DDFB, which makes me think the PC's result is the correct one.
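
In other words, the double version is just this (a sketch of the change,
not the exact file I built):

#include <stdio.h>

int main(void)
{
   double b[3], c[3];

   b[0] = 0.0000476837158203125;
   b[1] = 0.00004762411117553711;
   b[2] = -0.5605297088623047;

   c[0] = 0.000034332275390625;
   c[1] = -0.5604821443557739;
   c[2] = -0.5604686737060547;

   /* same cross-product term, computed entirely in double precision
      and only rounded to float at the very end */
   float f1 = (float)(b[1]*c[2] - b[2]*c[1]);

   printf("0x%08X\n", *(unsigned int *)&f1);
   return 0;
}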

I compiled the PC version like this: cl /Op test.cpp

(improve floating point consistency)

The Mac version: g++ -ffloat-store -mno-fused-madd test.cpp

(don't keep floating-point values in registers; don't use fused
multiply-add instructions)
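
As an illustration of what -ffloat-store is working around (my own toy
example, not ODE code): the x87 unit can hold intermediates in 80-bit
registers, and forcing every intermediate out to an actual float in memory
makes each rounding step explicit:

#include <stdio.h>

/* Force a value through a float in memory so it cannot stay in an
   extended-precision register between operations. */
static float force_round(float x)
{
    volatile float tmp = x;   /* volatile defeats register caching */
    return tmp;
}

int main(void)
{
    float b1 = 0.00004762411117553711f, b2 = -0.5605297088623047f;
    float c1 = -0.5604821443557739f,    c2 = -0.5604686737060547f;

    /* round after every intermediate product, not just at the end */
    float p1 = force_round(b1 * c2);
    float p2 = force_round(b2 * c1);
    float f1 = force_round(p1 - p2);

    printf("0x%08X\n", *(unsigned int *)&f1);
    return 0;
}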

Keith

--- Charlie Garrett <charlie.garrett at gmail.com> wrote:

> Intel floating point units tend to keep extra bits of precision in
> their registers, with those bits being discarded when a value is
> written to memory.  Your compiler should have a setting for strict
> IEEE floating point compliance, so check that it is on for the PC.
>
> I have both a Mac and a PC, but I have never tried running ODE on
> the PC.  If you continue to have problems, maybe I can look into it
> further.
>
> --
> Charlie Garrett
>
> > On 15 Aug 2005, at 06:33, Keith Johnston wrote:
> >
> >> Having finally conquered repeatability on the PC from one
> >> simulation run to the next, I am now trying to make simulations
> >> on the PC and the Mac run *identically*.  I have tried truncating
> >> the physical properties to 2 decimal places, but this has not
> >> solved the problem.  The same initial conditions on a simulation
> >> on the PC and the Mac still diverge.
> >>
> >> Perhaps the only solution is to give up on Visual C++ and use gcc
> >> on Windows - that way at least I'd be using the same compiler on
> >> both operating systems.  Currently we are using Xcode on the Mac
> >> and Visual Studio on Windows.
> >>
> >> Has anyone ever gotten exactly the same simulation results on
> >> both Mac and PC?  How did you do it?
> >>
> >> Thanks,
> >> Keith



		