Floating point optimizations

Is running Julia with --math-mode=ieee roughly equivalent to compiling a C++ project in MSVC with /fp:strict?

Yes, it overrides all @fastmath annotations.


Note that as of 1.8 (or 1.9, I forget which), --math-mode=fast has been disabled (it’s now a no-op), and even before that you shouldn’t have used it.


Thank you guys for the answers!

To be more specific: if I run the same computations on two different architectures, does --math-mode=ieee guarantee that I will get the same results?

It does not. Some operations like muladd can be compiled in different ways (either as a single fma or as a separate multiply and add), and things like multithreading or random number generation can also lead to different results between different computers, or between different runs on the same computer.
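As a sketch of why the fused-vs-separate distinction matters, here’s a contrived Float64 example (values chosen purely for illustration) where fma and a separate multiply-add round differently:

```julia
# Contrived values where a*b rounds away information that fma preserves.
a = 1.0 + 2.0^-30
b = 1.0 - 2.0^-30
c = -1.0

separate = a * b + c     # a*b = 1 - 2^-60 rounds to 1.0, so this is 0.0
fused    = fma(a, b, c)  # computed exactly, then rounded once: -2^-60

# muladd(a, b, c) is allowed to compile to either of the above,
# which is one reason results can differ across architectures.
```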

IEEE754 compliance ensures that primitives such as +, -, *, /, sqrt, rem, fma return the exact same results and aren’t transformed or reassociated without an explicit annotation like @fastmath or @simd or the use of muladd (for example, a/b becoming a*(1/b), (a+b)-c becoming a+(b-c), or a*b+c becoming fma(a,b,c)). As others have said, --math-mode=ieee at the command line disables these annotations globally (except maybe muladd? not sure on that one).
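The classic illustration of why reassociation changes answers: Float64 addition is not associative, so a compiler that regroups an expression like (a+b)+c can change the rounded result.

```julia
# Same real-number value, different rounding depending on grouping.
left  = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
left == right  # false
```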

IEEE mode does not guarantee that functions like sum won’t have input- or architecture-dependent association orders (although foldl(+,itr) should be a consistent but slower alternative) or that non-primitive functions like exp, sin, or ^ will return the same results.
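A sketch of that sum-vs-foldl difference, with values contrived so the association order is visible in the result:

```julia
xs = [1.0, 1.0e100, 1.0, -1.0e100]

# foldl fixes a strict left-to-right order:
# ((1.0 + 1e100) + 1.0) + (-1e100) == 0.0, because each +1.0 is absorbed.
foldl(+, xs)  # 0.0, deterministically

# sum makes no such ordering promise (its pairwise/SIMD blocking may vary),
# so its result here depends on how the reduction is grouped.
sum(xs)
```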

So for low-level calculation with a very limited set of primitive operations, IEEE mode should provide reproducible results. For more elaborate calculations involving more complicated functions or array-level operations, it likely will not. For general scientific computing, you probably don’t care. If you’re trying to write a precision implementation of (sin(x)-x)/x^3, you probably do care.
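To see why that last example is delicate: the naive formula suffers catastrophic cancellation for small x, where the true limit is -1/6 from the Taylor expansion sin(x) = x - x^3/6 + x^5/120 - …. A minimal sketch (assuming Float64 and a correctly rounded sin for tiny arguments):

```julia
naive(x) = (sin(x) - x) / x^3

# For |x| this small, sin(x) rounds to exactly x in Float64,
# so the numerator cancels to 0.0 and all accuracy is lost.
naive(1e-9)   # 0.0, instead of roughly -1/6

# Rearranged via the leading Taylor terms, with no cancellation:
series(x) = -1/6 + x^2/120
series(1e-9)  # ≈ -0.16666666666666666
```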

The default behavior is to use IEEE754-compliant math. You shouldn’t need --math-mode=ieee unless you’ve used those annotations but want to disable them because you suspect they’re causing trouble. @fastmath can introduce all sorts of insidious corner-case bugs, so I’ll generally recommend against it. Most of the speedup @fastmath can deliver can usually be achieved by manually reorganizing your expression (although sometimes this is tedious). @simd is safer because it has a narrower use case - mostly to permit a reduction (like you’d see inside a sum loop) to be reordered.
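A minimal sketch of that @simd use case: annotating a reduction loop so the compiler is licensed to reorder (and thus vectorize) the accumulation. The function name is just for illustration.

```julia
function simdsum(xs::Vector{Float64})
    s = 0.0
    @simd for i in eachindex(xs)
        @inbounds s += xs[i]  # @simd permits reassociating this reduction
    end
    return s
end

# With exactly representable integer values, every summation order is exact,
# so the reordering is harmless here:
simdsum(collect(1.0:100.0))  # 5050.0
```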