Floating point optimizations

IEEE 754 compliance ensures that primitives such as +, -, *, /, sqrt, rem, and fma return correctly rounded, reproducible results and aren't transformed or reassociated (for example, a/b becoming a*(1/b), (a+b)-c becoming a+(b-c), or a*b+c becoming fma(a,b,c)) without an explicit annotation like @fastmath or @simd, or the use of muladd. As others have said, --math-mode=ieee at the command line disables these annotations globally (except possibly muladd; I'm not sure about that one).
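To see concretely why those rewrites aren't value-preserving, here's a quick sketch; the specific values are just ones I picked to expose the rounding differences:

```julia
a, b, c = 1.0, 1.0e-16, 1.0

# Reassociation changes the result: b is absorbed when added to a first,
# because 1.0e-16 is below half an ulp of 1.0.
(a + b) - c   # 0.0
a + (b - c)   # 1.1102230246251565e-16

# Contracting a*b + c into a fused multiply-add also changes the result:
# 0.1 is not exactly representable, and fma keeps the full product
# before the single final rounding.
0.1 * 10.0 - 1.0        # 0.0
fma(0.1, 10.0, -1.0)    # 5.551115123125783e-17
```

So an annotation that licenses either rewrite is licensing a (usually tiny) change in the answer, not just a faster path to the same one.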

IEEE mode does not guarantee that functions like sum won't use input- or architecture-dependent association orders (although foldl(+, itr) is a consistent, if slower, alternative), or that non-primitive functions like exp, sin, or ^ will return identical results across versions and platforms.
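To sketch that distinction: foldl pins down strict left-to-right association, while sum is free to use pairwise or SIMD-reordered accumulation, so the two can disagree on ill-conditioned inputs. The values below are just chosen to make the association order visible:

```julia
xs = [1.0, 1.0e100, 1.0, -1.0e100]

# foldl guarantees ((x1 + x2) + x3) + x4; here 1.0e100 + 1.0 rounds
# back to 1.0e100, so everything but the last term is absorbed.
foldl(+, xs)   # 0.0

# sum makes no such ordering promise; on larger arrays it uses a
# pairwise scheme whose grouping can depend on length and architecture,
# so its result may differ from foldl's on inputs like these.
sum(xs)
```

The foldl result is reproducible by construction; the sum result is merely whatever the current reduction strategy produces.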

So for low-level calculation with a very limited set of primitive operations, IEEE mode should provide reproducible results. For more elaborate calculations involving more complicated functions or array-level operations, it will likely not. For general scientific computing, you probably don’t care. If you’re trying to write a precision implementation of (sin(x)-x)/x^3, you probably do care.
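As a sketch of the kind of case where you do care: evaluating (sin(x)-x)/x^3 naively for small x cancels almost all significant digits, while a short truncated Taylor expansion (my addition here, not from the post above) keeps full precision:

```julia
# Naive form: for small x, sin(x) ≈ x, so the subtraction cancels
# nearly every digit and amplifies the rounding error of sin(x).
naive(x) = (sin(x) - x) / x^3

# sin(x) - x = -x^3/6 + x^5/120 - ..., so for tiny x a two-term
# series evaluates the same quantity without any cancellation.
series(x) = -1/6 + x^2/120

x = 1.0e-4
naive(x)    # close to -1/6, but only ~7 digits are trustworthy
series(x)   # accurate to full double precision at this x
```

This is exactly the situation where a stray reassociation or a different sin implementation changes your answer in ways you can measure.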

The default behavior is to use IEEE 754 compliant math. You shouldn't need --math-mode=ieee unless you've used annotations but want to disable them because you're worried they're causing trouble. @fastmath can introduce all sorts of insidious corner-case bugs, so I generally recommend against it. Most of the speedup @fastmath delivers can usually be obtained by manually reorganizing your expression (although sometimes this is tedious). @simd is safer because it has a narrower use case: mostly to permit a reduction (like you'd see inside a sum loop) to be reordered.
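A minimal example of that @simd use case, a reduction loop where you explicitly permit the accumulator to be reordered (simdsum is just an illustrative name):

```julia
function simdsum(xs::AbstractVector{Float64})
    s = 0.0
    # @simd tells the compiler the additions into s may be reassociated
    # (and hence vectorized); without the annotation, IEEE semantics
    # force a strict sequential accumulation order.
    @simd for i in eachindex(xs)
        @inbounds s += xs[i]
    end
    return s
end

simdsum(collect(1.0:100.0))   # 5050.0
```

Note the result here happens to be exact regardless of order, since small integers sum exactly in Float64; with general inputs, @simd means accepting whatever association the vectorizer picks.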
