What's going on with exp() and --math-mode=fast?

No, the fact is that the compiler changes a global CPU setting:
https://twitter.com/vchuravy/status/1216788090079076354
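
For a concrete feel for what a CPU-global floating-point flag does, Base’s set_zero_subnormals (which on x86 toggles the FTZ/DAZ bits, for the calling thread) shows the effect; a small sketch:

x = 1e-310                  # a subnormal Float64
set_zero_subnormals(true)   # flush-to-zero / denormals-are-zero on
x * 1.0                     # 0.0: the subnormal input is treated as zero
set_zero_subnormals(false)  # restore IEEE gradual underflow
x * 1.0                     # 1.0e-310 again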

I’m aware - I’m arguing that the compiler should insert code that, on returning from those library functions, switches the flag back to whatever it was when the function was called :slight_smile: The flag being CPU-global doesn’t help things, of course, but I’d rather not discuss in this thread how modern CPUs aren’t/shouldn’t behave like a PDP-11 on steroids :smiley:

I’ve submitted an issue with a proposed (naive) solution of forcing the subtraction with llvmcall:
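
A sketch of the idea (a hypothetical helper, not the code from the issue): emit the subtraction through llvmcall as a bare fsub, with no fast-math flags attached, so the optimizer cannot cancel it against a matching addition:

# No `fast` flags on this fsub, so later passes must keep it.
sub_ieee(x::Float64, y::Float64) = Base.llvmcall("""
    %3 = fsub double %0, %1
    ret double %3""", Float64, Tuple{Float64,Float64}, x, y)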

I do want to point out that if your code wasn’t written with any particular evaluation order in mind, @fastmath will often be more accurate; e.g., it will use fma instead of * followed by + on computers with FMA support, which incurs fewer rounding steps.
Or, using the problematic example here, 1.0 * 369.3299304675746 + 6.755399441055744e15 - 6.755399441055744e15 == 369.3299304675746 if rounding weren’t a problem. So in some sense, the fastmath version – by returning 369.3299304675746 instead of 369.0 – is more accurate here.
However, 369.0 was the intended result.
The exp code was deliberately written with floating point behavior and rounding in mind. The IEEE version does precisely what the author intended.
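
For the curious, a minimal sketch of that trick (the constant is 1.5 * 2^52, so x + MAGIC lands in a range where the spacing between floats is exactly 1.0):

# Adding MAGIC rounds the fraction bits of x away; subtracting it back
# yields x rounded to the nearest integer. Fastmath may cancel the pair.
const MAGIC = 6.755399441055744e15   # 1.5 * 2^52

round_trick(x) = (x + MAGIC) - MAGIC

round_trick(369.3299304675746)   # 369.0 under IEEE semantics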

So I would suggest that if you didn’t have specific intentions about order or rounding behavior – e.g. the order you specified was picked from the many possibilities merely for convenience – then switching from one arbitrary order to another specified by @fastmath will increase accuracy more often than not.

But when these were chosen deliberately, i.e. not arbitrarily, results can be catastrophic for accuracy. Starting Julia with --math-mode=fast:

julia> x = randn(10)'
1×10 adjoint(::Vector{Float64}) with eltype Float64:
 0.411952  -1.20621  -0.780723  0.69826  -1.60801  -1.88011  0.363784  0.663141  -0.874339  0.613377

julia> sinpi.(x)
1×10 Matrix{Float64}:
 0.0  -0.0  -0.0  0.0  -0.0  -0.0  0.0  0.0  -0.0  0.0

julia> cospi.(x)
1×10 Matrix{Float64}:
 1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0

I think it’d be nice to use fastmath in some more places, e.g. in (add/mul)_fast(::ForwardDiff.Dual, ...) methods, to optionally avoid some of the multiply-by-zero and add-zero operations from ForwardDiff, which require some of the fast flags to actually eliminate.
But it shouldn’t be applied to code written by others who may have implemented algorithms requiring specific sequences of operations.
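
To illustrate the ForwardDiff point with a toy (not ForwardDiff’s actual code): IEEE semantics forbid folding 0.0 * x to 0.0, since x could be Inf, NaN, or negative, so the zero partials survive unless fast flags license the folding:

f_ieee(x) = 0.0 * x + 1.0
f_fast(x) = @fastmath 0.0 * x + 1.0

f_ieee(Inf)   # NaN: 0.0 * Inf is NaN under IEEE rules
f_fast(Inf)   # typically 1.0 once nnan/ninf let LLVM drop the zero term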

Global fastmath is pretty bad IMO, which is part of the reason I’m somewhat encouraging applying it (or the safer alternative, @muladd) more locally: get rid of the incentive to use it globally by having the places that can use it safely already benefit from fma instructions.
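
For example, with MuladdMacro.jl’s @muladd:

using MuladdMacro

# Rewrites a*x + y into muladd(a, x, y): an fma is permitted but not
# required, and no other fastmath transform (e.g. reassociation) applies.
axpy(a, x, y) = @muladd a * x + y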

EDIT: I think @muladd and @simd should be preferred over @fastmath, as the two of them cover the most important optimizations enabled by @fastmath with fewer of the safety concerns (e.g., around the handling of NaNs).
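
A local @simd sketch for contrast; the reordering license applies to this one loop only:

# @simd lets this reduction be reassociated (and thus vectorized)
# without touching floating-point semantics anywhere else.
function simdsum(v::AbstractVector{Float64})
    s = 0.0
    @inbounds @simd for i in eachindex(v)
        s += v[i]
    end
    return s
end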


fix `exp` in `--math-mode=fast` by oscardssmith · Pull Request #41600 · JuliaLang/julia · GitHub should fix it.


Disabling the (global) --math-mode=fast has been approved: https://github.com/JuliaLang/julia/pull/41607#issuecomment-897887059

and I have the PR for it (that one is closed but still the valid one; it only needs to be merged, and that’s taking some time, maybe because it’s the closed one?):

https://github.com/JuliaLang/julia/pull/41638

Hi, I don’t know if this is related or already fixed. I tracked an issue (https://github.com/JuliaRobotics/IncrementalInference.jl/issues/1382) in Julia 1.7-rc1 down to a function that uses @fastmath in https://github.com/JuliaRobotics/KernelDensityEstimate.jl .

For Julia 1.7.0-rc1:

julia> versioninfo()
Julia Version 1.7.0-rc1
Commit 9eade6195e (2021-09-12 06:45 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i5-7300HQ CPU @ 2.50GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, skylake)
Environment:
  JULIA_EDITOR = code
  JULIA_NUM_THREADS = 4

julia> foo(x) = @fastmath exp(-0.5*x)
foo (generic function with 1 method)

julia> foo(2835.0)
-7.888656068042933

julia> exp(-0.5*2835.0)
0.0

For Julia 1.6.3:

julia> versioninfo()
Julia Version 1.6.3
Commit ae8452a9e0 (2021-09-23 17:34 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i5-7300HQ CPU @ 2.50GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-11.0.1 (ORCJIT, skylake)

julia> foo(x) = @fastmath exp(-0.5*x)
foo (generic function with 1 method)

julia> foo(2835.0)
0.0

julia> exp(-0.5*2835.0)
0.0

I get the same issue with Julia 1.7-rc1, while 1.6.3 works fine.

This is my fault. I had a PR that fixed it, but I forgot about it. Will remake it today and get it merged before 1.7.0.


Thanks, the tests are now passing on the Julia nightly build.
