Surprising @fastmath behavior

Here’s something that caught me off guard. The example is reduced to trivial code, but I ran into this in a much more complex context.

using InteractiveUtils  # for code_native (already loaded in the REPL)

function foo(a, b, c)
    @fastmath x = a * b + c
    isnan(x) ? 0.0 : x
end
code_native(foo, (Float64, Float64, Float64); debuginfo = :none, syntax = :intel)

Outputs:

	vfmadd213sd	xmm0, xmm1, xmm2 # xmm0 = (xmm1 * xmm0) + xmm2
	ret

For reference, without @fastmath:

	vmulsd	xmm0, xmm0, xmm1
	vaddsd	xmm0, xmm0, xmm2
	vcmpordsd	xmm1, xmm0, xmm0
	vandpd	xmm0, xmm1, xmm0
	ret

So @fastmath doesn’t just affect the computation of x (enabling FMA in this case); it also attaches metadata to x saying its value can never be NaN, which lets the compiler delete the isnan check entirely. I always assumed @fastmath would apply to its expression only, not to subsequent assumptions the compiler could make. Is this what’s expected?
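One way to see where the assumption lives (a rough sketch; the exact IR varies by Julia/LLVM version) is to look at the LLVM IR rather than the native code:

code_llvm(foo, (Float64, Float64, Float64); debuginfo = :none)

The arithmetic should come out carrying fast flags (which imply nnan), and the isnan branch should already be gone from the optimized IR.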

A few notes

  • In the real use case where this came up, the computation and the NaN check are two separate functions that happen to be inlined into a parent scope (see the sketch after this list). For it to fail in this manner, the inliner has to decide to include both parts, which makes it a fickle bug to discover.
  • Yes, I know @fastmath comes with big warning labels and all bets are off. I just want to know if this is intended in the design of it.
  • Yes, I know in this case I can use muladd(); it’s a reduced example.
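For concreteness, here’s a minimal sketch of that two-function shape (compute and replace_nan are hypothetical stand-ins for the real code):

@inline compute(a, b, c) = @fastmath a * b + c  # the @fastmath lives here
@inline replace_nan(x) = isnan(x) ? 0.0 : x     # the NaN check lives elsewhere

bar(a, b, c) = replace_nan(compute(a, b, c))    # once both inline here, the check can vanish

Whether the isnan survives depends entirely on what the inliner decides, which is what makes it fickle.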

According to the docs
https://llvm.org/docs/LangRef.html#fast-math-flags
@fastmath enables the nnan flag, which allows optimizations to assume the arguments and result are not NaN.

With this in mind I would say that what you encountered is to be expected.

If you use muladd you get the correct behaviour: muladd permits FMA contraction (via the llvm.fmuladd intrinsic) without attaching the nnan flag, so the NaN check survives.

function foo(a, b, c)
    x = muladd(a,b,c)
    return isnan(x) ? 0.0 : x
end

code_native(foo, (Float64, Float64, Float64); debuginfo = :none, syntax = :intel)

outputs

vfmadd213sd	xmm0, xmm1, xmm2 # xmm0 = (xmm1 * xmm0) + xmm2
vcmpordsd	xmm1, xmm0, xmm0
vandpd	xmm0, xmm1, xmm0
ret

I’m aware it enables nnan for the expression, and in many cases that’s what I want (e.g., for min/max), and yes, I know I can use muladd to achieve the specific effect in this specific code. My only question is whether @fastmath is intended to attach assumptions to the resulting variables and to subsequent computation outside the scope of its expression.
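For reference, the min/max case I mean is something like this sketch (clamp01 is just an illustrative name). Base’s min/max propagate NaN and order signed zeros, which costs extra compares and blends; with nnan the compiler can typically emit the bare instructions (vminsd/vmaxsd on x86):

clamp01(x) = @fastmath min(max(x, 0.0), 1.0)
code_native(clamp01, (Float64,); debuginfo = :none, syntax = :intel)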

– EDIT –
Reading the LLVM docs more closely: with nnan, LLVM is free to assume the arguments and result are not NaN, and to produce a poison value when they are. So if Julia’s @fastmath is a direct translation to the LLVM flag, this behavior makes sense. Thanks for pointing out the link.
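That translation is easy to confirm, since @fastmath is a purely syntactic rewrite to the Base.FastMath intrinsics, which then lower to LLVM ops carrying the fast flags. For example,

@macroexpand @fastmath x = a * b + c

returns (roughly):

:(x = Base.FastMath.add_fast(Base.FastMath.mul_fast(a, b), c))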

This was the point I was trying to make.