As I understand it, @fastmath walks whatever expression you give it and swaps supported operations for fast-but-less-accurate versions. For instance, rather than the standard exp function you get a faster, less accurate exp substituted into the code.
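If that understanding is right, the substitution should be visible with @macroexpand (a quick probe, not a full account of the semantics -- as I understand it @fastmath also relaxes IEEE guarantees such as reassociation, not just function substitution):

```julia
# Probe what @fastmath rewrites an expression into. It appears to be a
# syntactic transform: calls like exp, sin and + are redirected to the
# Base.FastMath *_fast variants, which may use faster, less accurate kernels.
expansion = @macroexpand @fastmath exp(x) + sin(x)
println(expansion)  # the expansion references Base.FastMath functions
```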

The speed benefits seem huge. In the trivial test case below I get about a 5-10x speedup (while the @simd, @. and @inbounds macros did not give any substantial speedup here), with no substantial accuracy loss. It seems too good to be true though. Has anyone seen large error accumulation from @fastmath in large codebases? When the performance tips say it may change numerical results, does that just mean it will fail regression tests with tight tolerances, or can the errors be material? I guess materiality depends on what you are calculating, but for most cases I'd guess people are happy with 5 significant figures (and are calculating at a scale much larger than machine epsilon).

```
using Random  # not strictly needed: rand is available from Base

function do_complex_calculations(a::AbstractArray, b::AbstractArray, c::AbstractArray)
    len = length(a)
    result = zeros(len)
    for i in 1:len
        result[i] = (a[i]^7 + a[i]^6 - 100.3*b[i]) / sqrt(tan(a[i]) + sin(a[i]) + c[i]^2 + 104.32)
    end
    return result
end

function do_complex_calculations_fastmath(a::AbstractArray, b::AbstractArray, c::AbstractArray)
    len = length(a)
    result = zeros(len)
    for i in 1:len
        @fastmath result[i] = (a[i]^7 + a[i]^6 - 100.3*b[i]) / sqrt(tan(a[i]) + sin(a[i]) + c[i]^2 + 104.32)
    end
    return result
end

rows = 10_000_000
a = rand(rows)
b = rand(rows)
c = rand(rows)
@time results1 = do_complex_calculations(a, b, c)
@time results2 = do_complex_calculations_fastmath(a, b, c)
sum(abs.(results1 .- results2))
# 2.823381455333576e-9
```
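On the "5 significant figures" point: a summed absolute difference hides per-element behaviour, so the check I would actually run is the maximum element-wise relative error, translated into matching significant digits. A sketch (rel_err, f and g are hypothetical helper names I made up for this test, not from any package):

```julia
# Element-wise relative error; eps() guards the case where both values are zero.
rel_err(x, y) = abs(x - y) / max(abs(x), abs(y), eps())

# Small stand-ins for the functions above, with and without @fastmath.
f(a) = (a^7 + a^6) / sqrt(tan(a) + sin(a) + 104.32)
g(a) = @fastmath (a^7 + a^6) / sqrt(tan(a) + sin(a) + 104.32)

xs = rand(10_000)
max_rel = maximum(rel_err.(f.(xs), g.(xs)))

# Worst-case number of significant digits on which the two versions agree.
digits_agreeing = -log10(max(max_rel, eps()))
```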