Apart from a little inaccuracy, it seems like “2.718281828459045 ^” is much faster than “exp()”. Similarly, “2.0 ^” is faster than “exp2()”; exp() in particular is slow.
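For reference, the sort of timing I mean looks roughly like this (a sketch using BenchmarkTools over a random vector of my own choosing; the original timing numbers are not reproduced here):

```julia
using BenchmarkTools

x = rand(1000)

@btime exp.($x);                    # library exp
@btime 2.718281828459045 .^ $x;     # e as a literal, raised with ^
@btime exp2.($x);                   # library exp2
@btime 2.0 .^ $x;                   # 2.0 as a literal, raised with ^
```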
Sort of “related” to this (and it’s Friday, so cut me some slack).
Computing log10(x) as log(x)*log10(exp(1)), with log10(exp(1)) replaced by its numerical value, is faster than log10(x) according to the benchmark tools (if I’m using them properly?). Obviously it doesn’t produce exactly the same value, but sometimes speed trumps accuracy… Julia 1.0.0/Windows 7.
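Roughly the kind of comparison I mean (a sketch; I’ve called the approximate version log10_approx here, and the constant is log10(exp(1)) written out as a Float64 literal):

```julia
using BenchmarkTools

# approximate log10 via the natural log and a precomputed constant;
# 0.4342944819032518 is the Float64 value of log10(exp(1))
log10_approx(x) = log(x) * 0.4342944819032518

x = rand(1000) .+ 1.0

@btime log10.($x);
@btime log10_approx.($x);
```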
Yes, this is something that is useful to keep in mind. Generally, if a function is called something like log10 (or exp or …) it will be implemented to be as fast as possible, but within the constraint of staying within 1 ulp of the “true solution”. That is, not quite correctly rounded, but still quite close. I’m not sure what the worst-case error is for your f1, but I’m certain that it is not 1 ulp.
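To get a rough feel for the error of such an approximation, one could compare against a BigFloat reference, e.g. (a crude check measured in multiples of eps at the exact value, not a formal ulp analysis; the constant is the same log10(exp(1)) literal as above):

```julia
# crude error check for log(x) * 0.4342944819032518 against a BigFloat
# reference, measured in multiples of eps at the exact value
function max_err_in_eps(n = 10^6)
    worst = 0.0
    for _ in 1:n
        x = 1.0 + rand() * 1e6
        approx = log(x) * 0.4342944819032518
        exact  = Float64(log10(big(x)))
        worst  = max(worst, abs(approx - exact) / eps(exact))
    end
    return worst
end
```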
That said, we do have a @fastmath macro, and there’s certainly more we could do with it to actually make it choose faster, less accurate versions of all the elementary functions.
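For what it’s worth, the macro is easy to try on an expression (a small illustration of my own; whether it actually picks a faster, less accurate version depends on the function and the Julia version):

```julia
using BenchmarkTools

# the same expression with and without @fastmath; the macro rewrites
# the calls to their Base.FastMath counterparts
g(x)      = exp(x) + log(x)
g_fast(x) = @fastmath exp(x) + log(x)

v = rand(1000) .+ 0.5

@btime g.($v);
@btime g_fast.($v);
```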
I’m using JuliaPro 0.6.3 … and somehow there is no help on @fastmath ???
```
help?> @fastmath
No documentation found.
Base.FastMath.@fastmath is a macro.
# 1 method for macro "@fastmath":
@fastmath(expr::ANY) in Base.FastMath at fastmath.jl:127
```
So, starting julia --math-mode=fast would put all maths in fastmath mode?