Of the two versions in the GitHub repo, I have only implemented the “faster” version, which is very crude and definitely not accurate to 7 digits.
I doubt the “fast” version is that accurate either. In the test folder of the repo, the power function is tested against random values with a target of 1e-4 relative error.
The problem with these approximations is not that they’re faster than Base; that we already know. The Base standard library demands that these functions be accurate to within 1 ulp for all possible input values. Guaranteeing that this holds AND making these functions perform as fast as possible is an incredibly hard thing to do. The proposed approximations are completely valid as long as you are aware that they are not accurate to 1 ulp over the full float range.
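To make that trade-off concrete, here is a small sketch of my own (the truncated Taylor polynomial and the test range are illustrative assumptions, not the repo’s actual code): a crude approximation can meet a 1e-4 relative-error target on a narrow input range while being many ulps away from Base, and failing badly outside that range.

```julia
# Deliberately crude stand-in for a "fast" exp: a degree-7 Taylor polynomial.
# (Hypothetical illustration -- not the code from the repo under discussion.)
crude_exp(x) = sum(x^k / factorial(k) for k in 0:7)

relerr(approx, exact) = abs(approx - exact) / abs(exact)

# Within |x| <= 1 it meets a 1e-4 relative-error target...
maxerr = maximum(relerr(crude_exp(x), exp(x)) for x in range(-1, 1; length=1001))
println(maxerr)  # well under 1e-4, but still many ulps away from Base's exp

# ...yet outside that range the error blows up: there is no 1-ulp guarantee.
println(relerr(crude_exp(3.0), exp(3.0)))  # around 1e-2
```

This is exactly the kind of function that passes a 1e-4 random-value test on a restricted domain while being unusable as a drop-in replacement for Base.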
When you use `a^b` it will be slower than `exp(b)`, because `a^b` has to make those guarantees for ALL `a` and `b`, whereas for `exp(b)` we can specialize the algorithm for the fact that the base `a` is now a constant (`e`).
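The same specialization works for any fixed positive base, not just `e`. As a sketch (the constant 3.7 and the name `pow_fixed_base` are my own illustration): when the base is a known positive constant, `a^b` reduces to a single `exp` call with `log(a)` hoisted out, a shortcut a general-purpose `^` cannot take.

```julia
# When the base is a fixed positive constant, a^b == exp(b * log(a)), and
# log(a) can be precomputed once. A general `^` must instead handle every
# possible base (negative, zero, subnormal, ...) and still stay within 1 ulp.
const LOG_A = log(3.7)              # hypothetical fixed base
pow_fixed_base(b) = exp(b * LOG_A)  # one exp call, one multiply
```

For example, `pow_fixed_base(2.5)` agrees with `3.7^2.5` to roughly machine precision, but the per-call work is just a multiply plus `exp`.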
If you are writing a general library where you can’t guarantee that inputs lie within a specified range, or you don’t understand the magnitude of the errors introduced by these crude approximations, it is generally recommended to stick to the Base library’s elementary functions to avoid unforeseen issues from user inputs.
Interesting. I got similar patterns on two Linux machines:

```
@btime directexp(a, b, c)
  63.230 μs (0 allocations: 0 bytes)
@btime cobexp(a, b, c)
  27.548 μs (0 allocations: 0 bytes)
```
However, if I use `@fastmath` for the direct version:

```
@btime fastmath_directexp(a, b, c)
  21.457 μs (0 allocations: 0 bytes)
```
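For readers following along, here is my guess at definitions matching those names (these are assumptions reconstructed from the names, not necessarily the code benchmarked above): `directexp` applies Base `^` elementwise, `cobexp` uses the change-of-base identity `a^b = exp(b*log(a))`, and the `@fastmath` variant relaxes the accuracy guarantees.

```julia
# Hypothetical reconstructions of the benchmarked functions -- assumptions,
# not the thread's actual code. All write elementwise results into `c`.
function directexp(a, b, c)
    @inbounds for i in eachindex(a, b, c)
        c[i] = a[i]^b[i]              # Base `^`: 1-ulp guarantee, slower
    end
    return c
end

function cobexp(a, b, c)              # "change of base": a^b = exp(b*log(a))
    @inbounds for i in eachindex(a, b, c)
        c[i] = exp(b[i] * log(a[i]))  # valid only for a[i] > 0
    end
    return c
end

function fastmath_directexp(a, b, c)
    @inbounds @fastmath for i in eachindex(a, b, c)
        c[i] = a[i]^b[i]              # relaxed-accuracy pow via @fastmath
    end
    return c
end
```

With positive bases, all three agree to high relative accuracy; the timing differences come from how much of the full-range `^` guarantee each one pays for.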
My Julia is compiled from source. Version info:

```
Julia Version 1.4.2
Commit 44fa15b (2020-05-23 18:35 UTC)
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
LLVM: libLLVM-8.0.1 (ORCJIT, broadwell)
```