A faster pow(x,1/12) function available in Fast12thRoot.jl


New to Julia, so hopefully I’ve chosen the correct category for posting this.
Actuaries use x^(1/12) very often to convert annualized figures to their monthly compounded equivalents. I had an itch that had to be scratched: to see whether there was a faster calculation than the one underlying the power function. Using Remez.jl I was able to find a rational polynomial approximation to x^(1/12) - 1 on the interval [0.5, 1.0), resulting in an x^(1/12) approximation that appears to be both fast and accurate, even compared to cbrt(sqrt(sqrt(x))).
I’m an actuary rather than a numerics expert, so I’m not 100% sure that I haven’t missed something obvious.
I’d therefore appreciate a review or comments if possible.
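For context, the two obvious baselines being compared can be sketched like this (the function names here are my own, not from the package):

```julia
# Two straightforward ways to compute x^(1/12) (illustrative only):
pow12(x)  = x^(1/12)              # library pow with a Float64 exponent
nest12(x) = cbrt(sqrt(sqrt(x)))   # since x^(1/12) == ((x^(1/2))^(1/2))^(1/3)

# Absolute error versus a BigFloat reference, as used elsewhere in this thread:
abserr(f, x) = Float64(abs(big(f(x)) - big(x)^(one(BigFloat) / big(12))))
```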



This is another rational polynomial approximation to (x^(1//12) - 1) for x in [0.5, 1.0).

ratpoly_numerator_coeffs =
	[ 0.1407373980457154e-3,
	  0.10909742736020892e-1,
	  0.14823153004176548,
	  0.6117674724014694,
	  0.9022845211608465,
	  0.4782192944649025,
	  0.7872193791007243e-1,
	  0.24598051459861557e-2 ]

ratpoly_denominator_coeffs =
	[ 0.22355385157150939e-3,
	  0.14313304914450771e-1,
	  0.17395352028835623,
	  0.656615590039731,
	  0.8906672552104548,
	  0.43174893535587344,
	  0.6357219738630518e-1,
	  0.16406842123662818e-2 ]
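For reference, a self-contained sketch of evaluating such a rational approximation with `evalpoly` (the full ascending-order coefficient lists are taken from the Horner forms quoted later in the thread; `approx12` is my name, not the package's):

```julia
# Ascending-order coefficients, matching the horner_* definitions in this thread.
const NUM12 = (0.0001407373980457154, 0.010909742736020892, 0.14823153004176548,
               0.6117674724014694, 0.9022845211608465, 0.4782192944649025,
               0.07872193791007243, 0.0024598051459861557)
const DEN12 = (0.00022355385157150939, 0.014313304914450771, 0.17395352028835623,
               0.656615590039731, 0.8906672552104548, 0.43174893535587344,
               0.06357219738630518, 0.0016406842123662818)

# Evaluated in full, the ratio appears to track x^(1/12) itself on [0.5, 1)
# (the x^(1/12) versus x^(1/12) - 1 distinction is discussed further down).
approx12(x) = evalpoly(x, NUM12) / evalpoly(x, DEN12)
```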


That’s awesome!

You mentioned that you amended Remez.jl: care to share what you did? I haven’t looked at it for a while, but pull requests are welcome.


Hi Jeffrey,
Thanks for taking a look.
I believe that my initial coefficients already resulted in an absolute error < 0.5 eps on [0.0, 1.0].
There is an accuracy.jl script included in the package which buckets errors against big(x)^(one(BigFloat)/big(12)) for a specified interval and number of random samples.
I’m also using a lower-degree polynomial in the denominator, so it will be more performant.
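A minimal sketch of that style of error-bucketing check (my own illustration, not the package's actual accuracy.jl script):

```julia
# Bucket the absolute errors of f against a BigFloat reference for n random
# points in [lo, hi); buckets are error magnitudes in units of eps(ref).
function bucket_errors(f, lo, hi, n)
    buckets = Dict{Int,Int}()
    for _ in 1:n
        x = lo + (hi - lo) * rand()
        ref = Float64(big(x)^(one(BigFloat) / big(12)))
        ulps = round(Int, abs(f(x) - ref) / eps(ref))
        buckets[ulps] = get(buckets, ulps, 0) + 1
    end
    return buckets
end
```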

I’m obviously missing something, but I don’t get the stated accuracy using the code snippet you included.
For example, for x = 0.9090909090909091 your code gives an absolute error of 2.2640350239970713e-16, while fast12throot gives 4.35889747467582e-18.
From what I can tell, your hornerpoly function is for x^(1/12) on [0.5, 1] rather than x^(1/12) - 1. Is that correct?
I had found that the Float64 coefficients for x^(1/12) - 1 were more accurate than those for x^(1/12).

horner_numerator(x) = (0.0001407373980457154 + (0.010909742736020892 + (0.14823153004176548 + (0.6117674724014694 + (0.9022845211608465 + (0.4782192944649025 + (0.07872193791007243 + 0.0024598051459861557 * x) * x) * x) * x) * x) * x) * x)
horner_denominator(x) = (0.00022355385157150939 + (0.014313304914450771 + (0.17395352028835623 + (0.656615590039731 + (0.8906672552104548 + (0.43174893535587344 + (0.06357219738630518 + 0.0016406842123662818 * x) * x) * x) * x) * x) * x) * x)
hornerpoly(x) = horner_numerator(x) / horner_denominator(x)

for x in range(0.5, 1.0; length=100)    # `linspace` in older Julia versions
    v0 = big(x)^(one(BigFloat)/big(12))
    v1 = fast12throot(x)
    println(x, ": ", Float64(v0 - hornerpoly(x)), ", ", Float64(v0 - v1))
end


Hi Simon,

I think the changes are too minor for a pull request - I was just hacking around in my search for an accurate approximation. But I felt it was only right to give a hat-tip to your packages.

I included code to halt after a set number of iterations, since the maximum error would otherwise bounce between two BigFloat values and never terminate.
I also included the calculation of the maximum error for the equivalent rational polynomial with the BigFloat coefficients converted to Float64.
In addition, I validated the results against another Remez program written in C.

High-level methodology:
It doesn’t seem possible to find an accurate, lowish-degree rational polynomial approximation over the whole of [1/4096, 1].
The interval [0.5, 1.0] is convenient for floating-point calculations, and x^(1/12) is well behaved on that interval.
Newton iteration with a low-degree polynomial is ruled out because accuracy is lost when calculating x^11 or x^12 unless you use double-doubles, and those would cost too much performance.
So what’s required is a polynomial that stays accurate over the interval when evaluated in Float64 arithmetic only - hence the changes to Remez.jl.
Targeting x^(1/12) directly loses accuracy once the coefficients are converted to Float64 (and rescaled), so I looked for a simple transformation that does give the required accuracy while remaining performant.
x^(1/12) - 1 seems to fit the bill, with the added benefit that reversing the transformation is exact if the calculation is re-ordered a little and the double-double components of 2^((ii-1)/12), obtained with your DoubleDouble.jl package, are used.
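The reduction-and-reversal step described above can be sketched roughly as follows (a simplified Float64-only illustration using `frexp`; the package itself uses double-double components of the 2^(k/12) constants, which this sketch omits, and the mantissa root here is a stand-in via library pow rather than the rational approximation):

```julia
# Write x = m * 2^e with m in [0.5, 1), so x^(1/12) = m^(1/12) * 2^(e/12).
# Splitting e = 12q + r keeps the 2^(r/12) factors in a small table.
const POW2_12TH = ntuple(k -> 2.0^((k - 1) / 12), 12)   # 2^(r/12) for r = 0..11

function twelfthroot_via_reduction(x::Float64)
    m, e = frexp(x)          # x == m * 2^e, with 0.5 <= m < 1
    q, r = fldmod(e, 12)     # e == 12q + r, with 0 <= r < 12
    return m^(1 / 12) * POW2_12TH[r + 1] * exp2(q)   # stand-in for the poly approx
end
```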



(Egad – I almost never copy and paste incorrectly :) I will take down the specifics.

Your work is well done. With the understanding that you may know all this already, let me share three things I noticed while playing around with minimax approximations.

It is usually helpful to run Remez routines using bounds that are fractionally wider than the target domain (though widening into negative values may be unhelpful, depending on the function). Given two approximations with the same number of coefficients, if one has a mix of large and small coefficients and the other has more consistent magnitudes, the second will usually behave no worse – and may be smoother in the low-order bits over some subsections of the domain.

Some of the approximations that had one more coefficient in the numerator than in the denominator, or vice versa, while still accurate, tended to either slightly overestimate or slightly underestimate many values. I cannot recall whether that occurred with the x^(1//12) or the x^(1//12) - 1 approximation.

Thank you for the elegant and careful work.