AbstractFloat math (Is anybody already doing this?)


I’m planning an open-source (MIT license) project to do in my spare time, but I thought it might be a good idea to check with the community first, in case anybody is already doing something similar.

My idea is to write a package AbstractFloatMath (working title) of pure-Julia routines for common mathematical functions (sin, log, sqrt, …) that will take AbstractFloat arguments, i.e. work with any user defined type that is <: AbstractFloat.

For a new floating-point type, it will be enough to implement a minimal set of functions: +, *, unary -, precision, conversion to Float64, conversion from BigFloat (which doesn’t have to be fast), and a few more. The AbstractFloatMath package will then provide mathematical functions that use only functions from the minimal set, functions that it defines itself, and some basic Float64 functions.
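As a sketch of what that minimal interface might look like (all names here are hypothetical; MyFloat is just a toy wrapper around Float64 for illustration):

```julia
# Toy example of the minimal interface a new type would implement.
struct MyFloat <: AbstractFloat
    x::Float64
end

Base.:+(a::MyFloat, b::MyFloat) = MyFloat(a.x + b.x)
Base.:*(a::MyFloat, b::MyFloat) = MyFloat(a.x * b.x)
Base.:-(a::MyFloat) = MyFloat(-a.x)
Base.precision(::Type{MyFloat}) = 53
Base.Float64(a::MyFloat) = a.x
MyFloat(b::BigFloat) = MyFloat(Float64(b))  # doesn't have to be fast
```

With those methods in place, generic routines written against AbstractFloat could in principle run on MyFloat unchanged.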

Through the magic of Julia type inference, this could provide fast math functions for any number of user-defined floating point types.


This is a nice idea, but don’t underestimate the difficulty of doing that in a performant and accurate way.

There have been several people working in this direction “just” for Float64, including one current Google Summer of Code student. Unfortunately it’s a very hard problem.


I’m aware of this. The goal is not to provide the best implementation, only a reasonably good one.
In practice I will focus on performance for my fork of DoubleDouble, but given the way Julia works it seems to make sense to write for AbstractFloat unless there’s a reason to be more specific. (There seem to be a number of arbitrary-precision types in the wild.)

Also, the idea is that new types can override some of the basic functions, and this will speed up the more complex functions that rely on them. There will be functions like *(::Val{2}, x::AbstractFloat) that are defined and used explicitly to be overloaded. (Multiplication by small integers and simple fractions can often be done in a type-specific way that is faster than general multiplication.)
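As a sketch (the function name mul and the fallback are my own hypothetical choices, standing in for the *(::Val{2}, x) idea), the hook could look like this:

```julia
# Hypothetical overloadable hook: the generic fallback converts the
# constant and uses ordinary multiplication.
mul(::Val{N}, x::AbstractFloat) where {N} = oftype(x, N) * x

# A specific type can then supply a cheaper method; here, doubling a
# Float64 by addition stands in for a type-specific shortcut.
mul(::Val{2}, x::Float64) = x + x
```

Higher-level functions would call mul(Val(2), x) and automatically pick up whichever method is most specific for the type.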


Unfortunately, it is not possible to get efficiency the way you’re describing it; at the least, making it possible would be a major mathematical achievement. Let me explain a little bit.

You’re talking about building a Math Library. Currently the most advanced math library in Julia is @musm’s SLEEF.jl

Another project in this direction is Libm.jl, which is @pkofod’s GSoC project.

Having spent a good amount of time following and helping benchmark the previous Libm.jl, turned Amal.jl, turned Sleef.jl, I can say that there are a lot of details to get right. Essentially, for every function you are building an approximation that reaches the right accuracy over the range of values you want to handle. So if you’re working with Float64, you want approximations that, for any Float64 input, are accurate to Float64. This limits both the range of values you can expect and the accuracy you need.

To make this fast, you can usually cover large ranges of values with semi-generic methods (a Taylor expansion of an appropriate degree), but you always have to work out the details. You need to specialize on values close to k*2pi to get good accuracy near the zeros of sin and cos, and you need to specialize on the subnormal numbers.

You need to do a lot of by-hand tuning if you want this to be fast and as accurate as the user would expect. This is the complete opposite of generic programming. But it makes sense: generic algorithms written with multiple dispatch are performant because the functions at the very bottom were tuned by hand, and these math functions are exactly those bottom-level functions.

These functions are all very small, and once tuned, say by using a table lookup (a precomputed table of values), you’re in a domain where you have to avoid any extra overhead. So while “using optimization to find the right number of Taylor coefficients” or “optimizing to find a good polynomial approximation” sound like great ideas, notice that you cannot do this “on the fly”: that optimization problem would take the vast majority of the calculation time. So you cannot hope to make arbitrary precision “fast” here; it’s just the nature of the problem.

However, Julia is well-suited for tuning via adding dispatches. You can write a generic implementation on AbstractFloat, and then use your tools to find the right Taylor coefficients / polynomial approximation for Float16, Float32, Float64, Float128, … and have each number type have a specialization that is fast for it, each hand-tuned and tested. Julia is perfectly suited to creating such a library, but of course, that is a lot of work.
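For example (a rough, untuned sketch; the name myexp and the term-count heuristic are made up, and it assumes |x| is small):

```julia
# Generic fallback: truncated Taylor series for exp, with the number of
# terms chosen from the type's precision (a crude heuristic, not a
# verified error bound; only reasonable for small |x|).
function myexp(x::AbstractFloat)
    n = precision(typeof(x)) ÷ 2
    s = one(x)
    t = one(x)
    for k in 1:n
        t *= x / k
        s += t
    end
    return s
end

# A hand-tuned type simply dispatches to its optimized routine:
myexp(x::Float64) = exp(x)
```

Float64 inputs hit the specialized method, while anything else falls back to the generic series.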


To be clear, I don’t intend to write functions that are even close to as fast as currently existing methods for Float32 and Float64. This is not intended to compete with existing type-specialized methods, but to supplement them. When using Newton iterations, the AbstractFloat methods will call the corresponding fast Float64 methods (e.g. from SLEEF or the standard library) in order to obtain a starting guess.
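As a sketch of that idea for sqrt (the name newton_sqrt and the iteration-count formula are my own illustration, not the planned implementation):

```julia
# Newton iteration for sqrt, seeded from the fast Float64 routine.
# Each iteration roughly doubles the number of correct bits, and the
# Float64 seed already provides about 53 of them.
function newton_sqrt(a::T) where {T<:AbstractFloat}
    x = convert(T, sqrt(Float64(a)))
    iters = max(0, ceil(Int, log2(precision(T) / 53)))
    for _ in 1:iters
        x = (x + a / x) / 2
    end
    return x
end
```

For a 256-bit BigFloat this performs three iterations; for Float64 the seed is already the answer and no iteration runs.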

I do intend to do things like “using optimization to find the right number of Taylor coefficients”, but this will not be done at run-time, and probably not even at compile-time. There will be pre-computed tables that give the number of coefficients as a function of precision(x), which should be a compile-time constant. (The precision will not be allowed to change at run-time the way BigFloat’s can; it should be a type parameter or a constant.)
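A sketch of such a table, say for exp on |x| ≤ 1/2 (the entries below are illustrative placeholders, not tuned bounds):

```julia
# Hypothetical precomputed table: Taylor terms of exp needed on |x| ≤ 1/2,
# keyed by precision in bits. Entries are illustrative, not verified bounds.
const EXP_TERMS = Dict(64 => 18, 128 => 30, 256 => 48, 512 => 84)

# precision(x) is a compile-time constant for suitable types, so this
# lookup can constant-fold away; pick the smallest tabulated precision
# that covers the requested one.
nterms(p::Int) = EXP_TERMS[minimum(k for k in keys(EXP_TERMS) if k >= p)]
```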

The Taylor coefficients themselves will be computed at compile-time using BigFloat.
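For instance (a sketch; the names and the choice of exp are mine), coefficients for a Float64-targeted polynomial could be generated once at definition time:

```julia
# Taylor coefficients of exp computed once with extra-precision BigFloat
# at definition time, then rounded to the target type (Float64 here).
const EXP_COEFFS = setprecision(BigFloat, 512) do
    [Float64(inv(factorial(big(k)))) for k in 0:20]
end

# Horner evaluation of the precomputed polynomial.
exp_poly(x::Float64) = foldr((c, s) -> muladd(x, s, c), EXP_COEFFS)
```

The BigFloat work happens while the constant is being defined, so nothing arbitrary-precision remains on the run-time path.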

There will be no guarantee of accuracy when the actual precision of x differs from precision(x). In other words, subnormals will not be considered.