ANN: HigherPrecision

I’m proud to announce HigherPrecision.jl, a package intended as a drop-in replacement for Float64 when you need higher precision but BigFloat is too heavyweight. @ChrisRackauckas already gave it a successful spin in DifferentialEquations.jl.
The way it works is that it emulates a 128-bit float (and, not yet implemented, a 256-bit float) as an unevaluated sum of 2 (resp. 4) Float64 values. This strategy is known as double-double (resp. quad-double) precision.
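The core idea can be illustrated with Knuth's TwoSum, one of the error-free transformations such a type is built on (a minimal sketch, not the package's actual code):

```julia
# Error-free transformation (Knuth's TwoSum): for any two Float64 values,
# hi + lo equals a + b exactly, where hi = fl(a + b) and lo is the
# rounding error that plain Float64 addition discards.
function two_sum(a::Float64, b::Float64)
    hi = a + b
    v  = hi - a
    lo = (a - (hi - v)) + (b - v)
    return hi, lo
end

hi, lo = two_sum(1.0, 1e-30)
# hi == 1.0; lo == 1e-30 recovers the part that ordinary addition lost
```

A double-double number is then just such a (hi, lo) pair kept unevaluated through every operation.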

The heavy lifting was done by the QD C++ library; I merely ported it to Julia (and implemented the additional Julia-specific functions).

Besides the implementation of quad-double precision (for which the QD library gives a good blueprint), there are still a lot of tricky open problems (especially the performance of the transcendental functions). Needless to say, contributions are very welcome :slight_smile:

I hope the library is useful for the broader community.

10 Likes

How does it compare to https://github.com/JuliaMath/DoubleDouble.jl?

2 Likes

DoubleDouble doesn’t have any overloads except for the super basics like +, and so I always had a hard time finding generic algorithms it actually worked with.

This still needs performance updates to some functions, but this number type can be thrown into many package codebases and just work.

Couldn’t you have submitted a PR to DoubleDouble to add the missing/faster methods rather than creating a new package from scratch?

It seems like a shame to have two packages trying to do basically the same thing. It would be great if these could still be merged.

4 Likes

I agree with @stevengj here. If the purpose of the package is to introduce a floating-point type, it would be great to have a single well-tested implementation rather than multiple implementations to choose from. DoubleDouble.jl is under the JuliaMath umbrella, so as a non-expert-in-the-matter user I give it extra credit by default; does that make sense?

However, if the authors had good reasons to start from scratch, for instance self-education, or if they found fundamental design flaws in DoubleDouble.jl, that is another story…

HigherPrecision.jl is built off of QD, which may have made the development easier, and even Simon has said DoubleDouble.jl is effectively abandonware, so I don’t see an issue with starting from scratch.

2 Likes

In this particular case, the elaboration from arithmetic to elementary functions requires certain design-level interdependencies. The older DoubleDouble package really was not a good candidate for that purpose. I have converted some of QD’s transcendentals to Julia, and they lean heavily on C++ particulars. @saschatimme has given us a reasonable place to supplement, as need and demand arise.

For anyone interested, here is Julia code that, for some of the arithmetic functions, computes Float64 pairs (highpart, lowpart) better than QD’s comparable code (with references to the papers consulted).

DoubleDouble’s type and HigherPrecision’s type are parameterized differently: DoubleDouble’s Double is parameterized on the wrapped floating-point type, whereas HigherPrecision’s DoubleFloat64 is parameterized on a compute “mode” (fast or slow). It would be perfectly sensible to merge them by adding a compute-mode parameter C to Double{T,C}. Once that is done, all of the HigherPrecision methods could be easily ported to DoubleDouble. (Some of them might initially be specific to Double{Float64}, but that’s fine.)
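A hypothetical sketch of that merged parameterization (the mode names here are made up for illustration; they are not from either package):

```julia
# Hypothetical merged design: parameterize on both the wrapped float
# type T and a compute mode C, so one type covers both packages' designs.
abstract type ComputeMode end
struct FastMode     <: ComputeMode end  # fewer renormalization steps
struct AccurateMode <: ComputeMode end  # full error-free transformations

struct Double{T<:AbstractFloat, C<:ComputeMode} <: AbstractFloat
    hi::T
    lo::T
end

d = Double{Float64, AccurateMode}(1.0, 1.0e-17)
```

Methods could then dispatch on C where the algorithms differ and ignore it where they do not.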

It really makes no sense to me to keep two packages around, one of which is “abandonware”. Of course, we could slowly deprecate the DoubleDouble package, but it seems like it would be much nicer for most users to merge them by a pull request on DoubleDouble. (A DoubleDouble PR would also be an opportunity for JuliaMath people to give specific feedback.)

We also have Base.TwicePrecision (nicer in 0.7 than it is in 0.6), although using that could conceivably be a bit fraught: it’s the engine that underlies the ability of ranges to hit their endpoints exactly, so there are a few methods that dispatch specially.

My experience in other packages leads me to think that binary dependencies are to be avoided wherever possible. For a huge C library it might be worth it because it wouldn’t be easy to replicate, but in this case we already have credible pure-Julia implementations.

1 Like

Everyone is correct. (Has the metaparameter branch been merged yet?)

To carry the gentleness of Julia forward when floating, we need to collaborate and provide a replacement for the erstwhile DoubleDouble with extended-precision elementary functions. Assuming the extended-precision floating-point trig @saschatimme uses is, at worst, accurate to within 3ulp, it is an appropriate resource. Getting those functions to stay within +/- 2ulp takes triple-precision work. Between DoubleDouble and TwicePrecision and my ErrorfreeArith we have the basics well covered and explored. We need to move from the many to the emergent one. And that should be fun. :dancer: Let’s not be bound by the specifics of the codes now resting, while also getting full advantage of them. Step 1 is to get wise about the parameterization to be favored. Step 2 is to tailor the errorfree routines to work nicely thereinto. Steps 3 and 4 give doubly maths back to the people. By the sixth step it’s golden. Panel, what say you?

2 Likes

My motivation to start this library was, foremost, that I need it for another package I am working on. I looked at DoubleDouble.jl, but wasn’t comfortable just adding everything I need, for a couple of reasons:

  1. It is simply faster if I do it on my own.
  2. It is currently not using FMA instructions, so this would be quite a disruptive change, since currently the Julia system image has to be recompiled to make use of them.
  3. The amount of code that would have needed to be reviewed would have been quite large. Since @simonbyrne noted in an issue that he does not have the capacity to work on the package (which is completely fine!), I did not expect that anybody would have enough time to properly deal with the review.
  4. The licence would need to be changed to a BSD-style licence.
  5. The goal is actually to also do quad-double precision. I assume a package named DoubleDouble.jl should not contain quad-double precision…

I feel a little bit bummed by the sentiment that if something already exists under some organization, then that is the authoritative package and there shouldn’t be similar packages moving things forward.

I am not personally attached to where the code lives or whether my package will ultimately be used. If the sole effect of this package is to move things toward a solid implementation, I am more than happy. I will gladly contribute to any community effort.

12 Likes

That’s certainly true; coordinating with other people always takes more work. But sometimes it is worth it … sometimes after the fact: you prototype something in your own package and then merge it with another package afterwards.

I think it would be good to have a package that works with FMA if Julia is compiled that way, but which also works (perhaps more slowly) with the out-of-the box Julia. Is it so hard to support both at once, with a few if statements?
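A minimal sketch of how that could look (not the package's actual code, and assuming a Dekker-style error-free product). Since Base's fma falls back to a software implementation when hardware FMA is unavailable, the same definition works on an out-of-the-box Julia, just more slowly:

```julia
# Error-free product: hi + lo == a * b exactly.
# fma(a, b, -hi) computes a*b - hi with a single rounding, which yields
# the exact rounding error of the product. If the system image was built
# without hardware FMA, Base.fma transparently uses a (slower) software
# fallback, so no explicit `if` branch is strictly required here.
function two_prod(a::Float64, b::Float64)
    hi = a * b
    lo = fma(a, b, -hi)
    return hi, lo
end

hi, lo = two_prod(1.0 + 2.0^-52, 1.0 + 2.0^-52)
# hi captures the rounded product 1 + 2^-51; lo the tiny 2^-104 residual
```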

Why? The MIT and 3-clause BSD licenses are effectively equivalent, and there is no problem with merging MIT and BSD code. (The license would then be the conjunction of the two licenses.)

Actually, because Double{T} is parameterized on T<:AbstractFloat, in principle it can express quad precision with Double{Double{Float64}}. (The code needs to be changed in a few places to support this, most notably to make AbstractDouble a subtype of AbstractFloat rather than Real.)
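For illustration, a sketch of that recursion (hypothetical definitions, not DoubleDouble.jl’s actual code):

```julia
# A parametric double type can nest: Double{Double{Float64}} carries
# four Float64 legs, which is exactly quad-double storage. This relies
# on Double itself being a subtype of AbstractFloat.
struct Double{T<:AbstractFloat} <: AbstractFloat
    hi::T
    lo::T
end

const DD = Double{Float64}
q = Double{DD}(DD(1.0, 0.0), DD(2.0^-60, 0.0))
# q.hi.hi, q.hi.lo, q.lo.hi, q.lo.lo are the four Float64 components
```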

Sometimes it is better to start over from scratch, obviously, but I think that you’ll find that there is a strong preference in most open-source communities for pooling efforts wherever it is practical to do so. I’m not trying to be negative — I think it is great that you are working on this — just saying I think it is worth making the attempt at merging here.

(The fact that @simonbyrne no longer has time to work on DoubleDouble would mean that you’d be effectively taking over that package. Taking it over and replacing its implementation with your own seems preferable to me to having it just become defunct.)

1 Like

While it is nice to minimise the number of overlapping packages, I certainly appreciate that it can be easier and less pressure to start afresh: you get a clean slate and no worries about toe stepping or backward compatibility. In fact, this has been a common theme: we’ve gone through at least 3 zlib packages, and I think RCall.jl was the 3rd attempt at an R interface.

For what it’s worth, my 2c:

  • DoubleDouble.jl was written before we had any FMA support at all (and before I had a computer which supported it). FMAs make this sort of thing much easier. The fma function does still work without rebuilding the system image; it will just be slower as it falls back on a software implementation.
  • In DoubleDouble.jl, I think making the Double{T} type parametric was a bad idea: there is not a lot of demand for types other than Float64, and it makes implementing transcendental functions more difficult.
  • I like the idea of parametrising on accuracy guarantees.
  • My only qualm is that the name “HigherPrecision.jl” is a bit vague: there are lots of ways to get higher precision (BigFloats, ArbFloats, Float128).
1 Like

It would be nice to have (at least an informal) protocol for retiring packages that are effectively abandoned. Repo status badges are useful, and so are warning messages.

GitHub recently released an archive feature for packages that are no longer maintained:

https://github.com/blog/2460-archiving-repositories

It is even more effective than status badges.

4 Likes

I agree that transcendental functions are easier to implement for a fixed precision. But nothing prevents you from defining sin etcetera only for Double{Float64}: just because you have a parameterized type doesn’t mean that every method needs to handle all possible parameters. And parameterizing allows you to share some (non-transcendental) code if you want to support e.g. Double{Double{Float64}} for quad precision.

(Parameterizing doesn’t make anything worse, and it leaves open more future flexibility.)
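That restriction pattern can be sketched as follows (toy definitions, not real package code; the sin body here is a placeholder, not a correct double-double sine):

```julia
struct Double{T<:AbstractFloat} <: AbstractFloat
    hi::T
    lo::T
end

# Generic (non-transcendental) methods can be shared across every T...
Base.:-(x::Double{T}) where {T} = Double{T}(-x.hi, -x.lo)

# ...while transcendental functions are defined only for Double{Float64}.
# Calling sin on any other parameter raises a MethodError rather than
# silently producing an inaccurate result.
Base.sin(x::Double{Float64}) = Double{Float64}(sin(x.hi), 0.0)  # placeholder body
```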

3 Likes

When developing TwicePrecision, I found it incredibly useful to be able to parametrize on Float16, because I could exhaustively test every single possible Float16 value (even all pairs, for binary operations, if I was patient enough). This genuinely caught some issues, especially for subnormals. However, actually using it for Float16 would be insane; TwicePrecision{Float16} is nowhere near as useful as Float32 is.
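A sketch of that exhaustive-testing approach (assuming a generic TwoSum-style helper; this is not the actual TwicePrecision test suite):

```julia
# Knuth's TwoSum, written generically so it also works for Float16.
function two_sum(a::T, b::T) where {T<:AbstractFloat}
    hi = a + b
    v  = hi - a
    lo = (a - (hi - v)) + (b - v)
    return hi, lo
end

# Only 2^16 bit patterns exist, so every Float16 value can be checked,
# subnormals included. (All pairs for a binary op would be 2^32 cases.)
for bits in 0x0000:0xffff
    x = reinterpret(Float16, bits)
    isfinite(x) || continue
    hi, lo = two_sum(x, Float16(1))
    # The pair must represent x + 1 exactly; Float64 is wide enough to verify.
    @assert Float64(hi) + Float64(lo) == Float64(x) + 1.0
end
```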

4 Likes

Instead of freezing repos (via loosely agreed-upon markers as you mention, or using technically enforced barriers such as the ones @juliohm mentions), it would be even better IMHO for Julia organizations to adopt an abandoned-project policy that would ensure that interested persons could start contributing to dormant packages and eventually take over maintainership if they so desire.

2 Likes

Note that since only the owner of the repo can freeze it, there is no need to rely on any “markers”; presumably the author knows when he/she no longer intends to maintain the library.

Making this information known is merely a courtesy.

The abandoned-project policy is a great idea.

A while ago I forked DoubleDouble and implemented exactly this: https://github.com/perrutquist/DoubleDoubles.jl (extended precision arithmetic for Julia).

I did this mainly as an exercise, to learn about types, conversion, promotion, etc. Recursive types turn out to be trickier than one might expect…

I more or less abandoned the project for a while because I decided to focus all my efforts on finishing my thesis. (In hindsight, this was the right decision.) I might get back to it now, because I do need high precision in one of my work projects.