ANN: BitIntegers.jl (Int256, ...) and BitFloats.jl (Float80, Float128)

I’m glad to present two new registered packages that add more “native-like” types to Julia, implemented mostly in the same way as Base’s built-in integer and floating-point types.

Both packages can lead to segfaults that I don’t understand, and they are under-tested, so they must be considered experimental; other contributors will be needed to overcome the shortcomings (read the respective READMEs for more information).
That said, they can already be useful :smiley:

BitIntegers.jl exports signed and unsigned integer types of size 256, 512 and 1024 bits, but any other size (a multiple of 8 bits) can be created easily via a macro.
The main unimplemented features are the division operations, for which the intrinsics (LLVM built-ins) don’t work, at least on my machine. These are currently done via conversion to/from BigInt, which is very slow.
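
For instance, here is a minimal sketch of defining and using a custom-width type (the @define_integers macro comes from the package README; the printed result is what I’d expect rather than verified output):

    julia> using BitIntegers

    julia> BitIntegers.@define_integers 24   # defines Int24 and UInt24

    julia> Int24(1) + Int24(2)               # arithmetic behaves like Base integers
    3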

BitFloats.jl simply wraps two floating-point types exposed by LLVM (a short usage sketch follows the list):

  • Float80 is apparently not available on all machines, but it works OK on mine. The outstanding issue is that creating arrays of them currently leads easily to segfaults (this is a bug in Julia which should go away reasonably soon).
  • Float128 is quite slow for most operations: LLVM doesn’t implement them (on my machine), so conversion to/from BigFloat is done instead.
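
A minimal usage sketch, assuming both types load on your machine (whether an operation hits an LLVM intrinsic or the BigFloat fallback depends on the platform, as noted above):

    julia> using BitFloats

    julia> x = Float80(1) / Float80(3)     # x87 extended precision, where supported

    julia> y = Float128(1) / Float128(3)   # may go through BigFloat internally

    julia> Float64(x) == Float64(y)        # both should round back to the same Float64
    true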

Overall I was amazed that these builtin-like types could be implemented in packages, with relatively few lines of code; but there is a lot of duplication with Base code, and I would be happy to contribute to a refactoring effort to reduce it.

Again, these packages are at a very experimental stage, and could be seen as only a proof of concept; in particular, it’s not clear that BitFloats.jl can ever reach maturity, but it may help in the development of other solutions.

Happy hacking!

Fantastic work! Most of what DiffEq needs is these Float80 and Float128 types, so I am happy to see work in this area.

Any recent work on Float80?

Just tried it with Julia 1.7.1. Addition, multiplication, subtraction and division work, but anything as simple as sqrt(Float80(1)) results in

    Module IR does not contain specified entry function

    Stacktrace:
      [1] sqrt(x::Float80)
        @ BitFloats ~/.julia/packages/BitFloats/qTO7E/src/BitFloats.jl:590
      [2] top-level scope
        @ In[29]:1
      [3] eval
        @ ./boot.jl:373 [inlined]
      [4] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
        @ Base ./loading.jl:1196

If you want higher-precision floats, I would recommend DoubleFloats.
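
For example, a short sketch with DoubleFloats’ Double64, a double-double type built from a pair of Float64s (which is why it stays fast):

    julia> using DoubleFloats

    julia> x = Double64(1) / 3

    julia> sqrt(x * x) ≈ x    # sqrt and the usual math functions are implemented
    true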

Thx. Are they any faster than Quadmath?

I had hoped that hardware-supported Float80 could be faster than Float128.

According to https://github.com/JuliaMath/DoubleFloats.jl, it’s very roughly 2-10x faster than Quadmath. Float80 would be faster, but I’m not sure by how much.
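
If you want numbers for your own machine, here is a hedged benchmark sketch (assuming Quadmath.jl provides the Float128 to compare against):

    using BenchmarkTools, DoubleFloats, Quadmath

    a = Double64(1) / 3    # double-double (DoubleFloats)
    b = Float128(1) / 3    # libquadmath (Quadmath)

    @btime sqrt($a) * $a + $a    # a few representative operations
    @btime sqrt($b) * $b + $b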

Thanks, this is great.

Also, exp and log for it will soonish get significantly faster (https://github.com/JuliaMath/DoubleFloats.jl/pull/136).

(So far it looks like about 1.5 times faster than Quadmath on the ODE I am trying it on.) Thanks again!
