[ANN] Arblib.jl

Arblib.jl

Hello everyone! I’m pleased to announce the release of Arblib.jl version 1.0.0! Arblib.jl is a wrapper of Arb, a library for rigorous, arbitrary-precision numerics based on ball arithmetic. Since a few months back Arb is part of FLINT; before that it was a separate library.

The goal of Arblib.jl is to supply a low level wrapper of the methods in Arb, as well as a high level interface. The low level wrapper should allow for writing methods using mutability, with performance very close to that of code written in C. The high level interface should make it easy to use in generic Julia code, similar to how BigFloat wraps the MPFR library. In addition, it should be possible to seamlessly switch between the high level interface and the low level wrapper when needed.
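To give a flavour of this switching, a value created through the high level interface can be passed directly to the in-place, low level methods. A small sketch, using only constructors and methods that also appear in the examples below:

using Arblib

x = Arb(2)          # high level constructor
Arblib.sqrt!(x, x)  # low level wrapper: in-place square root, x is mutated
y = x + 1           # back to the high level interface, as in generic Julia code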

There are already multiple other wrappers of Arb, including Nemo and ArbNumerics.jl. Compared to both of these, Arblib.jl has a stronger focus on mutability and the low level wrapper. The high level interface of Nemo is also mostly focused on the AbstractAlgebra.jl universe.

Version 0.1.0 of the package was released in October 2020, so it has been a long time in the making!

Examples

Some examples of the high level interface:

julia> using Arblib, SpecialFunctions

julia> sin(Arb(1)) + exp(Acb(5, 4))^2 # Some elementary functions
[-3204.01004684383875410045781564841032005656259070096652381237542043151155723 +/- 3.31e-72] + [21792.06557805986636823313294686313092865354316154564677314220247182417305111 +/- 4.65e-72]im

julia> besselj(Arb(π), Arb(1 // 2)) # Some special functions
[0.00175952799476796625069318818831373840939006931720588178138608727360903797650 +/- 5.61e-78]

julia> Arblib.integrate(sin, 0, 10) # Integrate sin from 0 to 10
[1.83907152907645245225886394782406483451993016513316854683595373104879258687 +/- 5.15e-75]

julia> Arblib.integrate(z -> 1/z, Acb(1, -5), Acb(1, 5)) # Integrate 1/z from 1 - 5i to 1 + 5i
[+/- 2.02e-75] + [2.74680153389003172172254385288992229730199919179940161793956671182574846633 +/- 2.83e-75]im

Compare computing \sqrt{x^2 + y^2} using mutable arithmetic with the default (non-mutating) interface.

julia> using Arblib, BenchmarkTools

julia> x = Arb(1 // 3)
[0.33333333333333333333333333333333333333333333333333333333333333333333333333333 +/- 4.78e-78]

julia> y = Arb(1 // 5)
[0.20000000000000000000000000000000000000000000000000000000000000000000000000000 +/- 3.89e-78]

julia> res = zero(x)
0

julia> f(x, y) = sqrt(x^2 + y^2)
f (generic function with 1 method)

julia> f!(res, x, y) = begin
           Arblib.sqr!(res, x)
           Arblib.fma!(res, res, y, y)
           return Arblib.sqrt!(res, res)
       end
f! (generic function with 1 method)

julia> @benchmark f($x, $y) samples=10000 evals=500
BenchmarkTools.Trial: 6390 samples with 500 evaluations.
 Range (min … max):  624.406 ns … 379.276 μs  ┊ GC (min … max):  0.00% … 83.71%
 Time  (median):     814.207 ns               ┊ GC (median):     0.00%
 Time  (mean ± σ):     1.552 μs ±  10.335 μs  ┊ GC (mean ± σ):  22.72% ±  3.46%

   ▂▆█▆▃▂
  █████████▇▅▅▅▄▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▂▂▂▂▂▂▂▂▁▂▁▂▂▂▂▁▂▁▂▁▁▂▂▁▂ ▃
  624 ns           Histogram: frequency by time         2.96 μs <

 Memory estimate: 448 bytes, allocs estimate: 8.

julia> @benchmark f!($res, $x, $y) samples=10000 evals=500
BenchmarkTools.Trial: 10000 samples with 500 evaluations.
 Range (min … max):  371.404 ns …  12.993 μs  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     462.204 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   499.765 ns ± 309.469 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

    ▇▅▅ █▄▁▂▁▁   ▁
  ▆▂███▇████████▆█▅▅▅▄▄▄▃▃▃▃▂▂▂▂▂▂▂▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ ▃
  371 ns           Histogram: frequency by time          952 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

Is this in practice not a breaking change? I.e. if you only define (real) numbers and use the basic arithmetic operators? Also if you use more, e.g. the complex type and all operations, at least those defined in standard Julia? And SpecialFunctions.jl?

I see e.g. one (seemingly obscure) exception:

I was looking at what’s actually breaking (in the underlying library):
https://flintlib.org/doc/history.html#flint-3-0

The C++ interface (flintxx) has been removed. This interface is now maintained in the separate repository GitHub - flintlib/flintxx: C++ interface to FLINT (maintenance only) (Edgar Costa).

That’s of no concern to Julia(?).

There are also minor changes in the C interface, but you’re not exposed to that if you limit yourself to the Julia wrapper, I think. Though since it’s included from FLINT_jll, I think you could actually call the C interface directly, bypassing the Julia interface (and nothing stops you from doing so).

If you had used the old C interface, then that might also be breaking, but there’s no good reason to do so?

@JeffreySarnoff
Is the new version better in some sense, e.g. faster? Should it be used instead of ArbNumerics.jl (which still uses the older Arb_jll, which should still work)?

I believe it may add some features; I’m thinking e.g. of the case where you had just used ArbFloat from it.

I suppose Arblib.jl is a slightly lighter dependency if you don’t need the add-ons. Should ArbNumerics.jl wrap your library rather than Arb_jll (or FLINT_jll)?

Do both libraries default to the same precision, which I believe may be the underlying default? Which is lower than for MPFR (but can be set to the same or higher)?

Arblib (like ArbNumerics) should be faster than Julia’s BigFloat in almost all cases, even at the same precision, certainly always faster with its default.

Though the absolute fastest (if you only need somewhat more precision than Float64) is MultiFloats.jl:

It only supports basic arithmetic (not trigonometric functions), and should always be accurate if I understood correctly, though it doesn’t propagate NaNs or Infs.


Have you seen GitHub - jump-dev/MutableArithmetics.jl: Interface for arithmetics on mutable types in Julia btw? It makes it easier to use the mutable API for BigFloats from high-level code (and also generically, i.e. the code doesn’t need to know it is working with BigFloats).


Is this in practice not a breaking change?

Do you mean compared to 0.6.2? There are only a few breaking changes in this case; the most prominent one is that we no longer overload Base.contains, Base.union and Base.intersect, see this PR. The release notes mention a few other breaking changes, but they mostly affect the lower level interface. The FLINT 3.0 change had essentially no breaking changes from the point of view of Arblib.jl!

Is the new version better in some sense, e.g. faster? Should it be used instead of ArbNumerics.jl (which still uses the older Arb_jll, which should still work)?

I would say that it is not better, or faster, than ArbNumerics.jl (except if one counts some of the performance improvements from FLINT 3.0, but I believe it would be mostly straightforward to update ArbNumerics.jl to use FLINT 3.0). Some of the key differences from my point of view are:

  1. ArbNumerics.jl makes the Arf type (called ArbFloat in ArbNumerics.jl) actually usable. Arblib.jl only wraps what is implemented in Arb, and for Arf that is very limited.
  2. The low level interface of Arblib.jl is useful if one wants to go down that path. But for a lot of code this is not needed.
  3. ArbNumerics.jl keeps the precision in the type, whereas Arblib.jl keeps it in the struct. This has different performance tradeoffs. If one mixes different precisions it is easier to make Arblib.jl type-stable, as sketched below.
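To illustrate the last point, here is a minimal sketch, assuming the prec keyword accepted by the Arblib.jl constructors and the precision accessor (expected results are given as comments):

using Arblib

x = Arb(1 // 3, prec = 128)  # precision is stored in the value itself
y = Arb(1 // 3, prec = 512)

typeof(x) == typeof(y)       # true: both are plain Arb, so mixing precisions stays type-stable
precision(x), precision(y)   # (128, 512)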

Do both libraries default to the same precision, which I believe may be the underlying default? Which is lower than for MPFR (but can be set to the same or higher)?

Arblib.jl defaults to 256 bits of precision, so same as BigFloat. The Arb library itself has no default precision.
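This can be checked directly; a small sketch (the setprecision call for changing the default is an assumption on my part, by analogy with BigFloat):

using Arblib

precision(Arb(1))        # 256, the Arblib.jl default
precision(BigFloat)      # 256, the BigFloat default, for comparison
# setprecision(Arb, 512) # assumed way to change the default, analogous to BigFloat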

Arblib (like ArbNumerics) should be faster than Julia’s BigFloat in almost all cases, even at the same precision, certainly always faster with its default.

Numerically I believe Arb is faster in the vast majority of cases. However, the way BigFloat is implemented in Julia tends to make garbage collection much faster for them. This makes it more important to use mutable arithmetic for Arb. For example:

julia> using Arblib, BenchmarkTools

julia> @benchmark sum($BigFloat, $(1:10000))
BenchmarkTools.Trial: 5374 samples with 1 evaluation.
 Range (min … max):  846.110 μs …   2.001 ms  ┊ GC (min … max): 0.00% … 51.80%
 Time  (median):     885.866 μs               ┊ GC (median):    0.00%
 Time  (mean ± σ):   927.640 μs ± 187.374 μs  ┊ GC (mean ± σ):  4.17% ± 10.03%

  ▆██▇▆▃                                                        ▂
  ███████▅▁▃▃▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▃▃▃▁▄▆▆▇▇▇▇▆▆▆█▇██▇█ █
  846 μs        Histogram: log(frequency) by time       1.84 ms <

 Memory estimate: 1.98 MiB, allocs estimate: 39999.

julia> @benchmark sum($Arb, $(1:10000))
BenchmarkTools.Trial: 2544 samples with 1 evaluation.
 Range (min … max):  469.879 μs … 147.426 ms  ┊ GC (min … max):  0.00% … 83.47%
 Time  (median):     489.538 μs               ┊ GC (median):     0.00%
 Time  (mean ± σ):     1.968 ms ±   8.951 ms  ┊ GC (mean ± σ):  40.59% ±  8.75%

  █
  █▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▇█ █
  470 μs        Histogram: log(frequency) by time       50.3 ms <

 Memory estimate: 1.22 MiB, allocs estimate: 20001.
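
To illustrate, here is a rough sketch of how a sum like the one above could be written with the mutable API so that the loop itself allocates nothing (assuming Arblib.add! wraps arb_add in the same style as the sqr!, fma! and sqrt! methods used earlier):

julia> xs = Arb.(1:10000);

julia> function mutable_sum(xs)
           res = zero(first(xs))
           for x in xs
               Arblib.add!(res, res, x)  # in-place res = res + x, no new Arb per iteration
           end
           return res
       end
mutable_sum (generic function with 1 method)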

MultiFloats.jl is a great package! I have made use of it for linear algebra computations at slightly higher than Float64 precision, with good results! But as you say it only implements basic arithmetic.

Have you seen GitHub - jump-dev/MutableArithmetics.jl: Interface for arithmetics on mutable types in Julia btw? It makes it easier to use the mutable API for BigFloats from high-level code (and also generically, i.e. the code doesn’t need to know it is working with BigFloats).

Yes! Our goal is to add support for this in Arblib.jl eventually. This is mentioned briefly in the documentation and there is an issue about it as well. It is mostly a question of someone finding the time and motivation to get it done!

However, to be fully usable for Arblib.jl one would need an implementation also for elementary and special functions, like sin and SpecialFunctions.besselj. My understanding is that MutableArithmetics.jl currently only covers basic arithmetic.


MA is both an interface and several implementations of the interface. While the BigFloat implementation in MA doesn’t yet support transcendental functions, I think, there is nothing stopping you from adding support for any mathematical function in the MA implementation for your own types. It is simply a matter of adding some methods.
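
For instance, a minimal sketch of what such methods could look like for Arb (assuming MutableArithmetics’ mutability/operate_to! API and Arblib’s in-place sin!; the proof of concept PR linked later in the thread is the authoritative version):

using Arblib
import MutableArithmetics as MA

# Declare that Arb values can be mutated in place
MA.mutability(::Type{Arb}) = MA.IsMutable()
MA.mutable_copy(x::Arb) = copy(x)

# In-place sin, delegating to the low level wrapper of arb_sin
MA.operate_to!(out::Arb, ::typeof(sin), x::Arb) = Arblib.sin!(out, x)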

That’s good to know! I have yet to take a closer look at the implementation details.

I made a small PR to show what I mean: proof of concept MutableArithmetics support by nsajko · Pull Request #178 · kalmarek/Arblib.jl · GitHub
