# BigFloat type promotion != precision promotion

Hi all

I’m a bit confused about the rules of precision “promotion” for BigFloats. As far as I can see, there are none; see the example below. From this, I conclude that:

• Type promotion happens: an operation on a `Float64` and a `BigFloat` returns a `BigFloat`.
• Precision is not set based on the arguments of an operation, i.e. the result does not get a promoted/demoted precision. I was a bit surprised by examples `e` and `f` below; I would have expected the precision of `e` and `f` to be 512 bits, not the default 256.
• The precision of a result is always 256 bits (the default for a `BigFloat`) unless the operation is performed in a context with modified precision via `setprecision()`.

Is this correct?

When computing with BigFloats that have different precisions, how is this handled “internally”? Are the variables passed straight to MPFR, which does the arithmetic? Or are they first rounded up/down to the contextual precision (creating intermediates)?

Any insights are more than welcome.

My context: I’m writing code that should handle a mix of `Float64`s and `BigFloat`s (with different precisions), where the user can specify both the precision in which numerical operations are done and the precision in which the result is returned (which can differ). So if someone has a template for such a thing, feel free to share…
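For concreteness, here is a rough sketch (using only `Base`, with a made-up helper name `with_precisions`) of the kind of thing I have in mind:

```julia
# Hypothetical helper: run `f` at a working precision, then return the
# result re-rounded to a (possibly different) output precision.
function with_precisions(f, x; workprec::Int, outprec::Int)
    y = setprecision(BigFloat, workprec) do
        f(BigFloat(x; precision=workprec))  # all intermediates allocated at workprec
    end
    BigFloat(y; precision=outprec)          # re-round the final result
end

r = with_precisions(sqrt, 2.0; workprec=512, outprec=128)
precision(r)  # 128
```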

``````(v1.8) julia> a = 1.0
1.0

(v1.8) julia> typeof(a), precision(a)
(Float64, 53)

(v1.8) julia> b = BigFloat("3.14", 128)
3.14

(v1.8) julia> typeof(b), precision(b)
(BigFloat, 128)

(v1.8) julia> c = a + b
4.140000000000000000000000000000000000000470197740328915003187494614888898271127

(v1.8) julia> typeof(c), precision(c)
(BigFloat, 256)

(v1.8) julia> d = BigFloat("3.14", 512)
3.13999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999989

(v1.8) julia> typeof(d), precision(d)
(BigFloat, 512)

(v1.8) julia> e = a+d
4.140000000000000000000000000000000000000000000000000000000000000000000000000008

(v1.8) julia> typeof(e), precision(e)
(BigFloat, 256)

(v1.8) julia> f = b+d
6.280000000000000000000000000000000000000470197740328915003187494614888898271136

(v1.8) julia> typeof(f), precision(f)
(BigFloat, 256)

(v1.8) julia> g = BigFloat("0.0", 1024)
0.0

(v1.8) julia> typeof(g),precision(g)
(BigFloat, 1024)

(v1.8) julia> g += a
1.0

(v1.8) julia> typeof(g),precision(g)
(BigFloat, 256)

(v1.8) julia> h = BigFloat("0.0", 1024)
0.0

(v1.8) julia> h += d
3.140000000000000000000000000000000000000000000000000000000000000000000000000008

(v1.8) julia> typeof(h),precision(h)
(BigFloat, 256)

(v1.8) julia> i = BigFloat("0.0", 1024)
0.0

(v1.8) julia> typeof(i),precision(i)
(BigFloat, 1024)

(v1.8) julia> setprecision(512) do
j = a+i
typeof(j), precision(j)
end
(BigFloat, 512)
``````

The `precision` associated with BigFloats (once set or reset) is best understood as the value used when new `BigFloat` values are created. While you are allowed to specify a different precision with each new `BigFloat`, those per-value precisions will not do what you might reasonably expect them to do. The reasons lie deep in the underlying C library (MPFR).
The best practice is to decide what precision you want early, set it once with `setprecision(BigFloat, 512)`, and let that setting persist. To see a value as a `Float64`, just do `machinevalue = Float64(bigfloatvalue)`. To see values at higher precisions that are still less than the working precision (here, 512 bits), use rounding with the `sigdigits` keyword.
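For instance, that workflow looks like this (a minimal sketch; the values are arbitrary):

```julia
# Decide on a working precision once, early, and let it persist.
setprecision(BigFloat, 512)

bigfloatvalue = exp(BigFloat("1.0"))            # created at 512 bits
precision(bigfloatvalue)                        # 512

machinevalue = Float64(bigfloatvalue)           # view the value as a Float64
shortview = round(bigfloatvalue, sigdigits=30)  # view at ~30 significant digits
```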

Also take a look at `Quadmath.jl` for 128-bit floats; for precision that is truly settable variable by variable, read the `ArbNumerics.jl` docs.

Yes: the variables are passed to MPFR as-is, along with a “result” variable allocated at the current default precision, and MPFR performs the operation on them in place without any explicit conversion.
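A small sketch of that behavior: each operand keeps its own precision, and the result slot is allocated at whatever the current default is, so `setprecision` is how you control the output precision.

```julia
setprecision(BigFloat, 256)   # the stock default, made explicit here

a = BigFloat("3.14"; precision=128)
b = BigFloat("3.14"; precision=512)

c = a + b                     # operands passed to MPFR as-is
precision(c)                  # 256: the result slot was allocated at the default

d = setprecision(512) do
    a + b                     # same operands; result now allocated at 512 bits
end
precision(d)                  # 512
```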

Thanks. I’ve come across ArbNumerics.jl, but it didn’t seem to work with GenericFFT.generic_fft, which is crucial for me. Or maybe I’m doing something wrong…

Does this work for you?

## Discrete Fourier Transforms

• `dft`, `inverse_dft`

The DFT is often used as a tool to analyze a collection of samples as a sum of periodic signals (q.v. DFT). ArbNumerics implements a rigorous DFT and inverse DFT.
The following example is taken from the web page linked above.

``````julia> using ArbNumerics

julia> fourvec =  [1.0+0.0im, 2.0-1.0im, 0.0-1.0im, -1.0+2.0im]
4-element Array{Complex{Float64},1}:
1.0 + 0.0im
2.0 - 1.0im
0.0 - 1.0im
-1.0 + 2.0im

julia> x = map(ArbComplex, fourvec)
4-element Array{ArbComplex{128},1}:
1.0 + 0im
2.0 - 1.0im
0 - 1.0im
-1.0 + 2.0im

julia> dft_x = dft(x)
4-element Array{ArbComplex{128},1}:
2.0 + 0im
-2.0 - 2.0im
0 - 2.0im
4.0 + 4.0im

julia> invert_dft_x = inverse_dft(dft_x)
4-element Array{ArbComplex{128},1}:
1.0 + 0im
2.0 - 1.0im
0 - 1.0im
-1.0 + 2.0im

julia> x == invert_dft_x
true
``````
``````
julia> fourvec = [1.0, 2.0, -1.0, -2.0]
4-element Array{Float64,1}:
1.0
2.0
-1.0
-2.0

julia> x = map(ArbFloat, fourvec)
4-element Array{ArbFloat{128},1}:
1.0
2.0
-1.0
-2.0

julia> dft_x = dft(x)
4-element Array{ArbComplex{128},1}:
0 + 0im
2.0 - 4.0im
0 + 0im
2.0 + 4.0im

julia> invert_dft_x = inverse_dft(dft_x)
4-element Array{ArbComplex{128},1}:
1.0 + 0im
2.0 + 0im
-1.0 + 0im
-2.0 + 0im

julia> map(real, invert_dft_x)
4-element Array{ArbReal{128},1}:
1.0
2.0
-1.0
-2.0

julia> ans == x
true
``````

For a more in-depth example, please see this.

Yes! Thanks. One step closer. I also need an SVD of a matrix and a polynomial root finder (all roots)…

I can define polynomials with ArbFloat and ArbComplex, and operations seem to go flawlessly. However, calling `roots` (after loading the GenericLinearAlgebra package) fails with the following error:

``````ERROR: MethodError: no method matching eigsortby(::ArbComplex{128})
Closest candidates are:
eigsortby(::Real) at c:\Users\gvdeynde\AppData\Local\Programs\Julia-1.8.3\share\julia\stdlib\v1.8\LinearAlgebra\src\eigen.jl:130
eigsortby(::Complex) at c:\Users\gvdeynde\AppData\Local\Programs\Julia-1.8.3\share\julia\stdlib\v1.8\LinearAlgebra\src\eigen.jl:131

Stacktrace:
 lt(o::Base.Order.By{typeof(LinearAlgebra.eigsortby), Base.Order.ForwardOrdering}, a::ArbComplex{128}, b::ArbComplex{128})
@ Base.Order .\ordering.jl:119
 sort!
@ .\sort.jl:505 [inlined]
 sort!
@ .\sort.jl:571 [inlined]
 sort!
@ .\sort.jl:661 [inlined]
 #sort!#8
@ .\sort.jl:722 [inlined]
 sorteig!
@ c:\Users\gvdeynde\AppData\Local\Programs\Julia-1.8.3\share\julia\stdlib\v1.8\LinearAlgebra\src\eigen.jl:140 [inlined]
 eigvals!(A::Matrix{ArbFloat{128}}; sortby::typeof(LinearAlgebra.eigsortby), kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ GenericLinearAlgebra C:\Users\gvdeynde\.julia\packages\GenericLinearAlgebra\w5JSi\src\eigenGeneral.jl:258
 eigvals!
@ C:\Users\gvdeynde\.julia\packages\GenericLinearAlgebra\w5JSi\src\eigenGeneral.jl:250 [inlined]
 #eigvals#101
@ c:\Users\gvdeynde\AppData\Local\Programs\Julia-1.8.3\share\julia\stdlib\v1.8\LinearAlgebra\src\eigen.jl:335 [inlined]
 eigvals
@ c:\Users\gvdeynde\AppData\Local\Programs\Julia-1.8.3\share\julia\stdlib\v1.8\LinearAlgebra\src\eigen.jl:335 [inlined]
 roots(p::Polynomial{ArbFloat{128}, :x}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Polynomials C:\Users\gvdeynde\.julia\packages\Polynomials\WOVei\src\polynomials\standard-basis.jl:484
 roots(p::Polynomial{ArbFloat{128}, :x})
@ Polynomials C:\Users\gvdeynde\.julia\packages\Polynomials\WOVei\src\polynomials\standard-basis.jl:464
 top-level scope
@ REPL:1
``````

I have no idea if this is related to ArbNumerics.jl, Polynomials.jl, GenericLinearAlgebra.jl or std/LinearAlgebra.jl…

Somehow `eigsortby(::Complex)` does not accept an `ArbComplex`.

`Complex` is a concrete type, not an abstract one. So methods defined for `Complex` do not accept `ArbComplex`. Having not looked at `ArbNumerics` at all, I’m surprised that it uses a custom `ArbComplex` instead of simply using `Complex{ArbFloat{...}}`. Hopefully there’s a good reason, because the incompatibility it causes is crippling.

This stack trace is pretty deep and you start with reals, so this doesn’t look like a super easy fix (without piracy). It might work if you force your input polynomial to be complex from the start (e.g., wrap the coefficients in `Complex` so that it’s a `Polynomial{Complex{ArbFloat{...}}, :x}` instead of a `Polynomial{ArbFloat{...}, :x}`).

There is a good reason. The Arb C library has nontrivial internal C structures for each supported numeric type: ArbFloat (point values), ArbReal (ball interval values), and ArbComplex (paired ball interval values). The Julia interface must be strictly compatible (and that takes some doing). An ArbComplex is organized like (ArbReal, ArbReal), not like (ArbFloat, ArbFloat). Complex(ArbReal, ArbReal) just does not work without a huge amount of overloading … and even that would not help, because the internal math functions are implemented very differently from the normative Complex functions.

Essentially, ArbNumerics is its own numerical tri-realm (ArbFloat, ArbReal, ArbComplex) with many math functions directly supported.
The stdlib LinearAlgebra and GenericLinearAlgebra.jl are designed to expect Julian number types, not indirectly supported C-struct numeric encodings.

The Arb C library does have functions supporting its own matrices of ArbComplex. I have not covered those, as there has been no request until now.
Let me know what sort of problem you are trying to solve.


I’m implementing the Carathéodory-Fejér approach for near-best approximation of the exponential function on the negative real axis, based on the Matlab implementation by Schmelzer and Trefethen https://core.ac.uk/download/pdf/1633681.pdf. It requires:

• Matrix-vector products and other elementary operations (element-wise power, dot product, etc.) for matrices and vectors with real elements
• Evaluation of the exponential function for real arguments
• Evaluation of a polynomial (real coefficients, real arguments)
• Forward FFT on a vector with length a power of two (real)
• full SVD of a Hankel matrix (with real entries)
• Finding all roots of a polynomial (real coefficients, so roots are real or come in pairs of complex conjugates)

I have the code working in Float64 and in `BigFloat`, but as I mentioned in my original post, I’m not too happy with the “global” `setprecision` approach and the (for me, at least) unclear way precision is handled in mixed-precision operations (both `Float64` with `BigFloat` and `BigFloat`s with different precisions).

For the FFT, you pointed me to the `dft` implementation in `ArbNumerics.jl`. The `svd` from GenericLinearAlgebra seems to work as well. But the roots function of `Polynomials.jl` uses the eigenvalues-of-the-companion-matrix approach, and there this `eigsortby` problem pops up. I also had a look at `PolynomialRoots.jl`, which implements the Laguerre method, but again the code works with `Float64` and `BigFloat` and not with `ArbFloat`.


Is this your need: to create polynomials with ArbFloat or ArbReal coeffs and to find their roots?
Do you need to use polynomials that have complex coeffs?

Only polynomials with real coefficients but all roots up to a (user-) requested precision.

I don’t want to be “critical” in any way (I tremendously admire everyone spending time and effort on these projects), but I do find it a pity that numerical algorithms have to be re-implemented specifically for `ArbFloat`/`ArbReal`/`ArbComplex`. I thought the whole idea of the “Generic*” packages was to implement the base algorithms while largely ignoring the type (with the user accepting that there is no low-level machine optimization or other precision-specific tuning). It now seems that you, or someone else, will have to re-implement, time and again, algorithms for `ArbNumerics.jl` that have already been implemented in other packages.


That is not the role of ArbNumerics. It is quite difficult to implement functions that work with the Arb C library;
the only person who does that substantively is the author of Arb. The library has a wide range of carefully constructed math functions. For polynomials, there are C functions that

• manage memory for polynomial instances
• get the degree, get each coeff, set any coeff, round poly coeffs to prec bits
• negate, add, subtract, multiply by scalar, divide by scalar, multiply, divide with remainder
• evaluate at a given value using Horner’s rule or rectangular splitting, with automatic evaluation of the derivative
• fast multipoint evaluation
• compute the derivative, the integral
• interpolate using Newton or Barycentric
• support finding roots via explicit Newton steps, root refinement, and the Graeffe transform

ArbNumerics.jl exists to provide an interface to essential parts of the Arb library that is comfortable for Julia users.
For the sort of genericity you highlight, albeit at about 106 bits of precision, DoubleFloats.jl is useful.
There are two other Julia packages that utilize Arb:
https://github.com/Nemocas/Nemo.jl (has much, including Arb expertise – it is a more abstract tool, but ask)
https://github.com/kalmarek/Arblib.jl (has some polynomial support – you could ask them about root finding)


What’s wrong with setting a BigFloat precision at the start and using that precision throughout the whole algorithm? Is it an efficiency issue? I’d think you’d want the user input and the entire calculation done in BigFloat, to avoid polluting things with order-1e-16 errors at the outset.

How far does this pattern, which bypasses `eigsortby`, get you?

``````using GenericLinearAlgebra, Polynomials
x = ArbFloat.([1.0, 2.0, -1.0, -2.0])
p = Polynomial(x)
M = companion(p)
GenericLinearAlgebra._eigvals!(M)
``````

In Arblib.jl we have a user-friendly eigenvalue solver, so using the companion matrix will give you the roots (you’ll need to pass an `AcbMatrix`, though). For other root-finding approaches, I suggest reading the Arb docs: acb_poly.h – polynomials over the complex numbers — Arb 2.23.0 documentation.
Those methods should be available in `Arblib.jl`.

We wrap FFT methods here: Arblib.jl/acb_dft.jl at master · kalmarek/Arblib.jl · GitHub, but I’ve never used them. And I’m not sure what to do about `svd`, though…

If you have specific questions about `Arblib.jl`, feel free to ping me here.

@gvdeynde so I just checked: `dft` should work just fine:

``````julia> fourvec = AcbVector([1.0, 2-1im, 0-im, -1+2im], prec=64)
4-element AcbVector:
1.000000000000000000
2.000000000000000000 + -1.000000000000000000im
0 + -1.000000000000000000im
-1.000000000000000000 + 2.000000000000000000im

julia> dft = Arblib.dft!(similar(fourvec), fourvec)
4-element AcbVector:
2.000000000000000000
-2.000000000000000000 + -2.000000000000000000im
0 + -2.000000000000000000im
4.000000000000000000 + 4.000000000000000000im

julia> idft = Arblib.dft_inverse!(similar(dft), dft)
4-element AcbVector:
1.000000000000000000
2.000000000000000000 + -1.000000000000000000im
0 + -1.000000000000000000im
-1.000000000000000000 + 2.000000000000000000im

julia> idft - fourvec
4-element AcbVector:
0
0
0
0
``````

I have bad news for `svd`, though: `LinearAlgebra`/`GenericLinearAlgebra` make several assumptions about what can be derived from the type alone (see e.g. `floatmin`, which, by the way, is wrong for `BigFloat` if you change its precision).

These assumptions are not satisfied by `Arblib.jl`, by design: precision is not part of the type, unlike in `ArbNumerics.jl`. It might be possible to fix `GenericLinearAlgebra` to use `one(t::T)` rather than `one(T)` when such a `t` is available (are any linear algebra operations well defined for 0-dimensional arrays?), but this change is unlikely to happen in `LinearAlgebra` as well.
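The `one(T)` problem can be illustrated with a type that is in `Base`: generic code that builds constants from the type alone picks up the ambient default precision rather than the precision carried by its argument (a sketch; `addone` is a made-up name):

```julia
# Generic code often writes `one(T)`, which can only consult the type,
# not the precision carried by the actual argument `x`.
addone(x::T) where {T<:AbstractFloat} = x + one(T)

setprecision(BigFloat, 256)             # the ambient default
x = BigFloat("3.14"; precision=1024)    # a high-precision argument

y = addone(x)
precision(y)   # 256: both `one(T)` and the result slot used the default
```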
