Unexpected behavior of QuadGK with extended precision floats

This question arose from a thread in the Fortran Discourse group. Calling the gauss function from the QuadGK package succeeds or fails depending on which packages have previously been loaded in the session. Here is an example:

               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.7.0 (2021-11-30)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia> using Pkg

julia> Pkg.activate(pwd())
  Activating project at `D:\peter\Documents\julia\gauss`

julia> using QuadGK

julia> a, b, N, rtol = (0.0, 10.0, 15, 1e-12)
(0.0, 10.0, 15, 1.0e-12)

julia> x, w = gauss(x -> x*exp(-x^2), N, a, b; rtol)
([0.052256915903249324, 0.17207281981746148, 0.35252363405316234, 0.5854902610962803, 0.8629328816309306, 1.1779498554083911, 1.525268062499262, 1.9013753680233436, 2.3045281764733128, 2.734820566703741, 3.19449505542209, 3.6887952232153474, 4.228145771036031, 4.834472182468265, 5.567321048753693], [0.004537997897989381, 0.025295040988373433, 0.06478758285277397, 0.10658305526595034, 0.12181601871449799, 0.09761634804209081, 0.053935467182869457, 0.01994367882042534, 0.004739912493823425, 0.000686293938931301, 5.622501854849391e-5, 2.337234694016657e-6, 4.1325458981795864e-8, 2.2341007609205385e-10, 1.6298790569549265e-13])

So far, so good. Now we call gauss using extended precision floats from the Quadmath package:
julia> using Quadmath

julia> x128, w128 = gauss(x -> x*exp(-x^2), N, Float128(a), Float128(b); rtol)
ERROR: MethodError: no method matching eigen!(::LinearAlgebra.SymTridiagonal{Float128, Vector{Float128}})
Closest candidates are:
  eigen!(::LinearAlgebra.SymTridiagonal{var"#s859", V} where {var"#s859"<:Union{Float32, Float64}, V<:AbstractVector{var"#s859"}}) at C:\Users\peter\.julia\juliaup\julia-1.7.0+0~x64\share\julia\stdlib\v1.7\LinearAlgebra\src\tridiag.jl:284
  eigen!(::LinearAlgebra.SymTridiagonal{var"#s859", V} where {var"#s859"<:Union{Float32, Float64}, V<:AbstractVector{var"#s859"}}, ::UnitRange) at C:\Users\peter\.julia\juliaup\julia-1.7.0+0~x64\share\julia\stdlib\v1.7\LinearAlgebra\src\tridiag.jl:287
  eigen!(::LinearAlgebra.SymTridiagonal{var"#s859", V} where {var"#s859"<:Union{Float32, Float64}, V<:AbstractVector{var"#s859"}}, ::Real, ::Real) at C:\Users\peter\.julia\juliaup\julia-1.7.0+0~x64\share\julia\stdlib\v1.7\LinearAlgebra\src\tridiag.jl:292
  ...
Stacktrace:
 [1] eigen(A::LinearAlgebra.SymTridiagonal{Float128, Vector{Float128}})
   @ LinearAlgebra C:\Users\peter\.julia\juliaup\julia-1.7.0+0~x64\share\julia\stdlib\v1.7\LinearAlgebra\src\tridiag.jl:285
 [2] _gauss(W::var"#9#10", N::Int64, a::Float128, b::Float128, rtol::Float64, quad::Function)
   @ QuadGK C:\Users\peter\.julia\packages\QuadGK\ENhXl\src\weightedgauss.jl:98
 [3] gauss(W::Function, N::Int64, a::Float128, b::Float128; rtol::Float64, quad::Function)
   @ QuadGK C:\Users\peter\.julia\packages\QuadGK\ENhXl\src\weightedgauss.jl:62
 [4] top-level scope
   @ REPL[7]:1

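The failure can be reproduced without QuadGK at all. A minimal sketch (assuming Quadmath is installed): LinearAlgebra's eigen! for SymTridiagonal is LAPACK-backed and so only has methods for Float32/Float64 element types.

```julia
using LinearAlgebra, Quadmath

# eigen on a SymTridiagonal works for Float64 (LAPACK-backed)...
T64 = SymTridiagonal([1.0, 2.0, 3.0], [0.1, 0.2])
eigen(T64)

# ...but there is no method for Float128 unless a generic solver is loaded
T128 = SymTridiagonal(Float128[1, 2, 3], Float128[0.1, 0.2])
# eigen(T128)  # MethodError: no method matching eigen!(::SymTridiagonal{Float128, ...})
```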
This failure is expected, because of the comment here. However, if we instead use the extended-precision floats from the DoubleFloats package, things seem to work:

julia> using DoubleFloats

julia> xdbl, wdbl = gauss(x -> x*exp(-x^2), N, Double64(a), Double64(b); rtol)
(Double64[0.05225691611293553, 0.17207282041864175, 0.3525236350624268, 0.5854902624187938, 0.8629328831384978, 1.1779498570045717, 1.5252680641393994, 1.9013753697036966, 2.3045281781643947, 2.734820568247412, 3.1944950565277908, 3.6887952234615393, 4.228145770051363, 4.834472180002404, 5.5673210445800185], Double64[0.004537997933209015, 0.0252950411425123, 0.06478758310815018, 0.10658305544485958, 0.12181601866285417, 0.09761634782437345, 0.053935466976588464, 0.01994367871267409, 0.004739912459860015, 0.0006862939331054017, 5.622501809601295e-5, 2.337234685054627e-6, 4.132545921624906e-8, 2.2341008075465655e-10, 1.629879128521409e-13])

And what’s more, if we retry using the Float128 types in the same Julia session, it now works:

julia> x128, w128 = gauss(x -> x*exp(-x^2), N, Float128(a), Float128(b); rtol)
(Float128[5.22569161129355281590362324636664986e-02, 1.72072820418641738678278300568515441e-01, 3.52523635062426768964504641055797052e-01, 5.85490262418793708278740766389837706e-01, 8.62932883138497778141731274016694611e-01, 1.17794985700457169518972674418966631e+00, 1.52526806413939944210389863053493526e+00, 1.90137536970369664528884194452222078e+00, 2.30452817816439486643022637198172738e+00, 2.73482056824741163796547181182112124e+00, 3.19449505652779092096731348130792334e+00, 3.68879522346153912612796621348456744e+00, 4.22814577005136304491699502645117636e+00, 4.83447218000240404813092181308543794e+00, 5.56732104458001862835505499241222507e+00], Float128[4.53799793320901466773458481980365458e-03, 2.52950411425123027019703761417080365e-02, 6.47875831081501751137196651124838148e-02, 1.06583055444859570369041043243227755e-01, 1.21816018662854170285621689431437447e-01, 9.76163478243734443579340096991820804e-02, 5.39354669765884636529007927738331310e-02, 1.99436787126740897238465756744969413e-02, 4.73991245986001496917933486672936223e-03, 6.86293933105401667097410100090416757e-04, 5.62250180960129472988517521941880775e-05, 2.33723468505462705881074585759494734e-06, 4.13254592162490581670041709082337405e-08, 2.23410080754656545689580239325065260e-10, 1.62987912852140896752120781566610171e-13])

All three methods produce approximately the same answers:

julia> ~(a,b) = isapprox(a,b,atol=1e-8)
~ (generic function with 1 method)

julia> all(x128 .~ x) && all(w128 .~ w) && all(xdbl .~ x) && all(wdbl .~ w)
true

My questions are:

  1. Why does the Double64 type work at all?
  2. Why does the Float128 type work only after calling gauss with Double64 types?

TIA!

DoubleFloats has a dependency on GenericLinearAlgebra which provides the methods needed to compute the eigen-decomposition of Float128 matrices as well.


Thanks, it is now clear to me why DoubleFloats works. But looking through the DoubleFloats linear-algebra code, it appears that all of its new methods involve only the DoubleFloat type. How does that affect how Julia treats Float128 matrices?

Once you load DoubleFloats, it also loads GenericLinearAlgebra, which defines eigensolvers for any Number type (by extending functions like LinearAlgebra.eigvals!). That is why the Float128 call works afterwards, even though DoubleFloats itself never mentions Float128.
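You can check this directly. A sketch, assuming the GenericLinearAlgebra package is installed in the active environment: loading it by itself, with no DoubleFloats at all, should be enough for the Float128 call to succeed, since it supplies the generic-Number eigensolvers that QuadGK needs.

```julia
# Loading GenericLinearAlgebra alone provides eigensolvers for generic
# Number types, so the Float128 call no longer hits the MethodError.
using QuadGK, Quadmath, GenericLinearAlgebra

x128, w128 = gauss(x -> x*exp(-x^2), 15, Float128(0), Float128(10); rtol=1e-12)
```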

Note that this is an example of type piracy: GenericLinearAlgebra extends a function from another module (LinearAlgebra) on types it does not own (e.g. Matrix{<:Number}). This is precisely why we generally recommend against type piracy: it can cause unexpected interactions when you combine ostensibly unrelated code. (But the composability benefits are arguably worth it in some cases, such as this one.)
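A hypothetical, deliberately bad illustration of the mechanism: the module below owns neither the function (Base.length) nor the type (Vector{Int}), yet redefines the method, silently changing behavior for every caller in the session, exactly the kind of action-at-a-distance seen above.

```julia
# Type piracy in miniature: redefining someone else's function on
# someone else's type changes behavior globally, not just locally.
module Pirate
Base.length(v::Vector{Int}) = 0  # piracy: don't do this in real code
end

length([1, 2, 3])  # now returns 0 everywhere, not just inside Pirate
```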


Thanks. I had no idea that type piracy could have such far-reaching effects. As you said, it is beneficial in this case :grinning: