LAPACK error from quadgk integration

I’m trying to integrate the following formula. It computes an electric potential at a point from a circular ring of charge.
\phi=\int_0^{2\pi} \frac{dt}{\sqrt{(x-\cos t)^2+(y-\sin t)^2+z^2}}
I’ve implemented the function itself like so.

function unknotintegralfunction(a, b, c, t)
	1 / sqrt((a - cos(t))^2 + (b - sin(t))^2 + c^2) # t is a scalar inside quadgk, so no broadcasting is needed
end

And the integration like so; note that quadorder is currently set to 1000.

function unknotpotential(a,b,c) 
	ans = quadgk(t->unknotintegralfunction(a,b,c,t),0,2*pi,order=quadorder) 
	# we'll use the default errors to start
end

Passing a test point at (2, 2, 2) generates the following LAPACK error:

LinearAlgebra.LAPACKException(462)

    chklapackerror@lapack.jl:38[inlined]
    stev!@lapack.jl:3749[inlined]
    eigvals!(::LinearAlgebra.SymTridiagonal{Float64,Array{Float64,1}})@tridiag.jl:292
    eigvals(::LinearAlgebra.SymTridiagonal{Float64,Array{Float64,1}})@tridiag.jl:293
    eignewt(::Array{Float64,1}, ::Int64, ::Int64)@gausskronrod.jl:44
    kronrod(::Type{Float64}, ::Int64)@gausskronrod.jl:193
    macro expansion@gausskronrod.jl:257[inlined]
    cachedrule@gausskronrod.jl:257[inlined]
    do_quadgk(::Main.workspace332.var"#1#2"{Int64,Int64,Int64}, ::Tuple{Float64,Float64}, ::Int64, ::Nothing, ::Nothing, ::Int64, ::typeof(LinearAlgebra.norm))@adapt.jl:7
    (::QuadGK.var"#28#29"{Nothing,Nothing,Int64,Int64,typeof(LinearAlgebra.norm)})(::Function, ::Tuple{Float64,Float64}, ::Function)@adapt.jl:179
    handle_infinities@adapt.jl:113[inlined]
    #quadgk#27@adapt.jl:177[inlined]
    #quadgk#26@adapt.jl:173[inlined]
    unknotpotential(::Int64, ::Int64, ::Int64)@Other: 2
    top-level scope@Local: 1[inlined]

I’m way out of my depth with this and I’m not too sure where I’m going wrong. Is the quadrature method that I’m using incorrect for the type of function I’m trying to integrate? Or is it something else I’ve done wrong?

quadorder is undefined in unknotpotential. Which value did you use? I tried a couple of values but can’t reproduce your error. Please share the output of versioninfo() and BLAS.vendor().

Hi Andreas, sorry I didn’t make that clear: quadorder was set to 1000. I’m currently running the code in a Pluto notebook, and the QuadGK version is 2.4.1. I assume I’m using OpenBLAS, since I haven’t touched that at all since installing the basic Julia package. Running LinearAlgebra.BLAS.vendor() tells me I’m using

:openblas64

After reading your comment, I’ve changed quadorder to 5 and the error disappeared. I assume I had some sort of convergence problem?
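For reference, here’s a minimal self-contained version of the working call (a sketch assuming QuadGK.jl 2.x, with the integrand written in scalar form):

```julia
using QuadGK

# Scalar form of the integrand -- t is a scalar inside quadgk.
unknotintegralfunction(a, b, c, t) = 1 / sqrt((a - cos(t))^2 + (b - sin(t))^2 + c^2)

# Low order plus adaptive subdivision converges fine at (2, 2, 2);
# quadgk returns the estimate and an error bound.
val, err = quadgk(t -> unknotintegralfunction(2, 2, 2, t), 0, 2pi; order=5)
```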

Sorry about that. I just copied your code without reading your comment carefully. I’m now able to reproduce your issue. The error code indicates a convergence failure in the eigensolver, but it’s because the input matrix contains NaNs. The b vector in gausskronrod.jl (JuliaMath/QuadGK.jl at commit ecde924d54a0d576340794883478aaac40f9801a) ends up with NaNs in the trailing part when n is large. I suppose it’s a bug in the QuadGK code, but I’m pretty sure you should never use that many quadrature points. Using just five seems to give a pretty small error for your function.


Realize that the quadrature order N is not the total number of quadrature points — the number of quadrature points used for each subinterval is 2N+1, but the whole point of an adaptive algorithm like quadgk is that it subdivides the interval into more and more subintervals until convergence is achieved.

You want a relatively low-order rule for each subinterval so that it doesn’t waste integrand evaluations unnecessarily. In most cases you shouldn’t change the quadrature order from the default unless you are trying to use BigFloat precision with a smooth function and need to increase the convergence rate to get a huge number of digits.
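To make the “low order plus subdivision” point concrete, here is a toy sketch using plain Simpson’s rule (not QuadGK’s Gauss–Kronrod machinery): the per-interval rule stays fixed and cheap, and all the accuracy comes from recursive bisection.

```julia
# Toy illustration only -- QuadGK uses Gauss–Kronrod rules, not Simpson.
simpson(f, a, b) = (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

function adaptive(f, a, b; tol=1e-10)
    m = (a + b) / 2
    whole = simpson(f, a, b)
    halves = simpson(f, a, m) + simpson(f, m, b)
    # Accept when refinement no longer changes the estimate; else subdivide.
    abs(halves - whole) < 15 * tol && return halves
    adaptive(f, a, m; tol=tol/2) + adaptive(f, m, b; tol=tol/2)
end

adaptive(sqrt, 0.0, 1.0)  # ≈ 2/3, despite sqrt's unbounded slope at 0
```

Note how the subdivision automatically clusters near the difficult endpoint; raising the per-interval order would not have helped there.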

It looks like the algorithm QuadGK.kronrod is using to get the Gauss–Kronrod points and weights hits the limits of Float64 precision if you ask it for the Gauss–Kronrod rule of order 1000, but normally one should never want this, so I guess the algorithm was not designed with that kind of scaling in mind. (The same algorithm works fine for BigFloat precision, i.e. if you do QuadGK.kronrod(BigFloat, 1000) it succeeds.)
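You can check the rule construction directly; a quick sketch, assuming QuadGK 2.x:

```julia
using QuadGK

# Modest orders are cheap and stable in Float64. kronrod returns the
# nonpositive nodes x, the Kronrod weights w, and the embedded Gauss weights gw.
x, w, gw = QuadGK.kronrod(Float64, 7)   # the default 15-point rule

# Sanity check: the full symmetric rule integrates 1 exactly over [-1, 1].
2 * sum(w) - w[end]  # ≈ 2.0

# QuadGK.kronrod(Float64, 1000) reproduces the LAPACK error above,
# while QuadGK.kronrod(BigFloat, 1000) succeeds (slowly).
```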

(If you need Gaussian-quadrature rules of extremely high order, see also the FastGaussQuadrature.jl package, though it’s not designed for adaptive integration, i.e. it doesn’t do Gauss–Kronrod or Gauss–Patterson weights.)
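For instance (assuming FastGaussQuadrature.jl is installed), it produces very high-order Gauss–Legendre rules in O(n) time:

```julia
using FastGaussQuadrature

x, w = gausslegendre(1000)   # 1000-point Gauss–Legendre rule on [-1, 1]
sum(w .* exp.(x))            # ≈ e - 1/e, i.e. the exact integral of exp
```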
