Type instability in nested quadgk calls

Hi! I am having trouble with the type stability of quadgk when the integrand function itself uses quadgk. Here is the MWE:

using QuadGK

f(x) = quadgk(y -> x*y, 0., 1.)[1]
quadgk(f, 0., 1.)

Running @code_warntype on the outer call shows that the inferred return type is Tuple{Any, Any}:

julia> @code_warntype quadgk(f, 0., 1.)
MethodInstance for QuadGK.quadgk(::typeof(f), ::Float64, ::Float64)
  from quadgk(f, segs::T...; atol, rtol, maxevals, order, norm, segbuf, eval_segbuf) where T @ QuadGK ~/.julia/packages/QuadGK/7rND3/src/api.jl:80
Static Parameters
  T = Float64
Arguments
  #self#::Core.Const(QuadGK.quadgk)
  f::Core.Const(f)
  segs::Tuple{Float64, Float64}
Locals
  atol::Nothing
  rtol::Nothing
  maxevals::Int64
  order::Int64
  norm::typeof(LinearAlgebra.norm)
  segbuf::Nothing
  eval_segbuf::Nothing
Body::Tuple{Any, Any}
1 ─ %1  = QuadGK.nothing::Core.Const(nothing)
│         (atol = %1)
│   %3  = QuadGK.nothing::Core.Const(nothing)
│         (rtol = %3)
│   %5  = QuadGK.:^::Core.Const(^)
│   %6  = Core.apply_type(Base.Val, 7)::Core.Const(Val{7})
│   %7  = (%6)()::Core.Const(Val{7}())
│   %8  = Base.literal_pow(%5, 10, %7)::Core.Const(10000000)
│         (maxevals = %8)
│   %10 = 7::Core.Const(7)
│         (order = %10)
│   %12 = QuadGK.norm::Core.Const(LinearAlgebra.norm)
│         (norm = %12)
│   %14 = QuadGK.nothing::Core.Const(nothing)
│         (segbuf = %14)
│   %16 = QuadGK.nothing::Core.Const(nothing)
│         (eval_segbuf = %16)
│   %18 = QuadGK.:(var"#quadgk#49")::Core.Const(QuadGK.var"#quadgk#49")
│   %19 = Core.tuple(atol, rtol, maxevals::Core.Const(10000000), order::Core.Const(7), norm, segbuf, eval_segbuf, #self#, f)::Core.Const((nothing, nothing, 10000000, 7, LinearAlgebra.norm, nothing, nothing, QuadGK.quadgk, f))
│   %20 = Core._apply_iterate(Base.iterate, %18, %19, segs)::Tuple{Any, Any}
└──       return %20

I initially thought that maybe this was due to performance of captured variables in closures · Issue #15276 · JuliaLang/julia · GitHub, but the usual fix does not seem to help:

function g(x)
    G = let x=x
        y -> x*y
    end
    quadgk(G, 0., 1.)[1]
end
quadgk(g, 0., 1.)

The @code_warntype output here shows the same type-unstable result, Tuple{Any, Any}:

julia> @code_warntype quadgk(g, 0., 1.)
MethodInstance for QuadGK.quadgk(::typeof(g), ::Float64, ::Float64)
  from quadgk(f, segs::T...; atol, rtol, maxevals, order, norm, segbuf, eval_segbuf) where T @ QuadGK ~/.julia/packages/QuadGK/7rND3/src/api.jl:80
Static Parameters
  T = Float64
Arguments
  #self#::Core.Const(QuadGK.quadgk)
  f::Core.Const(g)
  segs::Tuple{Float64, Float64}
Locals
  atol::Nothing
  rtol::Nothing
  maxevals::Int64
  order::Int64
  norm::typeof(LinearAlgebra.norm)
  segbuf::Nothing
  eval_segbuf::Nothing
Body::Tuple{Any, Any}
1 ─ %1  = QuadGK.nothing::Core.Const(nothing)
│         (atol = %1)
│   %3  = QuadGK.nothing::Core.Const(nothing)
│         (rtol = %3)
│   %5  = QuadGK.:^::Core.Const(^)
│   %6  = Core.apply_type(Base.Val, 7)::Core.Const(Val{7})
│   %7  = (%6)()::Core.Const(Val{7}())
│   %8  = Base.literal_pow(%5, 10, %7)::Core.Const(10000000)
│         (maxevals = %8)
│   %10 = 7::Core.Const(7)
│         (order = %10)
│   %12 = QuadGK.norm::Core.Const(LinearAlgebra.norm)
│         (norm = %12)
│   %14 = QuadGK.nothing::Core.Const(nothing)
│         (segbuf = %14)
│   %16 = QuadGK.nothing::Core.Const(nothing)
│         (eval_segbuf = %16)
│   %18 = QuadGK.:(var"#quadgk#49")::Core.Const(QuadGK.var"#quadgk#49")
│   %19 = Core.tuple(atol, rtol, maxevals::Core.Const(10000000), order::Core.Const(7), norm, segbuf, eval_segbuf, #self#, f)::Core.Const((nothing, nothing, 10000000, 7, LinearAlgebra.norm, nothing, nothing, QuadGK.quadgk, g))
│   %20 = Core._apply_iterate(Base.iterate, %18, %19, segs)::Tuple{Any, Any}
└──       return %20

Both f and g seem to be type-stable. Is there something I am missing here?


…I don’t know, but it looks like the unconstrained argument type of f might have something to do with it…
I tried

julia> f(x::Real) = first(quadgk(y -> x*y, .0,1.0));

julia> quadgk(f,.0,1.0)
(0.25, 0.0)

Edit: spoke too soon…

Working through this with Cthulhu and DispatchDoctor a bit, it looks like the type instability somehow originates in handle_infinities. I tried cleaning up the closures, but no luck yet.
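
For anyone who wants to retrace this, here is roughly the diagnostic setup I used (a minimal sketch; @descend from Cthulhu is interactive, so it is left commented out):

using QuadGK, Cthulhu

f(x) = quadgk(y -> x*y, 0., 1.)[1]

# Step interactively through the call tree of the outer integral and watch where
# the inferred types widen to Any; handle_infinities showed up along that path for me.
# @descend quadgk(f, 0., 1.)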

Hi,
I found some unexpected behaviour. If I add a call to f before the call to quadgk(f, 0., 1.), then the call quadgk(f, 0., 1.) becomes type stable.

using QuadGK
f(x) = quadgk(y -> x*y, 0., 1.)[1]
f(0.2)    # add function call 
quadgk(f, 0., 1.) # this function is now type stable
julia> @code_warntype quadgk(f, 0., 1.)
MethodInstance for quadgk(::typeof(f), ::Float64, ::Float64)
  from quadgk(f, segs::T...; atol, rtol, maxevals, order, norm, segbuf, eval_segbuf) where T @ QuadGK ~/.julia/packages/QuadGK/7rND3/src/api.jl:80
Static Parameters
  T = Float64
Arguments
  #self#::Core.Const(QuadGK.quadgk)
  f::Core.Const(Main.f)
  segs::Tuple{Float64, Float64}
Locals
  atol::Nothing
  rtol::Nothing
  maxevals::Int64
  order::Int64
  norm::typeof(LinearAlgebra.norm)
  segbuf::Nothing
  eval_segbuf::Nothing
Body::Tuple{Float64, Float64}
1 ─ %1  = QuadGK.nothing::Core.Const(nothing)
│         (atol = %1)
│   %3  = QuadGK.nothing::Core.Const(nothing)
│         (rtol = %3)
│   %5  = QuadGK.:^::Core.Const(^)
│   %6  = Core.apply_type(Base.Val, 7)::Core.Const(Val{7})
│   %7  = (%6)()::Core.Const(Val{7}())
│   %8  = Base.literal_pow(%5, 10, %7)::Core.Const(10000000)
│         (maxevals = %8)
│   %10 = 7::Core.Const(7)
│         (order = %10)
│   %12 = QuadGK.norm::Core.Const(LinearAlgebra.norm)
│         (norm = %12)
│   %14 = QuadGK.nothing::Core.Const(nothing)
│         (segbuf = %14)
│   %16 = QuadGK.nothing::Core.Const(nothing)
│         (eval_segbuf = %16)
│   %18 = QuadGK.:(var"#quadgk#49")::Core.Const(QuadGK.var"#quadgk#49")
│   %19 = atol::Core.Const(nothing)
│   %20 = rtol::Core.Const(nothing)
│   %21 = maxevals::Core.Const(10000000)
│   %22 = order::Core.Const(7)
│   %23 = norm::Core.Const(LinearAlgebra.norm)
│   %24 = segbuf::Core.Const(nothing)
│   %25 = eval_segbuf::Core.Const(nothing)
│   %26 = Core.tuple(%19, %20, %21, %22, %23, %24, %25, #self#, f)::Core.Const((nothing, nothing, 10000000, 7, LinearAlgebra.norm, nothing, nothing, QuadGK.quadgk, Main.f))
│   %27 = Core._apply_iterate(Base.iterate, %18, %26, segs)::Tuple{Float64, Float64}
└──       return %27

Interestingly enough, the order of the calls is very important. For example, if the call to f is not placed before the very first call to quadgk(f, 0., 1.), then it does not help at all with the type stability of subsequent calls to quadgk(f, 0., 1.).

using QuadGK
f(x) = quadgk(y -> x*y, 0., 1.)[1]
quadgk(f, 0., 1.)
f(0.2)    # function call after the first call to quadgk(f,0.,1.)
@code_warntype quadgk(f, 0., 1.) #  ::Tuple{Any, Any}  type unstable!

I do not understand this behaviour. If anybody could shed some light it would be great!


I wonder if it has to do with QuadGK's combination of rule caches and @generated functions.

Tagging @stevengj in case he has intuition here

I’ve been messing around a lot with QuadGK internals over the last several months for an extension patch I’ll eventually submit as a PR. There’s definitely some unintuitive interaction between inlining/function specialization and the design of the API layers that wrap the actual workhorse do_quadgk. However, while I’ve been able to get my patch to work (adding more capabilities to the Enzyme extension), I haven’t been able to pin down exactly what’s going on. See this issue for one very non-informative symptom: Huge LLVM dump+LLVM error when mixed activity error expected · Issue #2292 · EnzymeAD/Enzyme.jl · GitHub.

That said, I don’t think nesting QuadGK calls like this is a good idea. Either you should use a package made specifically for higher-dimensional integrals (cubatures), or you should use an interpolant (ApproxFun.jl, FastChebInterp.jl, et cetera) to approximate the output of the inner integral before feeding it into the outer integral.
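
For the MWE in the OP, the cubature route could look something like this (a minimal sketch using HCubature.jl, one of several cubature packages):

using HCubature

# The nested integral of x*y over the unit square as a single 2D cubature;
# the integrand receives a vector v = [x, y].
val, err = hcubature(v -> v[1] * v[2], (0.0, 0.0), (1.0, 1.0))
# val should be approximately 0.25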


I’ve no idea about (but am very interested in) the reason for the OP’s instability. I just came to say that nested quadgk does have its uses, in my experience. For example, when you can compute singularity/discontinuity points of the inner integrand as a function of the outer variable and use these as extra breakpoints in the inner quadgk call. In that case, nested quadgk can easily be faster than cubature (at least, again, in my experience).
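
To make that concrete, here is a hypothetical sketch of the pattern (s(x) stands in for a known singularity location of the inner integrand, and the integrand itself is made up for illustration):

using QuadGK

s(x) = x  # assumed: singularity location of the inner integrand, as a function of x
inner(x) = quadgk(y -> abs(y - s(x))^(-0.3), 0.0, s(x), 1.0)[1]  # extra breakpoint at s(x)
quadgk(inner, 0.1, 0.9)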


Absolutely, I do similar things all the time, but I still use a Chebyshev interpolant as a shim between the two (or, in my case as we speak, 37) nested quadgk calls.


@danielwe, at the risk of derailing the thread (admins, feel free to fork), what is the advantage of the interpolant? I thought the outer quadgk would already do a polynomial interpolation to get the integral, so wouldn’t a Chebyshev expansion be redundant?

In my case it’s essential because I do many outer integrals with the output of the inner integral. Something like

f(x) = quadgk(y -> x * y, 0.0, 1.0)[1]
g(z) = quadgk(x -> z * f(x), 0.0, 1.0)[1]
# evaluate g(z) at many different z

I don’t want to keep redoing the inner integral every time f(x) gets evaluated at more or less the same point for different z.
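
A rough sketch of what that shim looks like for the toy example above (using FastChebInterp.jl; the polynomial degree of 20 is an arbitrary choice here, and in practice you would check that the interpolant has converged):

using QuadGK, FastChebInterp

f(x) = quadgk(y -> x * y, 0.0, 1.0)[1]

# Sample the inner integral at Chebyshev points on [0, 1] and build an interpolant:
pts = chebpoints(20, 0.0, 1.0)
fc  = chebinterp(f.(pts), 0.0, 1.0)

# The outer integral now evaluates the cheap interpolant instead of redoing quadgk:
g(z) = quadgk(x -> z * fc(x), 0.0, 1.0)[1]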

If you’re only doing two integrals like in the OP, I suppose there’s no real benefit in terms of pure mathematical/computational complexity. But as the OP observed, nesting moderately complex adaptive algorithms can have pitfalls due to the resulting code complexity (in this case, failed type inference), so adding an interpolant may at least be worth a try anyway.

Otherwise, a quick solution to the OP’s issue is to add a type assertion:

f(x) = quadgk(y -> x * y, 0.0, 1.0)[1]::typeof(x * 1.0)
quadgk(f, 0.0, 1.0)
julia> @code_warntype quadgk(f, 0.0, 1.0)
MethodInstance for QuadGK.quadgk(::typeof(f), ::Float64, ::Float64)
  from quadgk(f, segs::T...; atol, rtol, maxevals, order, norm, segbuf, eval_segbuf) where T @ QuadGK ~/.julia/packages/QuadGK/7rND3/src/api.jl:80
Static Parameters
  T = Float64
Arguments
  #self#::Core.Const(QuadGK.quadgk)
  f::Core.Const(f)
  segs::Tuple{Float64, Float64}
Locals
  atol::Nothing
  rtol::Nothing
  maxevals::Int64
  order::Int64
  norm::typeof(LinearAlgebra.norm)
  segbuf::Nothing
  eval_segbuf::Nothing
Body::Tuple{Float64, Float64}
1 ─ %1  = QuadGK.nothing::Core.Const(nothing)
│         (atol = %1)
│   %3  = QuadGK.nothing::Core.Const(nothing)
│         (rtol = %3)
│   %5  = QuadGK.:^::Core.Const(^)
│   %6  = Core.apply_type(Base.Val, 7)::Core.Const(Val{7})
│   %7  = (%6)()::Core.Const(Val{7}())
│   %8  = Base.literal_pow(%5, 10, %7)::Core.Const(10000000)
│         (maxevals = %8)
│   %10 = 7::Core.Const(7)
│         (order = %10)
│   %12 = QuadGK.norm::Core.Const(LinearAlgebra.norm)
│         (norm = %12)
│   %14 = QuadGK.nothing::Core.Const(nothing)
│         (segbuf = %14)
│   %16 = QuadGK.nothing::Core.Const(nothing)
│         (eval_segbuf = %16)
│   %18 = QuadGK.:(var"#quadgk#49")::Core.Const(QuadGK.var"#quadgk#49")
│   %19 = Core.tuple(atol, rtol, maxevals::Core.Const(10000000), order::Core.Const(7), norm, segbuf, eval_segbuf, #self#, f)::Core.Const((nothing, nothing, 10000000, 7, LinearAlgebra.norm, nothing, nothing, QuadGK.quadgk, f))
│   %20 = Core._apply_iterate(Base.iterate, %18, %19, segs)::Tuple{Float64, Float64}
└──       return %20

Hi, I agree that nested calls to quadgk are in general not the best approach for numerically evaluating nested integrals. However, I believe there are some specific use cases where this strategy can be very useful.

I benchmarked the various solutions discussed in this thread to compare their performance.

julia> using BenchmarkTools, QuadGK

julia> f(x) = quadgk(y -> x*y, 0., 1.)[1]; @btime quadgk(f, 0.,1.)
  470.026 ns (9 allocations: 304 bytes)
(0.25, 0.0)

julia> f(x) = quadgk(y -> x*y, 0., 1.)[1]::typeof(x *1.); @btime quadgk(f, 0.,1.)
  337.729 ns (5 allocations: 176 bytes)
(0.25, 0.0)

julia> f(x) = quadgk(y -> x*y, 0., 1.)[1]; f(0.2); @btime quadgk(f, 0.,1.)
  256.564 ns (0 allocations: 0 bytes)
(0.25, 0.0)

It seems that adding the type assertion (which yields type stability) helps reduce computation time and allocations. However, there are still quite a few allocations. The solution with an intermediate function call (which is also type stable) results in lower computation time and zero allocations.

Once again, it is not completely clear to me what is happening here. Any insight?

When you call f before calling the outer quadgk, you force compilation of f, which is then likely reused as-is (not inlined) when compiling the outer quadgk call. When you skip this, you’re compiling the whole nested integral in one go, end-to-end, which may result in slightly different choices about which functions to inline. This may affect performance and, occasionally, type inference. Julia’s compiler is not completely free of hysteresis (history dependence), particularly in the presence of recursion (e.g., quadgk calling itself).
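
If you want the "compile f first" effect without actually evaluating f, something like the following might work (just a sketch; I haven't checked whether precompile reproduces exactly the same inference outcome as a real call):

using QuadGK

f(x) = quadgk(y -> x*y, 0., 1.)[1]

# Compile f for Float64 arguments before the outer quadgk call is compiled:
precompile(f, (Float64,))
quadgk(f, 0., 1.)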


A more efficient implementation of nested integrals can be found in GitHub - lxvm/IteratedIntegration.jl: Iterated h-adaptive integration (IAI) by @lxvm. Internally, it is built on top of QuadGK, but the key improvement is that it is globally adaptive (in the sense that the refinement of the different 1d integrals is coupled).


I don’t see why an assertion is needed here. Julia correctly infers the output type of f(x) even without the assertion:

julia> using QuadGK, Test

julia> f(x) = quadgk(y -> x*y, 0., 1.)[1]
f (generic function with 1 method)

julia> @inferred f(3.0)  # throws if output type is not inferred
1.5

If it fails to infer the output type when f(x) is called from within another quadgk call, that seems like it could be a bug in Julia’s type inference.

I think the issue is what I suggested above: when compiling quadgk(f, 0.0, 1.0) without having previously compiled f, inlining ends up working out such that f is not its own compilation unit, and unfortunately in a way that makes type inference fail somewhere.
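
One way to probe that hypothesis (just a sketch, and whether it actually changes the inference result is exactly the open question) is to forbid inlining of f and recheck:

using QuadGK, InteractiveUtils

@noinline f(x) = quadgk(y -> x*y, 0., 1.)[1]  # keep f as its own compilation unit
@code_warntype quadgk(f, 0., 1.)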

Hello, it does not infer correctly when there is a nested call to quadgk.

julia> using QuadGK, Test

julia> f(x) = quadgk(y -> x*y, 0., 1.)[1]
f (generic function with 1 method)

julia> @inferred quadgk(f, 0.,1.)
ERROR: return type Tuple{Float64, Float64} does not match inferred return type Tuple{Any, Any}
Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:35
 [2] top-level scope
   @ REPL[273]:1

It seems like a bug in Julia if the same function fails to infer only when it is nested; maybe report it as a Julia issue.


Pretty sure this is a known issue with recursive calls; hang on while I look up the GitHub issue.


(I would call this a re-entrant call, not a recursive call. The call tree is entirely known statically here, and neither quadgk nor f is itself recursive. So it should be simpler than inferring a recursive function, where the call tree is determined at runtime.)
