Approximate inverse of a definite integral

For a given (fixed) function f: [0,1] \to [0, \infty), let

g(x) = \int_0^x f(t)\,dt

and

h(x) = \frac{g(x)}{g(1)}

h then maps [0,1] \to [0,1] by construction. I need a fast approximation of its inverse: I am OK with investing some effort up front in constructing a good approximation to h^{-1}, because it gets called tens of thousands of times. h is smooth, with no kinks or discontinuities, so orthogonal polynomials (e.g. Chebyshev) would do fine.

The problem is computing it smartly, e.g. given a grid of Chebyshev nodes mapped to [0,1]. I thought of formulating the integral as an ODE and using a continuous callback to save the position each time the solution crosses a node, but that requires a first pass to compute g(1).

Maybe I am overthinking this. Any suggestion is appreciated.

My first thought is that if you are indeed calling h or h^{-1} “tens of thousands” of times, then computing g(1) once should not be problematic.

In the same way that you might use Chebyshev regression to approximate a function, you can approximate the inverse instead: compute pairs (x, h(x)) where the x values are “dense enough” in [0,1], but then interpolate them as (h(x), x), so that the interpolant takes h(x) values as input and returns x values, i.e. it approximates h^{-1}.

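A minimal sketch of that idea in plain Julia, assuming QuadGK for the quadrature and a simple piecewise-linear interpolant (the grid size and the example f are arbitrary choices):

using QuadGK

f(x) = x^3 + 3                   # example integrand (any f ≥ 0 works)
g1 = quadgk(f, 0, 1)[1]          # g(1), computed once
h(x) = quadgk(f, 0, x)[1] / g1   # normalized integral

xs = range(0, 1; length = 201)   # “dense enough” grid in [0, 1]
hs = h.(xs)                      # monotone increasing, since f ≥ 0

# piecewise-linear interpolation of the swapped pairs (h(x), x) ≈ h⁻¹(y)
function hinv(y)
    i = clamp(searchsortedlast(hs, y), 1, length(hs) - 1)
    t = (y - hs[i]) / (hs[i+1] - hs[i])
    return xs[i] + t * (xs[i+1] - xs[i])
end

Replacing the piecewise-linear step with a Chebyshev regression or spline fit of the same (hs, xs) data would give a smoother approximation.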

I would just use Newton’s method here to compute h^{-1} from a quadrature routine for h(x). Something like:

using QuadGK
const g₁ = quadgk(f, 0, 1; rtol=1e-12)[1]  # compute g(1) accurately
g(x; atol=1e-9 * g₁) = quadgk(f, 0, x; atol=atol)[1]
h(x; atol=1e-9) = g(x; atol = atol * g₁) / g₁
h′(x) = f(x) / g₁    # the derivative, using the fundamental theorem of calculus
function h⁻¹(y; atol=1e-9)
    x = 0.5 # arbitrary initial guess
    while true
        δx = (h(x; atol) - y) / h′(x)
        x -= δx
        abs(δx) ≤ atol && return x
    end
end

which gives, e.g. for f(x) = x^3 + 3:

julia> y = h(0.3)
0.2775461538461539

julia> h⁻¹(y)
0.30000000000000004

You can then plug h⁻¹ into e.g. FastChebInterp or ApproxFun to form an efficient Chebyshev interpolant rather than applying Newton’s method every time you want to evaluate it.
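For instance, a minimal sketch assuming FastChebInterp.jl’s chebpoints/chebinterp interface and the h⁻¹ defined above (the polynomial order 50 is an arbitrary choice; raise it until the result is accurate enough for your purposes):

using FastChebInterp

order = 50                       # arbitrary; increase for more accuracy
ys = chebpoints(order, 0, 1)     # Chebyshev points in [0, 1]
c = chebinterp(h⁻¹.(ys), 0, 1)   # fit h⁻¹ at those points

c(0.2775461538461539)            # ≈ 0.3, now just a cheap polynomial evaluation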

(If f is extremely expensive, you might want to first replace it by a Chebyshev interpolant too, before using the interpolant to construct h and h^{-1}. Of course, once you have the interpolant, you don’t need QuadGK: with ApproxFun, for example, cumsum will form the integral function directly. You still probably want Newton’s method to invert it.)
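A minimal sketch of that route, assuming ApproxFun.jl and the same example f(x) = x^3 + 3 (the names H and Hinv are just illustrative; the Newton loop mirrors the one above):

using ApproxFun

F = Fun(x -> x^3 + 3, 0..1)   # Chebyshev approximant of f on [0, 1]
G = cumsum(F)                 # indefinite integral of F, with G(0) == 0
H = G / G(1)                  # normalize so that H(1) == 1
H′ = H'                       # derivative, again a Fun

function Hinv(y; atol=1e-9)
    x = 0.5                   # arbitrary initial guess, as before
    while true
        δx = (H(x) - y) / H′(x)
        x -= δx
        abs(δx) ≤ atol && return x
    end
end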

In general, you almost never want to evaluate definite integrals by solving ODEs — specialized integration routines are much more efficient (and simpler to use), because they exploit the fact that you have “random access” to your integrand values at any point in the domain (rather than having to build it up from “left to right” as in a general ODE).


If you precompute things only once, you might want to do those computations in BigFloat to achieve maximum accuracy in later uses of the approximator.
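A minimal sketch of that, assuming the same f and QuadGK as above (the precision and tolerance are arbitrary; QuadGK computes in extended precision when given BigFloat endpoints):

using QuadGK

setprecision(BigFloat, 256)                           # roughly 77 decimal digits
g₁_big = quadgk(f, big(0), big(1); rtol = 1e-40)[1]   # g(1) in extended precision
g₁_64 = Float64(g₁_big)                               # rounded once for the Float64 approximator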