I would just use Newton’s method here to compute h^{-1} from a quadrature routine for h(x). Something like:
```julia
using QuadGK
const g₁ = quadgk(f, 0, 1; rtol=1e-12)[1] # compute g(1) accurately
g(x; atol=1e-9 * g₁) = quadgk(f, 0, x; atol=atol)[1]
h(x; atol=1e-9) = g(x; atol = atol * g₁) / g₁ # normalized integral h(x) = g(x)/g(1)
h′(x) = f(x) / g₁ # the derivative, using the fundamental theorem of calculus
function h⁻¹(y; atol=1e-9)
    x = 0.5 # arbitrary initial guess
    while true
        δx = (h(x; atol) - y) / h′(x) # Newton step
        x -= δx
        abs(δx) ≤ atol && return x
    end
end
```
which gives, e.g. for f(x) = x^3 + 3:
```
julia> y = h(0.3)
0.2775461538461539

julia> h⁻¹(y)
0.30000000000000004
```
You can then plug h⁻¹ into e.g. FastChebInterp or ApproxFun to form an efficient Chebyshev interpolant, rather than applying Newton's method every time you want to evaluate it.
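For example, with FastChebInterp you could sample h⁻¹ at Chebyshev points and build an interpolant from those samples. A minimal sketch (the polynomial order 200 here is an arbitrary assumption — for smooth f the error decreases rapidly with order, so you'd check that it meets your accuracy target):

```julia
using FastChebInterp
order = 200                               # assumed order; increase until accurate enough
ys = chebpoints(order, 0, 1)              # Chebyshev points in the domain of h⁻¹ (y ∈ [0, 1])
h⁻¹_interp = chebinterp(h⁻¹.(ys), 0, 1)   # Chebyshev interpolant of h⁻¹
h⁻¹_interp(0.2775461538461539)            # fast evaluation thereafter, ≈ 0.3
```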
(If f is extremely expensive, you might want to first replace it by a Chebyshev interpolant too, before using the interpolant to construct h and h⁻¹. Of course, once you have the interpolant, you don't need QuadGK: e.g. with ApproxFun, cumsum will form the integral function directly. You still probably want Newton's method to invert it.)
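A rough sketch of that ApproxFun route, assuming f is smooth on [0, 1] (the invert helper below is just the same Newton iteration from above applied to the interpolants, not an ApproxFun function):

```julia
using ApproxFun
F = Fun(f, 0..1)   # Chebyshev approximant of f (f is sampled adaptively)
G = cumsum(F)      # antiderivative with G(0) == 0, i.e. g(x) = ∫₀ˣ f
H = G / G(1)       # normalized h
H′ = H'            # derivative of the interpolant, ≈ f(x)/g(1)

# Newton iteration on the interpolants (hypothetical helper, same idea as h⁻¹ above):
function invert(y; atol=1e-12)
    x = 0.5
    while true
        δx = (H(x) - y) / H′(x)
        x -= δx
        abs(δx) ≤ atol && return x
    end
end
```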
In general, you almost never want to evaluate definite integrals by solving ODEs — specialized integration routines are much more efficient (and simpler to use), because they exploit the fact that you have “random access” to your integrand values at any point in the domain (rather than having to build it up from “left to right” as in a general ODE).
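To make the contrast concrete, here is a sketch (using OrdinaryDiffEq purely for illustration — the solver choice and tolerances are assumptions) of computing g(1) = ∫₀¹ f both ways; the quadrature call is a one-liner and typically needs far fewer evaluations of f for the same tolerance:

```julia
using QuadGK, OrdinaryDiffEq

# adaptive quadrature: "random access" to f anywhere in [0, 1]
g1_quad, err = quadgk(f, 0, 1; rtol=1e-12)

# the ODE route: integrate u′ = f(t) from t = 0 to 1, building up the answer left to right
prob = ODEProblem((u, p, t) -> f(t), 0.0, (0.0, 1.0))
g1_ode = solve(prob, Tsit5(); reltol=1e-12, abstol=1e-12).u[end]
```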