Solve for upper/lower bound of a numerical definite integral

Hi, I’m trying to solve for b, given f, a, and k. Is there a way to do it in Julia?

quadgk(f,a,b)=k

Thanks

Possibly https://github.com/JuliaMath/Roots.jl
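A minimal sketch of that approach (using the example integrand that comes up later in this thread; the starting guess of 1.0 is arbitrary):

using QuadGK, Roots

f(x) = exp(x) * sin(x)          # example discussed below
a, k = 0, 0.5 * (1 + exp(π))    # chosen so that the answer is b = π

g(b) = quadgk(f, a, b)[1] - k   # residual: ∫_a^b f - k
find_zero(g, 1.0)               # ≈ π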


There’s always a Newton iteration:

b = some_initial_guess
tol = sqrt(eps())
while true
    global b   # needed when this loop runs at the top level of a script
    δb = (quadgk(f, a, b, rtol=tol*0.1)[1] - k) / f(b)   # Newton step on ∫_a^b f - k
    b -= δb
    abs(δb) ≤ tol * abs(b) && break
end

Note that I’m using the fact that $\frac{d}{db} \int_a^b f(x)\,dx = f(b)$.


@stevengj, this is excellent.

Below is a summary of the different solutions, including yours and your advice on running Optim properly (even though Optim is not recommended for this problem, it is kept for completeness).


f(x) = exp(x)*sin(x)

# forward modelling
a = 0
# b = π   : the UNKNOWN, to be found given f, a and k
k = 0.5*(1 + exp(π))   # k = ∫_a^π f(x) dx ≈ 12.0703463163896

# SOLUTION-1: Newton solution by @stevengj
using QuadGK

b = 1.0  # initial guess
tol = sqrt(eps())
while true
    global b   # needed at the top level of a script
    δb = (quadgk(f, a, b, rtol=tol*0.1)[1] - k) / f(b)
    b -= δb
    abs(δb) ≤ tol * abs(b) && break
end
println("Solution = ", b)   # 3.14159268  (exact solution: π = 3.14159265...)


# SOLUTION-2: use Optim to minimize g(x)^2
using Optim

g(x) = quadgk(f, a, x)[1] - k     # scalar residual: ∫_a^x f - k
h(x) = g(x[1])^2                  # objective; Optim passes a 1-element vector
dh(x) = [2 * g(x[1]) * f(x[1])]   # gradient of h

# Checks (quadgk is accurate to roughly machine precision here):
abs(g(π)) < 1e-8        # true
h([π]) < 1e-8           # true
abs(dh([π])[1]) < 1e-8  # true

b = [1.0]  # initial guess (exact solution: π ≈ 3.14159265)
result = optimize(h, b, Newton(), Optim.Options(g_tol = 1e-16))
Optim.minimizer(result)  # 3.14159265
result = optimize(h, dh, b, Newton(); inplace=false)  # supplying the gradient
Optim.minimizer(result)  # 3.1415795   # does not improve accuracy


# SOLUTION-3: use NLsolve to find roots

using NLsolve

g(x) = [quadgk(f, a, x[1])[1] - k]   # residual as a 1-element vector
dg(x) = fill(f(x[1]), 1, 1)          # 1×1 Jacobian: d/db ∫_a^b f = f(b)

# Check:
abs(g([π])[1]) < 1e-8   # true

b = [1.0]  # initial guess (exact solution: π ≈ 3.14159265)
nlsolve(g, b; ftol=1e-16, inplace=false)      # 3.14159264
nlsolve(g, dg, b; ftol=1e-16, inplace=false)  # 3.14159263  # same accuracy...


# SOLUTION-4: use DifferentialEquations and interpolation

using DifferentialEquations, Dierckx, Statistics

u0 = 0.0;  tspan = (float(a), 4.0)   # initial condition & integration span
du(u, p, t) = f(t)        # u(t) = ∫_a^t f(x) dx solves u' = f(t), u(a) = 0
prob = ODEProblem(du, u0, tspan)     # define the problem
sol0 = solve(prob, dtmax=0.01)       # solve it

i = argmin(abs.(sol0.u .- k))        # index of the time step closest to the solution
tsol = LinRange(sol0.t[i-1], sol0.t[i+1], 20)   # spline it around the solution
spl = Spline1D(tsol, sol0.(tsol) .- k)
b = mean(roots(spl))    # 3.1415926536

Yes; it looks like the problem with your code is that Optim.optimize is expecting a vector of initial guesses, not a scalar. Try passing [b] instead of b.

However, this is a root-finding problem g(b) = 0. It is almost always a mistake to try to solve a root-finding problem by using an optimization algorithm to minimize |g|². You should use a root-finding algorithm like those in NLsolve.jl.

Moreover, in this case you know the derivative analytically, as I mentioned above, in which case you should certainly provide it to the root-finding algorithm.

However, for a 1d root-finding problem (b is a scalar) with an analytically known derivative, all of the generic root-finding packages will basically boil down to a Newton iteration like the one I wrote. The clever algorithms are mainly for the case where you don’t know the derivative, or have lots of derivatives (a big Jacobian) and can’t afford to compute them all or to invert the Jacobian.

Alternatively, in 1d, especially for smooth functions, there are often more clever algorithms available. For example, you could use ApproxFun.jl to construct a high-accuracy polynomial approximation of your integrand f, call cumsum to compute its integral, and use the ApproxFun.roots function to find all of the places where the integral equals k.
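For the example integrand used elsewhere in this thread, a rough sketch of that idea might look like this (the right endpoint 4 is an arbitrary choice that brackets the answer):

using ApproxFun

F = Fun(x -> exp(x) * sin(x), 0..4)   # Chebyshev approximation of f on [0, 4]
G = cumsum(F)                         # G(x) = ∫_0^x f(t) dt
k = 0.5 * (1 + exp(π))
roots(G - k)                          # every b in [0, 4] with ∫_0^b f = k; ≈ [π]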


Rather than minimize |g|², one could apply g(b)=0 as an equality constraint to an optimization algorithm that supports such constraints, e.g. NLopt.jl or JuMP.jl. I often find it convenient to deal with roots the same way I deal with optimization. Sometimes you can leave out an objective function altogether, or just specify a constant like 0, to be “minimized” subject to the constraints (to some tolerance). However, I’ve never compared performance to a root finder.
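A sketch of that idea with JuMP (Ipopt is just one choice of NLP solver here, register is JuMP’s way of exposing a user-defined function, and the derivative passed to it is f itself, per the chain-rule fact above; f, a, k as in the example elsewhere in the thread):

using JuMP, Ipopt, QuadGK

f(x) = exp(x) * sin(x)
a, k = 0, 0.5 * (1 + exp(π))

G(b) = quadgk(f, a, b)[1] - k   # residual
dG(b) = f(b)                    # its derivative

model = Model(Ipopt.Optimizer)
register(model, :G, 1, G, dG)      # user-defined function with its derivative
@variable(model, x, start = 1.0)
@NLconstraint(model, G(x) == 0)    # equality constraint; no objective needed
optimize!(model)
value(x)                           # ≈ π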


@stevengj, thank you for your top quality advice.

I tried to follow it as best I could and have updated the code above with three different methods: your Newton iteration, Optim, and NLsolve.
Including the derivative did not seem to help much, but any blame is surely on me. BR

*PS: the code above now runs, but better not to shake it too much…


Yes, solving the optimization problem \min_x 1 subject to the constraint g(x)=0 is, of course, exactly equivalent to the root-finding problem g(x)=0, and a sufficiently clever NLP algorithm will perform steps essentially equivalent to Newton steps (perhaps limited by a trust region). They almost certainly aren’t going to be better than Newton steps, however, and in many cases will be worse (because they may spend a lot of effort doing computational steps that are only needed for more general NLPs).

In a single-variable problem where the derivative is known, if you aren’t going to do something more clever (e.g. Chebyshev fits, complex analysis, continued fractions, etc.) you might as well just use Newton. (Either implementing it yourself — it’s just a few lines — or calling a canned implementation like the one in Roots.jl).
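For instance, with the canned Newton from Roots.jl (find_zero accepts a (function, derivative) tuple for Roots.Newton; f, a, k as in the example above):

using QuadGK, Roots

g(b) = quadgk(f, a, b)[1] - k
find_zero((g, f), 1.0, Roots.Newton())   # the derivative of g is f itself; ≈ π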


There is another way. Define R as

$$\int_a^{R(x)}f=x$$

you get

$$\frac{d}{dx}R(x) = \frac{1}{f(R(x))},\quad R(a)=0 $$

You can use an ODE solver to get b=R(k). This is the main trick of my paper on PDMP.
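A minimal sketch of this trick, with a hypothetical strictly positive integrand (1/f must stay finite along the way, which the example’s exp(x)sin(x) does not satisfy) and the initial condition R(0) = a:

using DifferentialEquations

fpos(x) = 2 + cos(x)                  # hypothetical integrand with f > 0
a, k = 0.0, 5.0
dR(R, p, x) = 1 / fpos(R)             # R'(x) = 1/f(R(x))
prob = ODEProblem(dR, a, (0.0, k))    # initial condition R(0) = a
b = solve(prob)(k)                    # b = R(k), i.e. ∫_a^b fpos = k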

@rveltz, is it only me, or is your LaTeX not displaying properly?


It gives:

[screenshot of the rendered equations]


@rveltz, mathematically this looks much like @stevengj’s solution above. The difference is that the differential equation solver will not use Newton’s method?
I would like to try this when time allows, but I would be surprised if it beat Newton in terms of accuracy and speed of convergence.

I would say it is better in that you don’t need to recompute $\int_a^b f$ when you know that the solution $b_{sol}$ is greater (or smaller) than the current b. You “waste” less computation.


Yeah, it seems to me that there is a lot of redundancy in re-computing the full integral at each step. It should be enough to compute the integral only over the ‘correction interval’. (One should probably take some care with accumulated errors, however.)
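A rough sketch of this incremental variant of the Newton loop above (same f, a, k as before; the running integral I is only updated over the correction interval):

using QuadGK

b = 1.0                     # initial guess
I = quadgk(f, a, b)[1]      # running value of ∫_a^b f
tol = sqrt(eps())
while true
    global b, I
    δb = (I - k) / f(b)               # Newton step
    I -= quadgk(f, b - δb, b)[1]      # integrate only over the correction interval
    b -= δb                           # (errors in I accumulate, as noted above)
    abs(δb) ≤ tol * abs(b) && break
end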


@rveltz, in your nice solution, how do you handle cases with singularities, where the function f() has zeros and 1/f() explodes to ∞?

Hmm, I don’t know! I guess the Newton solver will suffer as well. I’d say you have to use a stiff ODE solver that can handle finite-time blow-up.


The topic “Solve equation” doesn’t do justice to the interesting things discussed here. @carloslesmes can I suggest editing the topic title to something more specific? For example, “Solve for upper/lower bound of a numerical definite integral” or some such.


Am I looking at this crazy, or wouldn’t the correct boundary condition be R(0) = a instead of R(a) = 0?


Nope, you are right! Thank you.

Newton did not suffer for f(x) = exp(x)sin(x) in the example above, which has a zero at the solution π. However, someone else is suffering to get your method working for that f(x) :sweat: