# Higher accuracy in numerical cumulative integration

There are some threads about numerical cumulative integration like this and this. However, I have a more basic question.

I am interested in performing the integral S(x; a) = \int_a^x f(x') dx' where x \le b. Note that f(x) is always positive, so S(x; a) is monotonically increasing. f(x) has the form f(x) = \sqrt{a_1 - a_2 x^4 + a_3 x^2}.

After I have S(x; a), I want to find x(S; a), namely, I want to invert it.
I understand that I can do the integral numerically for many values of x, and then solve the equation S(x; a) = S_* for x to build x(S; a).

Another complication is that the integration limits a, b are actually functions of some other parameter c, but I am guaranteed that S(b(c); a(c)) = \int_{a(c)}^{b(c)} f(x') dx' always has the same value; that is, the total integral is a constant.
That means that I have a family of functions S(x; c), each monotonically increasing from 0 to the same fixed value S_0 = S(b(c); a(c)).
And now I want to invert it.

I am not sure what is the best way to get high accuracy.
I tried the idea from @stevengj that I read here, which @tobydriscoll implemented here: use ApproxFun to approximate the function f(x) and then integrate the approximation.

```julia
using ApproxFun

f(x) = sqrt(a1 - a2*x^4 + a3*x^2)   # coefficients a1, a2, a3 set elsewhere

a = 1
b = 2
f_approx = Fun(x -> f(x), a..b)
Stmp = integrate(f_approx)          # an antiderivative of f
S = Stmp - Stmp(a)                  # fix the constant so that S(a) = 0
```


Then I used Newton’s Method to invert the function S(x; a) to x(S;a).
The problem is that for some values of c, the total integral S(b(c); a(c)) = \int_{a(c)}^{b(c)} f(x') dx' comes out smaller than my initially calculated S_0, so for targets near S_0 the root I am looking for does not exist.
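For concreteness, the inversion step can be sketched as follows: since S'(x) = f(x) is known, each Newton step is cheap. The functions `S` and `f` and the starting point below are placeholder examples, not the OP's actual problem.

```julia
# Newton's method to solve S(x) = Starget for x, using S'(x) = f(x).
function invert_S(S, f, Starget, x0; tol = 1e-12, maxiter = 50)
    x = x0
    for _ in 1:maxiter
        dx = (S(x) - Starget) / f(x)   # Newton step: (S(x) - Starget)/S'(x)
        x -= dx
        abs(dx) < tol && return x
    end
    return x
end

# Example: f(x) = 2x gives S(x; 1) = x^2 - 1; inverting S = 3 should give x = 2
invert_S(x -> x^2 - 1, x -> 2x, 3.0, 1.5)   # ≈ 2.0
```

Note that this fails exactly as described when `Starget` exceeds the numerically computed total integral, since then no root exists in [a, b].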

Are there any ideas on how to improve this calculation?
Maybe I should use quadgk to calculate the integral instead of ApproxFun?

So if you know S_0, why not solve

{dx \over dS} = {1 \over f(x)}, \qquad x(0) = a,

by integrating from 0 to S_0 and checking that at the endpoint you get x(S_0) = b? If not, increase the accuracy requirement of the integrator.
Maybe not the most efficient way, but in this way you can enforce your integral constraint and obtain the solution x(S) directly.
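A minimal sketch of this ODE-based inversion, using a hand-rolled fixed-step RK4 integrator so the example is self-contained (in practice an adaptive solver, e.g. from OrdinaryDiffEq.jl, with a tight tolerance would be preferable; `f`, `a`, and `S0` below are placeholder examples):

```julia
# Solve dx/dS = 1/f(x), x(0) = a, from S = 0 to S = S0 with classical RK4.
function invert_by_ode(f, a, S0; n = 1000)
    h = S0 / n
    x = float(a)
    g(x) = 1 / f(x)              # right-hand side of dx/dS = 1/f(x)
    for _ in 1:n
        k1 = g(x)
        k2 = g(x + h/2 * k1)
        k3 = g(x + h/2 * k2)
        k4 = g(x + h * k3)
        x += h/6 * (k1 + 2k2 + 2k3 + k4)
    end
    return x                     # ≈ x(S0); check that it matches b
end

# Example: f(x) = 2x gives S(x; 1) = x^2 - 1, so x(S) = sqrt(S + 1).
invert_by_ode(x -> 2x, 1.0, 3.0)   # ≈ 2.0
```

The endpoint check x(S_0) ≈ b then doubles as a global accuracy diagnostic.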


A basic difficulty here may be that your function has a square root singularity, which may make it harder to numerically integrate accurately unless you build this singularity into the quadrature scheme. Does the argument of your square root ever go through zero in your integration interval? (Of course, that would make the integrand complex.)


Yes, I see that is a difficulty.

The argument of the sqrt always stays positive or zero. However, I am interested precisely in values of x where f(x) vanishes. I guess that this is a problem, but I would still like better accuracy there.

That’s an interesting suggestion! I will try to implement that.

Places where it hits zero are still a problem for convergence rates. At the very least, you should break the integral into subintervals where such roots occur. Then an adaptive scheme like QuadGK will have an easier time refining the integral near the singularity. (And I would tend to favor an h-adaptive scheme like QuadGK over a p-adaptive scheme like ApproxFun unless you build the nature of the singularity into the function approximations, e.g. by using Gauss–Jacobi quadrature as I suggested in another thread.)
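As a sketch of the breakpoint syntax (the integrand here is a hypothetical stand-in whose square-root argument vanishes at x = ±1, not the OP's actual f):

```julia
using QuadGK

# Hypothetical integrand with square-root behavior at the endpoints x = ±1.
f(x) = sqrt(1 - x^4)

# Any interior points passed between the endpoints become subinterval
# boundaries; for a root x0 strictly inside (a, b) you would pass x0 itself,
# so the adaptive refinement concentrates around the singularity.
val, err = quadgk(f, -1, 0, 1)
# val ≈ 1.74804
```

QuadGK's Gauss–Kronrod rules never evaluate the integrand at the subinterval endpoints, which is another reason endpoint singularities are handled gracefully once they coincide with breakpoints.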


I think in the OP's case (the function f stays positive or zero), the zero can only happen at x = a or x = b; otherwise the argument of the square root would be negative somewhere within the domain (assuming a_2 is positive)?

True!

Thanks!!

So in my case, due to this singularity, I want to use QuadGK and not the ApproxFun method.

So I read your post there – it was very good. Unfortunately, I did not see that my type of singularity, f(x) = \sqrt{a_1 - a_2 x^4 + a_3 x^2}, is covered by those methods, because of the powers of x.

So I guess that the best option is to just use quadgk and push for the best accuracy I can get.

You might also try symbolic integration: Mathematica finds an expression for the indefinite integral, and maybe with your specific constants you can do the work analytically.


It is included in these methods. What matters is not the exact form of the integrand, but the form of the singularity. In particular, suppose that the argument of the square root goes to zero at the lower endpoint x=a. In this case, the integrand is of the form
\sqrt{p(x) (x - a)} = g(x) \sqrt{x - a}, where p(x) is a (cubic) polynomial and g(x) = \sqrt{p(x)} is non-singular at the endpoint. That is, the singularity is only in the \sqrt{x - a} weight, which is handled by Gauss–Jacobi quadrature.
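As a sketch of how this looks in Julia with FastGaussQuadrature.jl (the smooth factor `g` below is a placeholder, not the OP's actual function):

```julia
using FastGaussQuadrature

# ∫_a^b g(x) √(x - a) dx via Gauss–Jacobi nodes/weights for the weight
# (1+t)^{1/2} on [-1, 1].  Substituting x = a + (b-a)(t+1)/2 gives
# ∫ = ((b-a)/2)^{3/2} ∫_{-1}^{1} g(x(t)) (1+t)^{1/2} dt.
function int_sqrt_weight(g, a, b; n = 20)
    t, w = gaussjacobi(n, 0.0, 0.5)         # weight (1-t)^0 (1+t)^0.5
    x = a .+ (b - a) .* (t .+ 1) ./ 2       # map [-1, 1] → [a, b]
    return ((b - a) / 2)^1.5 * sum(w .* g.(x))
end

# Example: ∫_0^1 √x dx = 2/3 (take g ≡ 1)
int_sqrt_weight(x -> 1.0, 0.0, 1.0)   # ≈ 0.6666666666666666
```

Because the √ weight is built into the quadrature rule, convergence in n is spectral even though the integrand itself is not smooth at x = a.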


I tried that. Mathematica indeed gives an analytic expression (in terms of elliptic integral functions), but for some reason it comes out complex (even when I put in all the proper assumptions). Moreover, at the singular points, namely x_0 such that f(x_0)=0, the expression evaluates to infinity. So I first need to understand why it comes out as infinity.

@stevengj Perfect explanation! thanks!
So I guess that this means I need to partition my interval in two, to handle the left and right endpoints separately.

I will try that.

It’s really easy in ApproxFun to factor out a sqrt singularity using `JacobiWeight`.


Thanks @dlfivefifty !
Could you maybe point me to an example that shows how to use the weights in ApproxFun?

```julia
julia> using ApproxFun

julia> f = Fun(x -> sqrt(x+1)*cos(x), JacobiWeight(0.5,0));

julia> F = cumsum(f)
Fun((1+x)^1.5[Chebyshev()], [0.527618, 0.104918, -0.0649745, -0.00159742, 0.000894048, 1.02892e-5, -5.55621e-6, -3.72868e-8, 1.97678e-8, 8.70474e-11, -4.56323e-11, -1.42217e-13, 7.40716e-14])

julia> F(0.1)
0.6957793344826042
```
