Looking for a quadrature rule

Hi,

I need to compute thousands of integrals of the form

\int_{0}^{t} \frac{e^{as}}{\sum_{i=1}^n b_i e^{c_i s}}\, ds.

I know that:

  • the time frame [0,t] is typically small (say t \le 1)
  • the rates a, c_1, \dots, c_n are of order 1e-4 to 1e-5
  • the weights b_1, \dots, b_n are of order 1.

For the moment, I use the very dumb but fast forward scheme

\int_0^t f(s)\,ds \approx t \, f(0).

Is there something smarter that I could do without blowing up the computational cost?

You can integrate the expression in your post exactly. I expect there is a missing t in the denominator?

Yep, there was; just fixed it. I am also having trouble getting the integral to display centered, I do not know why.

How fast and accurate do you need the solution to be? Would a low-order Gauss–Legendre rule work?
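For concreteness, a two-point Gauss–Legendre rule on [0, t] costs only two evaluations of the integrand. A minimal sketch, using the integrand from the original post with made-up parameter values of the stated orders of magnitude:

```julia
# Illustrative parameters (orders of magnitude from the original post).
a = 3e-4
b = [1.0, 1.5, 2.3, 0.6]
c = 1e-4 .* b

# The integrand from the original post.
f(s) = exp(a*s) / sum(b[i] * exp(c[i]*s) for i in eachindex(b))

# Two-point Gauss-Legendre rule on [0, t]: nodes at t/2 ± t/(2√3),
# equal weights t/2; exact for polynomials up to degree 3.
function gauss2(f, t)
    h = t / 2
    x = h / sqrt(3)
    return h * (f(h - x) + f(h + x))
end

@show gauss2(f, 1.0)
```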


Maybe; thanks for the link, I’ll try it.

I do not need much accuracy; I think that an error of order 0.001 would be largely good enough. I am more looking around than certain of what I want here…

If the rates are low and the time domain is small, you can also do a Taylor expansion of the exponentials, and integrate the result exactly.


Taylor expansions of the exponentials would give me a rational function, right? And rational functions can be integrated exactly?

Yes. So if I’m not mistaken, using a first-order expansion, the integral
becomes

I = \int_0^t\frac{1+at'}{B+Dt'}\,dt'

where B=\sum_i b_i and D = \sum_i b_i c_i.
This can be integrated exactly, yielding

I = \frac{D \left(a t+\log \left(\frac{D t}{B}+1\right)\right)+a B \log \left(\frac{B}{B+D t}\right)}{D^2}
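(A quick numerical sanity check of this closed form, comparing it against a brute-force midpoint sum of the expanded integrand; parameter values are made up, of the orders stated in the original post:)

```julia
# Illustrative parameters.
a = 3e-4
b = [1.0, 1.5, 2.3, 0.6]
c = 1e-4 .* b
B = sum(b)          # B = Σᵢ bᵢ
D = sum(b .* c)     # D = Σᵢ bᵢcᵢ

# The closed form of ∫₀ᵗ (1 + a*s)/(B + D*s) ds given above.
closed_form(t) = (D * (a*t + log(D*t/B + 1)) + a*B*log(B / (B + D*t))) / D^2

# Brute-force reference: midpoint sum of the first-order integrand.
g(s) = (1 + a*s) / (B + D*s)
ref(t; n = 100_000) = sum(g((k - 0.5) * t / n) for k in 1:n) * t / n

@show abs(closed_form(1.0) - ref(1.0))
```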


Wow. Indeed this looks quite a bit stronger than my zeroth-order approximation. Could the same thing be done at second order too?

Edit: WolframAlpha gives

[two antiderivative expressions, originally posted as images, omitted]

But this is starting to be a bit messy ^^

I’ll try to implement the first-order one to see how it behaves. If it is not successful enough, I’ll try a low-order quadrature as you pointed out. Thanks :slight_smile:


EDIT: ah I see you beat me to it. :slight_smile:

Yes, but the integral becomes more hairy. Expanding both exponentials to second order, the integrand becomes \left(1+at'+\tfrac{a^2 t'^2}{2}\right)\big/\left(B+Dt'+\tfrac{E t'^2}{2}\right), and Mathematica says
I\approx \frac{\frac{2 \left(a^2 \left(D^2-B E\right)-a D E+E^2\right) \tan ^{-1}\left(\frac{D+E t}{\sqrt{2 B E-D^2}}\right)}{\sqrt{2 B E-D^2}}-\frac{2 \left(a^2 \left(D^2-B E\right)-a D E+E^2\right) \tan ^{-1}\left(\frac{D}{\sqrt{2 B E-D^2}}\right)}{\sqrt{2 B E-D^2}}+a^2 E t+a (E-a D) \log \left(2 B+2 D t+E t^2\right)+a \log (B) (a D-E)+a \log (2) (a D-E)}{E^2},
where E = \sum_i b_i c_i^2.
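(For anyone wanting to check this: a numerical comparison of the closed form against a brute-force integration of the second-order expansion; parameter values are made up, of the orders stated in the original post:)

```julia
# Illustrative parameters.
a = 3e-4
b = [1.0, 1.5, 2.3, 0.6]
c = 1e-4 .* b
B = sum(b)            # B = Σᵢ bᵢ
D = sum(b .* c)       # D = Σᵢ bᵢcᵢ
E = sum(b .* c.^2)    # E = Σᵢ bᵢcᵢ²

S = sqrt(2B*E - D^2)                  # real for these parameter values
K = a^2*(D^2 - B*E) - a*D*E + E^2

# The closed form given above, with the two log(·)(aD − E) terms grouped.
I2(t) = (2K*(atan((D + E*t)/S) - atan(D/S))/S +
         a^2*E*t + a*(E - a*D)*log(2B + 2D*t + E*t^2) +
         a*(a*D - E)*(log(B) + log(2))) / E^2

# Brute-force reference: midpoint sum of the second-order integrand.
g(s) = (1 + a*s + a^2*s^2/2) / (B + D*s + E*s^2/2)
ref(t; n = 100_000) = sum(g((k - 0.5) * t / n) for k in 1:n) * t / n

@show abs(I2(1.0) - ref(1.0))
```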

You would probably be better off doing a Taylor expansion of the whole integrand at this point.

Maybe indeed I should Taylor-expand the whole integrand. Could your Mathematica do this for me? WolframAlpha does not want to :stuck_out_tongue:

Or maybe I should go with Symbolics.jl + TaylorDiff.jl

So numerically we are solving a problem of the form f_t=I(f,t) for t\in [0,t']?
Then you may check OrdinaryDiffEq.jl: ODE solvers and utilities · OrdinaryDiffEq.jl.
In terms of efficiency, to achieve the same accuracy, higher-order integrators are usually cheaper than the forward Euler scheme.

Yes, but the integrand is piecewise defined and has a different shape on every time step, which is why I ended up writing the scheme manually.

I am looking for a better approximation than t * f(0), not necessarily a highly accurate one: I do not want the runtime to explode.

Symbolics.jl can do all of this as well, but here you go


Using ForwardDiff, I could directly do it with the following:

\int_0^t f(s) ds = \sum_{k=0}^\infty f^{(k)}(0) \frac{t^{k+1}}{(k+1)!} \approx tf(0) + \frac{t^2}{2} f^{(1)}(0)

which would probably be numerically efficient, and gives me one more order than the previous tf(0) approximation.
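(A sketch of this check without AD: here f'(0) even has a simple closed form, f(0) = 1/B and f'(0) = (aB - D)/B^2 with B = Σᵢ bᵢ and D = Σᵢ bᵢcᵢ, so the truncated series can be written out by hand. Parameter values are made up, of the orders stated in the original post:)

```julia
# Illustrative parameters.
a = 3e-4
b = [1.0, 1.5, 2.3, 0.6]
c = 1e-4 .* b
B = sum(b)
D = sum(b .* c)

f(s) = exp(a*s) / sum(b[i] * exp(c[i]*s) for i in eachindex(b))

# Truncated series t*f(0) + t^2/2*f'(0), with f(0) = 1/B, f'(0) = (a*B - D)/B^2.
first_order(t) = t / B + t^2 / 2 * (a*B - D) / B^2

# Brute-force reference: midpoint sum of the exact integrand.
ref(t; n = 100_000) = sum(f((k - 0.5) * t / n) for k in 1:n) * t / n

@show abs(first_order(1.0) - ref(1.0))
@show abs(1.0 * f(0) - ref(1.0))   # the zeroth-order scheme, for comparison
```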


Note that if you are using this as a way of solving a differential equation, the scheme may become unstable (just like forward Euler does). If so, trying a midpoint or backward rule may work better.
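The three one-point rules differ only in where the integrand is sampled; a quick sketch of the comparison (made-up parameters of the orders stated in the original post):

```julia
# Illustrative parameters.
a = 3e-4
b = [1.0, 1.5, 2.3, 0.6]
c = 1e-4 .* b
f(s) = exp(a*s) / sum(b[i] * exp(c[i]*s) for i in eachindex(b))

forward(t)  = t * f(0)     # explicit; O(t^2) local error, can be unstable
midpoint(t) = t * f(t/2)   # O(t^3) local error
backward(t) = t * f(t)     # implicit flavor; O(t^2), damps rather than amplifies

@show forward(1.0) midpoint(1.0) backward(1.0)
```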


Hello,
I think most of the computation time for such a problem lies in evaluating the denominator of the integrand. If an approximation of the denominator works, doing thousands of integrals should not be a problem. But I do not know the degree of accuracy needed.

Here is an approximation tested with some typical values of a, b, and c:

using QuadGK, BenchmarkTools

a = 0.0003
b = [1.0, 1.5, 2.3, 0.6]
c = 1.e-4 * b

function exact_integrand(t, a, b, c)
    return exp(a*t) / sum(b[i] * exp(c[i]*t) for i in eachindex(b, c))
end

# First-order Taylor approximation of exp
approx_exp(x) = 1 + x

function approx_integrand(t, a, b, c)
    return approx_exp(a*t) / sum(b[i] * approx_exp(c[i]*t) for i in eachindex(b, c))
end

If we plot the exact and approximate integrands we see the following:
[plot comparing the exact and approximate integrands over [0, 1]]
The approximation looks very good and both traces appear to be simple linear functions. So we can try a one-point quadrature rule that samples the integrand at the center of the integration interval:

ts = 0:0.01:1
ints_exact = [quadgk(t -> exact_integrand(t,a,b,c), 0, t)[1] for t in ts]
ints_approx = [t * approx_integrand(t/2,a,b,c) for t in ts]
@show maximum(abs, ints_exact - ints_approx)

which produces

maximum(abs, ints_exact - ints_approx) = 1.4760460631535466e-9

The timing for the approximate and exact integrands:

julia> @btime approx_integrand(0.95, $a, $b, $c)
  7.800 ns (0 allocations: 0 bytes)
0.1852089640409081

julia> @btime exact_integrand(0.95, $a, $b, $c)
  31.194 ns (0 allocations: 0 bytes)
0.1852089689678891