Discrete inverse Laplace transform (constraints and regularisation)

I want to conduct an Inverse Laplace Transform (ILT) on sampled data.
I am aware of InverseLaplace.jl, but sadly it doesn’t work well on discrete (sampled) data, due to a missing interface and missing regularization. So I decided to give my terribly limited skills in numerics a try and do this on my own (I appreciate any advice).

The problem is that I want to solve

y(t) = \int_0^\infty F(s)e^{-ts}~\mathrm{d}s

for F(s). A discretized version would be:

y_i = y(t_i) = \sum_{j=1}^{N} F(s_j)e^{-t_is_j}~\Delta s_j

with \Delta s_j = s_{j+1}-s_j, so that we can write this as a matrix equation

y = KF \quad K_{ij}= \Delta s_j e^{-t_is_j}

And I can append regularization rows to K and y:

\hat{K}= \begin{pmatrix} K\\ \alpha I \end{pmatrix} \hat{y}= \begin{pmatrix} y\\ 0 \end{pmatrix}

and solve this by a simple least-squares backslash, F = \hat{K}\backslash\hat{y}.
This is what I came up with so far:

# Regularized inverse Laplace transform

using LinearAlgebra  # provides the identity operator I

function rilt(t, y, smin, smax, N; α=1)

    # log-spaced sampling of the rate axis (N+1 grid points, N intervals)
    s = smin * (smax / smin) .^ range(0, 1, length=N + 1)
    ds = diff(s)
    # interval midpoints (input noise makes the exact quadrature nodes unimportant)
    sc = (s[1:end - 1] + s[2:end]) / 2

    # append a zero rate (constant term) to capture a y-offset in the input data
    push!(ds, 1)
    push!(sc, 0)

    # kernel
    a(i, j) = exp(-t[i] * sc[j]) * ds[j]

    # assemble a(i,j) into a matrix
    A = [a(i, j) for i = eachindex(t), j = eachindex(sc)]

    # append rows for Tikhonov (identity) regularization
    L = Matrix{Float64}(I(length(sc)))
    AR = vcat(A, α * L)

    # pad y with zeros to match the regularization rows
    yr = vcat(y, zeros(size(L, 1)))

    # least-squares solve (backslash on the tall, stacked system)
    sy = AR \ yr

    # return the rate grid and the recovered spectrum, dropping the offset term
    (sc[1:N], sy[1:N])
end

A short test shows that it actually works, more or less:

using QuadGK
using Plots

# Gaussian-shaped rate distribution and the decay it generates (the test data)
dl(s; d=1, σ=1) = 1 / sqrt(2 * π * σ) * exp(-(s - d)^2 / (2 * σ^2))
l(t; d=1, σ=1) = quadgk(s -> (dl(s; d=2 * d, σ=σ) + dl(s; d=d, σ=σ)) * exp(-t * s), 0, Inf, rtol=1e-6)[1]

# generate test data:
t = 0:0.1:10
y = l.(t; d=1, σ=0.1)
noise = rand(length(y)) * 0.0001
s = 10.0 .^ (-2:0.01:2)  # rate grid for plotting the ground truth

plot(s,dl.(s;d=1,σ=0.1)+dl.(s;d=2,σ=0.1),label="truth",xaxis=:log)
plot!(rilt(t, y, 0.01, 100, 250, α=0)...,label="ILT w/o noise",xlabel="s")
plot!(rilt(t, y .+ noise, 0.01, 100, 250 , α=2e-5)...,xaxis=:log,label="ILT w noise + reg")

[figure “reg”: reconstructed rate spectra vs. truth]

tl;dr
Now the actual questions (probably not the last ones):

I’m pretty sure there are a lot of publications out there on my questions, but I have no overview of this area and am often completely lost in the face of all the topic-related vocabulary.

Of course I would also be happy to be pointed to a package that does all this for me, or to whatever I have overlooked so far… thanks!


It would be interesting to see how your results compare to InverseLaplace.jl, and eventually to submit a PR for package enhancement.

FYI, the mpmath Python library by Fredrik Johansson and others implements code for different techniques in inverselaplace.py and provides full references.

I do not know whether it is all right to translate any code from there to Julia, even when fully acknowledging the authors, though.


It would be interesting to see how your results compare to InverseLaplace.jl, and eventually to submit a PR for package enhancement.

As expected, the results depend on the accuracy of QuadGK.jl (and when using InverseLaplace.Talbot, QuadGK errors out):

using InverseLaplace

ftGWR = GWR(s -> l(s; d=1, σ=0.1));
ftWeeks = Weeks(s -> l(s; d=1, σ=0.1));
plot!(s, ftGWR.(s), label="GWR w/o noise")
plot!(s, ftWeeks.(s), label="Weeks w/o noise")

[figure “reg_comp”: discrete regularized ILT vs. GWR and Weeks]

However, InverseLaplace.jl does a perfect job when analytical expressions are available:

s = 10.0 .^ (-8:0.1:10)
F(s) = exp(-s) * cos(s)           # ground truth
t = 10.0 .^ (-5:0.01:10)
f(t) = (t + 1) / ((t + 1)^2 + 1)  # its forward Laplace transform (analytic)

ftGWR = GWR(s -> f(s));
ftWeeks = Weeks(s -> f(s));
ftTalbot = Talbot(s ->f(s));


plot(t,f.(t),yaxis=:log)
plot(s,F.(s),label="truth",xaxis=:log,ylims=(-0.5,2))
plot!(rilt(t,f.(t),1e-8,1e5,300,α=3e-8);label="Discrete regularized")
plot!(s,ftGWR.(s),label="InverseLaplace.GWR")
plot!(s,ftTalbot.(s),label="InverseLaplace.Talbot")
plot!(s,ftWeeks.(s),label="InverseLaplace.Weeks")

[figure “reg_comp_an”: comparison on the analytic example]

So it really depends on the use case which method fits best.


The Weeks method in the InverseLaplace.jl package can perform better than shown in your first example by selecting the number of terms Nterms and better input parameters sigma and b:

using QuadGK, InverseLaplace, Plots;gr()

Fs(s;d=1,σ=1)=  1/sqrt(2*π*σ)*exp(-(s-d)^2/(2*σ^2))
yt(t;d=1,σ=1) = quadgk(s-> (Fs(s;d=2*d,σ=σ)+Fs(s;d=d,σ=σ))*exp(-t*s),0,Inf)[1] 

s = exp.(LinRange(log(1e-2),log(100),1000))
FsWeeks = Weeks(s -> yt(s;d=1,σ=0.1), 100, 0.1, 10);  # from InverseLaplace.jl
plot(s,FsWeeks.(s),label="Weeks (InverseLaplace.jl)",ylim=(-0.5,2),xaxis=:log,lc=:red,lw=2)
plot!(s,Fs.(s;d=2,σ=0.1) + Fs.(s;d=1,σ=0.1), label="Truth",lc=:black,ls=:dash,xlabel="s")

[figure “InverseLaplace_Weeks”: tuned Weeks method vs. truth]

PS: what seems a bit confusing in your problem is that the usual t and s variables of the Laplace transform definition are swapped.
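For reference, the textbook convention is

F(s) = \int_0^\infty f(t)e^{-st}~\mathrm{d}t

i.e. a function of time t is transformed into a function of the rate variable s, and numerical inverse-Laplace methods recover f(t) from F(s); in your notation those two roles are interchanged.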


Oh… I really should have RTFM… Weeks is indeed very powerful and even with quadgk(...;rtol=0.01) it can still do it. But is there any chance to apply this to discrete data? I tried it with Interpolations.jl, but then everything explodes.

PS: what seems a bit confusing in your problem is that the usual t and s variables of the Laplace transform definition are swapped.

This is exactly what I think as a physicist when confronted with the formal definition. For me, the Laplace transform calculates the decay as a function of time t from a spectrum of rates s. I want to know the spectrum, because the decay is what I am measuring. Can you enlighten me as to why it is written the other way around everywhere?

If you are working with multiple exponential decays, with decay rates s and amplitudes F(s), then your integral over s makes perfect sense as a sum of all contributions. However, this is not the case for oscillatory problems, where a connection is made between the Laplace and Fourier transforms.
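(Concretely, evaluating the transform on the imaginary axis, s = i\omega, yields the Fourier transform of the signal, which is why the oscillatory case behaves so differently.)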

This reference (Istratov and Vyvenko, 1999) covers your problem in detail, and it advises against using an inverse Laplace transform approach. The solution does not seem to be easy; quote: “it may not be unique, may not exist and may not depend continuously on the data.”


Thanks for this very good read, but now I feel like I have to defend myself a bit.

it may not be unique, may not exist and may not depend continuously on the data.

does not apply to my specific task.

What I am actually doing is simulating voltage decay transients of a special kind of semiconductor diode by numerically solving a set of PDEs. Now I want to compare the results of the simulations with real measurements. When it comes down to the real experiment, I fully agree that it is, politely said, difficult to do exponential analysis with an SNR < 10 under the relevant conditions. For my simulations, however, this is different, because the SNR is only limited by the computational power I can throw at my equations. The technical problem is that the solutions are only available at discrete timestamps, and interpolation leads to the problems mentioned above.

[…] it advises against using an inverse Laplace transform approach

But the reason why they advise this is:
quote: “The programs CONTIN and FTIKREG, mentioned above, contain about 5000 lines of the FORTRAN code. It is a very time-consuming task to write such a program, and we strongly recommend using one of the available programs rather than to write it oneself”
Which is OK, but I want to understand things… and do it in Julia :slight_smile:
And they close the section with “two exponentials with \tau_1/\tau_2=5 can be distinguished for SNR = 15”.

PS: just to add to the reference you gave me: what is even more problematic than fitting an arbitrary number of exponentials to a noisy decay is that the underlying mechanism often has no strictly exponential dynamics at all, e.g. \frac{\mathrm{d}n}{\mathrm{d}t} \propto n^2.
However, I could name countless peer-reviewed articles that do not take this circumstance into account and still fit 2 or 3 exponentials to such a decay.
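(For what it’s worth: solving \frac{\mathrm{d}n}{\mathrm{d}t} = -k n^2 gives n(t) = n_0/(1 + k n_0 t), a hyperbolic decay that no finite sum of exponentials reproduces exactly.)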

But again, the review you gave me made it into my list of “read first, then ask” articles.

Turns out that Ipopt.jl does the trick. This way I could apply the constraints via the JuMP interface. When additionally using the second-derivative operator for regularization, it actually works pretty well on discrete data, even with lots of noise:

[figure “reg_constrained”: constrained, regularized ILT on noisy discrete data]

using LinearAlgebra
using JuMP, Ipopt 
using QuadGK
using Plots

function rilt(t, y, smin, smax, N; α=1)

    # log-spaced sampling of the rate axis (N+1 grid points, N intervals)
    s = smin * (smax / smin) .^ range(0, 1, length=N + 1)
    ds = diff(s)
    # interval midpoints (input noise makes the exact quadrature nodes unimportant)
    sc = (s[1:end - 1] + s[2:end]) / 2

    # append a zero rate (constant term) to capture a y-offset in the input data
    push!(ds, 1)
    push!(sc, 0)

    # kernel
    a(i, j) = exp(-t[i] * sc[j]) * ds[j]

    # assemble a(i,j) into a matrix
    A = [a(i, j) for i = eachindex(t), j = eachindex(sc)]

    # rows for regularization
    #   (plain Tikhonov alternative: L = Matrix{Float64}(I(length(sc))))
    # second-derivative operator with smooth transitions to 0 at the edges:
    # row i penalizes (x[i-2] - 2x[i-1] + x[i])/ds², so the extra first and
    # last rows act as if the spectrum continues with zeros beyond the grid
    δ(i, j) = i == j ? 1 : 0
    L = [(δ(i, j) - 2 * δ(i, j + 1) + δ(i, j + 2)) / ds[j]^2 for i = 1:N+2, j = 1:N+1]
    L[:, end] .= 0  # do not regularize the y-offset term
    
    AR = vcat(A, α * L)
    # pad y with zeros to match the regularization rows
    yr = vcat(y, zeros(size(L, 1)))

    # nonnegative least squares via JuMP/Ipopt: min ‖AR*x - yr‖² s.t. x ≥ 0
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, x[1:N+1])
    @constraint(model, [i = 1:N], x[i] >= 0)  # the offset term x[N+1] stays free
    @objective(model, Min, sum((AR * x - yr) .^ 2))
    optimize!(model)
    return (sc[1:N], value.(x)[1:N])
end

dl(s; d=1, σ=1) = 1 / sqrt(2 * π * σ) * exp(-(s - d)^2 / (2 * σ^2))
l(t; d=1, σ=1) = quadgk(s -> (dl(s; d=4 * d, σ=σ) + dl(s; d=d, σ=σ)) * exp(-t * s), 0, Inf, rtol=1e-8)[1]

# generate test data:
t = 10.0 .^ (-4:0.01:2)
y = l.(t; d=1, σ=0.1)

noise = (rand(length(y)) .- 0.5) * 0.01
s = 10.0 .^ (-1:0.001:1)
plot(s, dl.(s; d=1, σ=0.1) + dl.(s; d=4, σ=0.1), label="truth", xaxis=:log, ylims=(-0.2, 2))
plot!(rilt(t, y, 0.1, 10, 200, α=2e-7)..., label="ILT w/o noise", xlabel="s")
plot!(rilt(t, y .+ noise .+ 1.1, 0.1, 10, 200, α=2e-7)..., xaxis=:log, label="ILT w/ noise")

The regularization parameter still needs to be set manually, but it shouldn’t be too hard to automate this.
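For example, here is a minimal sketch of Morozov’s discrepancy principle, assuming the noise level δ ≈ ‖noise‖₂ is roughly known; the helper name discrepancy_alpha and the generic (A, L, y) interface are purely illustrative, not from any package:

using LinearAlgebra

# Hypothetical helper: sweep α upwards in min ‖A*x - y‖² + ‖α*L*x‖² and stop
# once the data residual ‖A*x - y‖ first reaches the noise level δ.
function discrepancy_alpha(A, L, y, δ; αs=10.0 .^ (-10:0.25:2))
    z = zeros(size(L, 1))
    x = A \ y                                  # unregularized start
    for α in αs
        x = vcat(A, α * L) \ vcat(y, z)        # regularized least-squares solve
        norm(A * x - y) >= δ && return (α, x)  # discrepancy reached
    end
    return (last(αs), x)
end

For the constrained version one would re-solve the JuMP model inside the loop instead of using backslash.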
@rafael.guerra: if you don’t mind pushing all these extra dependencies into your package, I would try to prepare a pull request with all this, more neatly implemented.


@MatFi, this looks excellent.

You might also be interested in the following slides, which use the same approach for Tikhonov regularization of the inverse Laplace transform.

PS: please note that I do not have any package.