Why is this function parameter estimation scenario acting strangely? Using LsqFit

FYI: the other definitions of model (commented out) have been tested with the same pair of parameters, and they work.

using Plots
using LsqFit

# Define expected curve behaviour
# @. model(t, p) = p[2]*exp(p[1]*t) # works
# @. model(t, p) = exp(p[1]*t)*sin(p[2]*t) # works
@. model(t, p) = p[1]*t*sin(p[2]*t) # doesn't work

# Mock data
t = 0:0.1:10
p₀ = [-1.2, 3.5]
data = model(t, p₀)

# Find parameters
lb = [-2., 2.5]
ub = [-0.1, 4.]
p0_bounds = [-1., 3.]
fit = curve_fit(model, t, data, p0_bounds, lower=lb, upper=ub)

# Plot result
scatter(t, data)
plot!(t, model(t, coef(fit)))

@show stderror(fit)

Well, this seems to be related to this thread. Note that Levenberg-Marquardt is not the most robust method for this kind of problem (multiple local minima).
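One common mitigation (not mentioned in this thread, just a hedged sketch) is to keep Levenberg-Marquardt but restart it from several random initial guesses inside the bounds and keep the best result. The `multistart_fit` helper below is hypothetical, not part of LsqFit:

```julia
using LsqFit, Random

@. model(t, p) = p[1] * t * sin(p[2] * t)
t = 0:0.1:10
data = model(t, [-1.2, 3.5])

# Hypothetical helper: run curve_fit from n random starts inside [lb, ub]
# and return the fit with the smallest sum of squared residuals.
function multistart_fit(model, t, data, lb, ub; n=20)
    best = nothing
    for _ in 1:n
        p0 = lb .+ rand(length(lb)) .* (ub .- lb)   # random start inside the box
        f = try
            curve_fit(model, t, data, p0, lower=lb, upper=ub)
        catch
            continue                                # skip starts where LM fails
        end
        if best === nothing || sum(abs2, f.resid) < sum(abs2, best.resid)
            best = f
        end
    end
    return best
end

Random.seed!(1)
best = multistart_fit(model, t, data, [-2.0, 2.5], [-0.1, 4.0])
coef(best)  # with enough restarts this tends to recover parameters near [-1.2, 3.5]
```

Whether 20 restarts is enough depends on how rugged the loss surface is; for this model the box is small, so a modest number usually suffices.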

Optim.jl provides a simple simulated annealing (SA) optimizer, which tends to be more robust for this task:

using Optim

loss(u) = sum((data .- model(t,u)).^2)
res = optimize(loss, p0_bounds, SimulatedAnnealing())

# Plot result
scatter(t, data)
plot!(t, model(t, Optim.minimizer(res)))

Or you can try the awesome BlackBoxOptim.jl.
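For reference, a minimal sketch of handing the same loss to BlackBoxOptim.jl's `bboptimize` (the `SearchRange` pairs reuse the bounds from the original post; the `MaxSteps` value is an arbitrary choice):

```julia
using BlackBoxOptim

# Same setup as in the original post
@. model(t, p) = p[1] * t * sin(p[2] * t)
t = 0:0.1:10
data = model(t, [-1.2, 3.5])
loss(u) = sum(abs2, data .- model(t, u))

# Global, gradient-free search over the bounded box
res = bboptimize(loss;
                 SearchRange = [(-2.0, -0.1), (2.5, 4.0)],
                 MaxSteps = 50_000)
best_candidate(res)  # estimated parameter vector
```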

EDIT: To elaborate a bit on this: by simply plotting the value of loss(u) around p₀, you can visualize how challenging your problem actually is:

[heatmap of loss(u) over the parameter range]

You can see multiple local minima next to the global one.


Thanks! I gave it a shot and it worked! I’ll know how to investigate tractability in future.

Thank you for teaching me the name of such a method, didn’t know it had one.

Oh wow, that’s really insightful.