Curve fitting with NonlinearSolve.jl

I am trying to familiarize myself with the NonlinearSolve ecosystem for curve fitting and am having difficulty getting a simple fit to an exponential function. I have seen other discussions on doing this here, but I’m not sure where I’m going wrong.

using NonlinearSolve
using GLMakie

model(x, p) = @. p[1] * exp(-x / p[2]) + p[3]

xdata = range(0, 10, length=100)
ydata = model(xdata, [1.0, 2.0, 0.0]) .+ 0.1 * randn(length(xdata))

function loss(p, data)
    x = data[1]
    y = data[2]
    Y_pred = @. p[1] * exp(-x / p[2]) + p[3]
    return sum(abs2, Y_pred .- y)
end

p0 = [0.8, 0.4, 0.2]
data = [xdata, ydata]
prob = NonlinearLeastSquaresProblem(loss, p0, data)
res = solve(prob, LevenbergMarquardt())

fig = Figure()
ax = Axis(fig[1, 1])
lines!(xdata, ydata)
# lines!(xdata, model(xdata, params))
fig

Error message at solve():

ERROR: MethodError: no method matching similar(::Float64, ::Type{Float64}, ::Int64, ::Int64)

Closest candidates are:
  similar(::RecursiveArrayTools.AbstractVectorOfArray, ::Any...)
   @ RecursiveArrayTools ~/.julia/packages/RecursiveArrayTools/xGKIm/src/vector_of_array.jl:711
  similar(::AbstractArray, ::Type{T}, ::Union{Integer, AbstractUnitRange}...) where T
   @ Base abstractarray.jl:833
  similar(::Type{T}, ::Union{Integer, AbstractUnitRange}...) where T<:AbstractArray
   @ Base abstractarray.jl:875
  ...

Stacktrace:
  [1] init_jacobian(::Nothing, ::Type{Float64}, fx::Float64, x::Vector{Float64}; kwargs::@Kwargs{preserve_immutable::Val{…}})
    @ SparseDiffTools ~/.julia/packages/SparseDiffTools/jpC2n/src/highlevel/common.jl:329
  [2] init_jacobian(c::SparseDiffTools.ForwardDiffJacobianCache{…}; preserve_immutable::Val{…})
    @ SparseDiffTools ~/.julia/packages/SparseDiffTools/jpC2n/src/highlevel/common.jl:325
  [3] NonlinearSolve.JacobianCache(prob::NonlinearLeastSquaresProblem{…}, alg::GeneralizedFirstOrderAlgorithm{…}, f::NonlinearFunction{…}, fu_::Float64, u::Vector{…}, p::Vector{…}; stats::SciMLBase.NLStats, autodiff::Nothing, vjp_autodiff::Nothing, jvp_autodiff::Nothing, linsolve::Nothing)
    @ NonlinearSolve ~/.julia/packages/NonlinearSolve/sETeN/src/internal/jacobian.jl:88
  [4] __init(::NonlinearLeastSquaresProblem{…}, ::GeneralizedFirstOrderAlgorithm{…}; stats::SciMLBase.NLStats, alias_u0::Bool, maxiters::Int64, abstol::Nothing, reltol::Nothing, maxtime::Nothing, termination_condition::Nothing, internalnorm::Function, linsolve_kwargs::@NamedTuple{}, kwargs::@Kwargs{})
    @ NonlinearSolve ~/.julia/packages/NonlinearSolve/sETeN/src/core/generalized_first_order.jl:173
  [5] __init
    @ ~/.julia/packages/NonlinearSolve/sETeN/src/core/generalized_first_order.jl:154 [inlined]
  [6] __solve(::NonlinearLeastSquaresProblem{…}, ::GeneralizedFirstOrderAlgorithm{…}; stats::SciMLBase.NLStats, kwargs::@Kwargs{})
    @ NonlinearSolve ~/.julia/packages/NonlinearSolve/sETeN/src/core/generic.jl:3
  [7] __solve
    @ ~/.julia/packages/NonlinearSolve/sETeN/src/core/generic.jl:1 [inlined]
  [8] #solve_call#44
    @ ~/.julia/packages/DiffEqBase/slKc5/src/solve.jl:612 [inlined]
  [9] solve_call
    @ ~/.julia/packages/DiffEqBase/slKc5/src/solve.jl:569 [inlined]
 [10] #solve_up#53
    @ ~/.julia/packages/DiffEqBase/slKc5/src/solve.jl:1072 [inlined]
 [11] solve_up
    @ ~/.julia/packages/DiffEqBase/slKc5/src/solve.jl:1066 [inlined]
 [12] #solve#51
    @ ~/.julia/packages/DiffEqBase/slKc5/src/solve.jl:1003 [inlined]
 [13] solve(prob::NonlinearLeastSquaresProblem{…}, args::GeneralizedFirstOrderAlgorithm{…})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/slKc5/src/solve.jl:993
 [14] top-level scope
    @ ~/Documents/projects/scratch/test_optimization_pkg.jl:19
Some type information was truncated. Use `show(err)` to see complete types.

Additional info

I have abandoned LsqFit.jl because I have difficulty getting it to converge on many problems (and because others have recommended against using it).

I often use Optim.jl for this kind of thing, but I wanted to try out NonlinearSolve.jl since there are discussions of a simple CurveFit package coming out of this ecosystem.

Without knowing what is going on here, I guess that the least-squares problem wants a residual function rather than a loss function, i.e.,

function residual(p, data)
    x = data[1]
    y = data[2]
    Y_pred = @. p[1] * exp(-x / p[2]) + p[3]
    return Y_pred .- y
end

Possibly, this function is expected to operate in-place.
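
If that's right, the fix would just be swapping the residual in for the loss when constructing the problem. A sketch of what I'd try (untested; the in-place form in particular is a guess at the NonlinearFunction / resid_prototype API):

prob = NonlinearLeastSquaresProblem(residual, p0, data)
res = solve(prob, LevenbergMarquardt())

# in-place variant: writes the residuals into du; since the number of
# residuals (100) differs from the number of parameters (3), a
# resid_prototype is needed to size the output
function residual!(du, p, data)
    x, y = data
    @. du = p[1] * exp(-x / p[2]) + p[3] - y
    return nothing
end

nf = NonlinearFunction(residual!; resid_prototype = zero(ydata))
prob_iip = NonlinearLeastSquaresProblem(nf, p0, data)
res_iip = solve(prob_iip, LevenbergMarquardt())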


Yes, that’s it. Don’t know why I didn’t try that. Thank you.

It’s now able to fit the parameters correctly, but it returns a Stalled return code. According to the docs, this appears to indicate success: Nonlinear Solutions · NonlinearSolve.jl

retcode: Stalled
u: 3-element Vector{Float64}:
  1.0233648648199372
  1.9741533584792492
 -0.01269105605366064
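
For a programmatic sanity check (untested sketch on my part), I'm inspecting the return code and residual directly; my understanding is that Stalled just means the step size fell below tolerance, which for least squares usually means the residual can't be reduced further:

using LinearAlgebra: norm
using SciMLBase: ReturnCode, successful_retcode

res.retcode == ReturnCode.Stalled  # true for this fit
successful_retcode(res)            # whether Stalled counts as success may depend on the version
norm(res.resid)                    # a small residual norm confirms the fit regardless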

A follow-up question to whoever knows:

When using Optim.jl, I like to make a general loss function so I can pass whatever model I like to it, like this:

using Optim

function loss(p, f, x, y)
    return sum(abs2, f(x, p) .- y)
end

p0 = [0.8, 0.4, 0.2]
res = optimize(b -> loss(b, model, xdata, ydata), p0)

At least from reading the documentation, I don’t see a way to do this with NonlinearSolve.jl. Is this possible?

NonlinearSolve.jl is not a general optimization package; that's Optimization.jl. But nonlinear least squares is not a general optimization problem either: you can specialize the solve so it uses Newton methods with only first derivatives, rather than second derivatives (Hessians), along with some tricks that accelerate it (GMRES, QR factorization, etc.) compared to doing a full Newton solve. So least-squares fits live in a funny middle ground: in practice they behave more like nonlinear solves, while still being a form of optimization problem. I'm still trying to figure out the most effective way to document this in the ecosystem.
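
That said, the closure pattern carries over: a NonlinearLeastSquaresProblem only needs a residual with the signature f(u, p), so you can thread a generic model through the problem's parameters. A minimal sketch (my notation, reusing model, xdata, and ydata from above):

# generic residual: works for any model f with the signature f(x, p)
residual(p, (f, x, y)) = f(x, p) .- y

p0 = [0.8, 0.4, 0.2]
prob = NonlinearLeastSquaresProblem(residual, p0, (model, xdata, ydata))
res = solve(prob, LevenbergMarquardt())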


Awesome, thanks for the clarification.

If I were to use Optimization.jl (or any of the package wrappers), how do I formulate an OptimizationProblem? I'm trying a format similar to NonlinearSolve.jl's, but it doesn't seem to like it (I also tried a couple of other ways).

using Optimization

function loss(p, data)
    x, y = data
    Y_pred = @. p[1] * exp(-x / p[2]) + p[3]
    return sum(abs2, Y_pred .- y)
end

p0 = [0.8, 0.4, 0.2]
data = [xdata, ydata]
prob = OptimizationProblem(loss, p0, data)
res = solve(prob)

Error message at solve():

ERROR: MethodError: no method matching init(::OptimizationProblem{…})

Closest candidates are:
  init(::OptimizationProblem, ::Any, ::Any...; kwargs...)
   @ SciMLBase ~/.julia/packages/SciMLBase/jU6iu/src/solve.jl:172
  init(::PDEProblem, ::SciMLBase.AbstractDEAlgorithm, ::Any...; kwargs...)
   @ DiffEqBase ~/.julia/packages/DiffEqBase/slKc5/src/solve.jl:1162
  init(::LinearProblem, ::Nothing, ::Any...; assumptions, kwargs...)
   @ LinearSolve ~/.julia/packages/LinearSolve/B82H3/src/default.jl:290
  ...

Stacktrace:
 [1] solve(args::OptimizationProblem{…}; kwargs::@Kwargs{})
   @ CommonSolve ~/.julia/packages/CommonSolve/JfpfI/src/CommonSolve.jl:23
 [2] top-level scope
   @ ~/Documents/projects/scratch/test_Optimization.jl:18
Some type information was truncated. Use `show(err)` to see complete types.

Optimization.jl just doesn’t have a default solver at this time. For now, you need to pass an optimizer.

Got it, thank you. For others, here is the working code for Optimization.jl, where I also imported Optim.jl via OptimizationOptimJL:

using Optimization
using OptimizationOptimJL
using GLMakie

model(x, p) = @. p[1] * exp(-x / p[2]) + p[3]

xdata = range(0, 10, length=100)
ydata = model(xdata, [1.0, 2.0, 0.0]) .+ 0.1 * randn(length(xdata))

function loss(p, data)
    x, y = data
    Y_pred = @. p[1] * exp(-x / p[2]) + p[3]
    return sum(abs2, Y_pred .- y)
end

p0 = [0.8, 0.4, 0.2]
data = [xdata, ydata]
prob = OptimizationProblem(loss, p0, data)
res = solve(prob, Optim.NelderMead())

fig = Figure()
ax = Axis(fig[1, 1])
lines!(xdata, ydata)
lines!(xdata, model(xdata, res.u))
fig
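
One caveat on the above (my addition): NelderMead is derivative-free, which is why the bare loss works. A gradient-based optimizer such as LBFGS needs an AD backend attached via an OptimizationFunction; a sketch under that assumption:

# attach an AD backend so gradient-based solvers can compute gradients
optf = OptimizationFunction(loss, Optimization.AutoForwardDiff())
prob_ad = OptimizationProblem(optf, p0, data)
res_ad = solve(prob_ad, Optim.LBFGS())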