Optimization and arbitrary precision

Hi all,

I’ve been exploring some packages for solving nonlinear least squares problems. LsqFit.jl, LeastSquaresOptim.jl and Optim.jl are the ones I’ve come across so far and all of them give me great results.

However, I’ve been meaning to solve some problems using arbitrary precision, and it’s not entirely clear to me whether these solvers support it. A simple test suggests that only Optim.jl does.

So this toy model code (adapted from LsqFit’s examples) works with all of the solvers mentioned:

using Random: MersenneTwister
import LsqFit; const lsq = LsqFit
import LeastSquaresOptim; const lso = LeastSquaresOptim
import Optim; const opt = Optim

rng = MersenneTwister(42)
@. model(x, p) = p[1]*exp(-x*p[2])
xdata = range(0, stop=10, length=20)
ydata = model(xdata, [1.0, 2.0]) + 0.01*rand(rng, length(xdata))
lb = [1.1, -0.5]
ub = [1.9, Inf]
p0 = [1.2, 1.2]

# LsqFit.jl
fit = lsq.curve_fit(model, xdata, ydata, p0, lower=lb, upper=ub)
fit.param

# LeastSquaresOptim.jl
residual(p) = ydata - model(xdata, p)
lso.optimize(residual, p0, lso.Dogleg(), lower=lb, upper=ub)

# Optim.jl (box-constrained, on the sum of squared residuals)
residual_sum(p) = sum(abs2, residual(p))
opt.optimize(residual_sum, lb, ub, p0)

Using BigFloat, on the other hand,

rng = MersenneTwister(42)
@. model(x, p) = p[1]*exp(-x*p[2])
xdata = range(BigFloat(0), stop=10, length=20)
# rand still returns Float64 here, but the sum promotes to BigFloat
ydata = model(xdata, [1.0, 2.0]) + 0.01*rand(rng, length(xdata))
lb = BigFloat[1.1, -0.5]
ub = BigFloat[1.9, Inf]
p0 = BigFloat[1.2, 1.2]

fit = lsq.curve_fit(model, xdata, ydata, p0, lower=lb, upper=ub)
fit.param

residual(p) = ydata - model(xdata, p)
lso.optimize(residual, p0, lso.Dogleg(), lower=lb, upper=ub)

residual_sum(p) = sum(abs2, residual(p))
opt.optimize(residual_sum, lb, ub, p0)

yields the following errors for

LsqFit.jl

MethodError: no method matching LsqFit.LsqFitResult(::Int64, ::Array{BigFloat,1}, ::Array{BigFloat,1}, ::Array{Float64,2}, ::Bool, ::Array{BigFloat,1})
Closest candidates are:
  LsqFit.LsqFitResult(::Int64, ::Array{T,1}, ::Array{T,1}, !Matched::Array{T,2}, ::Bool, ::Array{T,N}) where {T, N} at /mnt/juliabox/.julia/packages/LsqFit/IsCnQ/src/curve_fit.jl:2

Stacktrace:
 [1] #lmfit#2(::Base.Iterators.Pairs{Symbol,Array{BigFloat,1},Tuple{Symbol,Symbol},NamedTuple{(:lower, :upper),Tuple{Array{BigFloat,1},Array{BigFloat,1}}}}, ::Function, ::getfield(LsqFit, Symbol("#f#5")){typeof(model),LinRange{BigFloat},Array{BigFloat,1}}, ::getfield(Calculus, Symbol("#g#5")){getfield(LsqFit, Symbol("#f#5")){typeof(model),LinRange{BigFloat},Array{BigFloat,1}},Symbol}, ::Array{BigFloat,1}, ::Array{BigFloat,1}) at /mnt/juliabox/.julia/packages/LsqFit/IsCnQ/src/curve_fit.jl:16
 [2] (::getfield(LsqFit, Symbol("#kw##lmfit")))(::NamedTuple{(:lower, :upper),Tuple{Array{BigFloat,1},Array{BigFloat,1}}}, ::typeof(LsqFit.lmfit), ::Function, ::Function, ::Array{BigFloat,1}, ::Array{BigFloat,1}) at ./none:0
 [3] #lmfit#3 at /mnt/juliabox/.julia/packages/LsqFit/IsCnQ/src/curve_fit.jl:37 [inlined]
 [4] #lmfit at ./none:0 [inlined]
 [5] #curve_fit#4 at /mnt/juliabox/.julia/packages/LsqFit/IsCnQ/src/curve_fit.jl:74 [inlined]
 [6] (::getfield(LsqFit, Symbol("#kw##curve_fit")))(::NamedTuple{(:lower, :upper),Tuple{Array{BigFloat,1},Array{BigFloat,1}}}, ::typeof(LsqFit.curve_fit), ::Function, ::LinRange{BigFloat}, ::Array{BigFloat,1}, ::Array{BigFloat,1}) at ./none:0
 [7] top-level scope at In[169]:1

and LeastSquaresOptim.jl

MethodError: no method matching mul!(::Array{BigFloat,1}, ::LinearAlgebra.Adjoint{BigFloat,Array{BigFloat,2}}, ::Array{BigFloat,1}, ::BigFloat, ::BigFloat)
Closest candidates are:
(...)
Stacktrace:
 [1] #optimize!#15(::Float64, ::Float64, ::Float64, ::Int64, ::Float64, ::Bool, ::Bool, ::Int64, ::Array{BigFloat,1}, ::Array{BigFloat,1}, ::typeof(LeastSquaresOptim.optimize!), ::LeastSquaresOptim.LeastSquaresProblemAllocated{Array{BigFloat,1},Array{BigFloat,1},getfield(LeastSquaresOptim, Symbol("##9#10")){typeof(residual)},Array{BigFloat,2},getfield(LeastSquaresOptim, Symbol("##3#5")){Array{BigFloat,1},getfield(LeastSquaresOptim, Symbol("##9#10")){typeof(residual)},DiffEqDiffTools.JacobianCache{Array{BigFloat,1},Array{BigFloat,1},Array{BigFloat,1},Val{:central},BigFloat,Val{true}}},LeastSquaresOptim.AllocatedDogleg{Array{BigFloat,1},Array{BigFloat,1},Array{BigFloat,1},Array{BigFloat,1},Array{BigFloat,1},Array{BigFloat,1}},LeastSquaresOptim.DenseQRAllocatedSolver{Array{BigFloat,2},Array{BigFloat,1}}}) at /mnt/juliabox/.julia/packages/LeastSquaresOptim/qG3mG/src/optimizer/dogleg.jl:100
 [2] #optimize! at ./none:0 [inlined]
 [3] #optimize!#11 at /mnt/juliabox/.julia/packages/LeastSquaresOptim/qG3mG/src/types.jl:145 [inlined]
 [4] (::getfield(LeastSquaresOptim, Symbol("#kw##optimize!")))(::NamedTuple{(:lower, :upper),Tuple{Array{BigFloat,1},Array{BigFloat,1}}}, ::typeof(LeastSquaresOptim.optimize!), ::LeastSquaresOptim.LeastSquaresProblem{Array{BigFloat,1},Array{BigFloat,1},getfield(LeastSquaresOptim, Symbol("##9#10")){typeof(residual)},Array{BigFloat,2},getfield(LeastSquaresOptim, Symbol("##3#5")){Array{BigFloat,1},getfield(LeastSquaresOptim, Symbol("##9#10")){typeof(residual)},DiffEqDiffTools.JacobianCache{Array{BigFloat,1},Array{BigFloat,1},Array{BigFloat,1},Val{:central},BigFloat,Val{true}}}}, ::LeastSquaresOptim.Dogleg{Nothing}) at ./none:0
 [5] #optimize#8(::Symbol, ::Base.Iterators.Pairs{Symbol,Array{BigFloat,1},Tuple{Symbol,Symbol},NamedTuple{(:lower, :upper),Tuple{Array{BigFloat,1},Array{BigFloat,1}}}}, ::Function, ::typeof(residual), ::Array{BigFloat,1}, ::LeastSquaresOptim.Dogleg{Nothing}) at /mnt/juliabox/.julia/packages/LeastSquaresOptim/qG3mG/src/types.jl:141
 [6] (::getfield(Optim, Symbol("#kw##optimize")))(::NamedTuple{(:lower, :upper),Tuple{Array{BigFloat,1},Array{BigFloat,1}}}, ::typeof(Optim.optimize), ::Function, ::Array{BigFloat,1}, ::LeastSquaresOptim.Dogleg{Nothing}) at ./none:0
 [7] top-level scope at In[170]:2
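Reading the traces, the LsqFit failure looks like the finite-difference Jacobian (computed through Calculus) coming back as an Array{Float64,2} while everything else is BigFloat, and the LeastSquaresOptim failure looks like a missing generic five-argument mul! method for BigFloat arrays. With that in mind, one workaround I’ve been considering for LsqFit, though I have not verified it actually sidesteps the constructor error in this version, is supplying an analytic Jacobian so the finite-difference fallback is never called:

# Sketch only, not verified against this LsqFit version: analytic Jacobian
# of model(x, p) = p[1]*exp(-x*p[2]) with respect to the parameters p.
function jacobian_model(x, p)
    J = Matrix{eltype(p)}(undef, length(x), length(p))
    J[:, 1] .= exp.(-x .* p[2])                 # ∂model/∂p[1]
    J[:, 2] .= -p[1] .* x .* exp.(-x .* p[2])   # ∂model/∂p[2]
    J
end

fit = lsq.curve_fit(model, jacobian_model, xdata, ydata, p0, lower=lb, upper=ub)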

I just wanted to make sure I’m not overlooking anything, and to confirm whether these libraries really can’t handle arbitrary precision/BigFloat yet.

As a side note, I have not yet looked into external solvers (e.g. IPOPT through JuMP), because I suspect they can’t handle arbitrary precision given that they call out to code in another language. Or am I wrong about this?

A while ago I wrote https://github.com/rdeits/NNLS.jl . It’s a direct port of the Lawson & Hanson FORTRAN code for solving non-negative least squares problems, but it’s fully type-generic, so you can actually use it to solve NNLS problems with BigFloats and other number types, e.g. https://github.com/rdeits/NNLS.jl/blob/5833899d31c840a318de6395b0e400da507e00c1/test/nnls.jl#L1

The package is not registered, but you can add it directly via the github URL:

pkg> add https://github.com/rdeits/NNLS.jl
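Once it’s installed, usage looks something like this (a tiny made-up problem; nnls is the exported entry point):

using NNLS
using LinearAlgebra

# Made-up example: minimize ||A*x - b|| subject to x .>= 0, entirely in BigFloat
A = BigFloat[1 2; 3 4; 5 6]
b = BigFloat[7, 8, 9]
x = nnls(A, b)

eltype(x)        # BigFloat
norm(A*x - b)    # residual norm, also a BigFloat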

Unfortunately, I don’t have the expertise to guarantee that it is actually correctly making use of that additional precision, but it has worked well so far. I’d be very interested to hear if it works for your use case.

Thanks for the heads up @rdeits!

However, I’ll mostly be working with single-shooting/multiple-shooting (and related) methods for estimating parameters of ordinary differential equations arising from chemical kinetics models, something along the lines of my previous question.

For instance, one of the simpler problems is

# In-place ODE right-hand side (DifferentialEquations.jl convention);
# phi holds the five kinetic parameters to be estimated
function floudas_five(dz_dt, z, phi, t)
    dz_dt[1] = - (2*phi[1] - ((phi[1]*z[2])/((phi[2] + phi[5])*z[1] + z[2])) + phi[3] + phi[4])*z[1]
    dz_dt[2] = ((phi[1]*z[1])*(phi[2]*z[1] - z[2]))/((phi[2] + phi[5])*z[1] + z[2]) + phi[3]*z[1]
    dz_dt[3] = ((phi[1]*z[1])*(z[2] + phi[5]*z[1]))/((phi[2] + phi[5])*z[1] + z[2]) + phi[4]*z[1]
end

from this Handbook of Test Problems (chapter 15.3) by Floudas et al.
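To make the single-shooting approach concrete, the objective I have in mind looks roughly like the sketch below, built on DifferentialEquations.jl and the same box-constrained Optim.jl call that already worked with BigFloat above. The initial condition, time grid, “true” parameters, and bounds are all made-up placeholders, not values from the handbook:

using OrdinaryDiffEq  # the pure-Julia solvers are type-generic, so BigFloat state works

z0    = BigFloat[1.0, 0.0, 0.0]          # placeholder initial condition
tspan = (BigFloat(0), BigFloat(10))
tdata = range(tspan[1], stop=tspan[2], length=21)

# Synthetic data from made-up "true" parameters, in lieu of real measurements
phi_true  = BigFloat[5.2, 1.2, 0.1, 0.05, 0.3]
prob_true = ODEProblem(floudas_five, z0, tspan, phi_true)
zdata     = Array(solve(prob_true, Tsit5(), saveat=tdata))

# Single shooting: integrate the ODE for a candidate phi and score the
# mismatch against the data as a sum of squared residuals.
function sse(phi)
    prob = ODEProblem(floudas_five, z0, tspan, phi)
    sol  = solve(prob, Tsit5(), saveat=tdata)
    sum(abs2, Array(sol) .- zdata)
end

lb   = zeros(BigFloat, 5)           # placeholder bounds
ub   = fill(BigFloat(10), 5)
phi0 = fill(BigFloat(1), 5)
opt.optimize(sse, lb, ub, phi0)     # same box-constrained call as before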

And I also intend to work with differential algebraic equations (DAEs) further down the line.

So in the end, if I’m not mistaken, transforming my problem into the matrix form your package requires might not be the easiest of tasks.

Nonetheless, thanks again for the quick reply!