# Optimization using SAMIN()

Hi,

I am trying to use SAMIN() for constrained optimization in a DDE parameter-estimation problem. I have seen the example at http://julianlsolvers.github.io/Optim.jl/latest/#algo/samin/
and I am following the same syntax, but I get a dimension-mismatch error.

`resid` is the residuals function (the difference between the model and the data), and `p` is the initial guess.
Here are my code and the error. I would appreciate any help with this.

`res[1,:] = sol(t, idxs=1) - G1_new` etc. should be sufficient. But the optimizer likely wants a scalar output, so you need to return something like `norm(res)`.
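For instance, collapsing the residual matrix into a single number could look like this (the residual values below are made up purely for illustration):

```julia
using LinearAlgebra

# Hypothetical residuals: row 1 for u[1], row 2 for u[2] (illustrative values only).
res = [0.1 -0.2  0.3;
       0.0  0.4 -0.1]

# Scalar objective for the optimizer: norm(res) on a matrix is the Frobenius norm,
# i.e. sqrt of the sum of squares of all entries.
cost = norm(res)
```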

2 Likes

Thank you so much, Chris. It worked, although it is not converging.

How many “iterations” and what parameters do you use?

2 Likes

I tried 10^4 iterations, and it threw a warning that it is not stable (?). I have 9 parameters: it is basically a two-equation delay differential equation model, and I am also trying to estimate the initial values and the coefficient of h(p, t), as well as the coefficients of the other terms.
I tried to use SAMIN() on a simpler model, which only had two parameters, but it didn’t converge either.

I am just passing the norm of the difference between the data and the model; is it correct to do that? I tried the sum of squared errors, but it worked worse.
This is my model:

```julia
function G1_G2(du, u, h, p, t)
    du[1] = -p[1]*(h(p, t-p[5])[1]) + 2*p[2]*(h(p, t-p[6])[2]) - p[3]*u[1]
    du[2] =  p[1]*(h(p, t-p[5])[1]) -   p[2]*(h(p, t-p[6])[2]) - p[4]*u[2]
end
```

I have data for both u[1] and u[2]. I was giving both to the optimizer with `Dogleg()`, but here, since the objective has to be a scalar, I am just passing the norm of one of them.
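For reference, here is a minimal sketch of how this model might be wired into a `DDEProblem` with DelayDiffEq.jl. The history function `hist`, the parameter values, and the time span are all assumptions for illustration, not the original code:

```julia
using DelayDiffEq

function G1_G2(du, u, h, p, t)
    du[1] = -p[1]*(h(p, t-p[5])[1]) + 2*p[2]*(h(p, t-p[6])[2]) - p[3]*u[1]
    du[2] =  p[1]*(h(p, t-p[5])[1]) -   p[2]*(h(p, t-p[6])[2]) - p[4]*u[2]
end

# Assumed constant history: p[7] and p[8] stand in for the estimated initial values.
hist(p, t) = [p[7], p[8]]

p = [0.5, 0.3, 0.1, 0.1, 1.0, 2.0, 1.0, 1.0]   # made-up parameter values
prob = DDEProblem(G1_G2, hist(p, 0.0), hist, (0.0, 50.0), p;
                  constant_lags = [p[5], p[6]])
sol = solve(prob, MethodOfSteps(Tsit5()))
```

Declaring the delays via `constant_lags` lets the solver track the discontinuities they introduce.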

Here is the summary:

In that output, it looks like the iteration limit is 10^3. Try setting it to 10^8; there’s no reason to set it low unless function evaluation is too costly. If you find it’s taking too long, try setting rt=0.5. Tuning rt is a bit of an art: the safe procedure is to leave it not too far below 1, but too close leads to very slow convergence.
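Assuming `obj`, `lower`, `upper`, and `p0` are your scalar objective, box bounds, and initial guess (names made up here), those settings could look like:

```julia
using Optim

result = optimize(obj, lower, upper, p0,
                  SAMIN(rt = 0.5),                  # faster cooling if runs take too long
                  Optim.Options(iterations = 10^8)) # generous cap; SAMIN can stop earlier on convergence
```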

2 Likes

I tried to set the iterations to 10^6, but it was taking forever, so I thought something might be wrong.
I will do it. Thank you very much.

It’s solving a DDE each time. That can get costly.

Well, setting a low iteration limit just won’t work with SA, unless rt is also adjusted downward so that the algorithm focuses in quickly. And, doing this, you’re unlikely to find the global min unless the objective function is fairly regular. If you want the right answer and your problem is costly, you’re going to have to be willing to wait a bit.

If you do set a low iteration limit, then you need to set rt low enough so that it converges within the limit. Then, if you run it again a few times and get the same result, you have some evidence that the solution is a global min.
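A sketch of that check, assuming `obj`, `lower`, and `upper` are already defined: restart from a few random points inside the box and compare the results.

```julia
using Optim

# Three random starting points drawn uniformly inside the box constraints.
starts = [lower .+ rand(length(lower)) .* (upper .- lower) for _ in 1:3]

results = [optimize(obj, lower, upper, x0,
                    SAMIN(rt = 0.5), Optim.Options(iterations = 10^5))
           for x0 in starts]

# If the minimizers agree (up to tolerance), that is evidence of a global min.
minimizers = Optim.minimizer.(results)
```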

1 Like

I usually find it useful to plot the optimisation trace to understand what’s going on, but SAMIN seems to store nothing. Is that an oversight?

```julia
mfit = optimize(
    x -> sum(abs2, x), fill(-5., 3), fill(5., 3), randn(3), Optim.SAMIN(),
    Optim.Options(f_calls_limit = 10_000, store_trace = true,
                  iterations = 10_000, extended_trace = true),
)

julia> mfit.trace
0-element Array{OptimizationState{Float64,SAMIN{Float64}},1}
```

I’m 23 days ahead of you https://github.com/JuliaNLSolvers/Optim.jl/commit/d9fbf09db2fc7f8ba0ff7e47aeaa737f9ec3eeea but way behind on tagging
edit: let me figure out registrator
edit2: I guess I need a Project file. I’ll try to fix that tonight

3 Likes