Using Optim.jl (optimization) - like fminsearch in MATLAB

Only two questions:

println(Optim.minimizer(opt)) == [0.0763848, -0.0856919] … are these the x[1] ≈ 0.076 and x[2] ≈ -0.086 values that best fit all my R and experimental A_exp values?

A_opt = Optim.minimum(opt) == 151.315 … what is this value?

In my code above, using NelderMead, for all aRe == R and experimental omega values, my fit gives only ONE x[1] and x[2] that best fit all the values simultaneously … is that what it does?

The answer to your questions:

  • Best fit: yes, to all values of A_exp. If you don’t understand what is done, set x = [0.0763848, -0.0856919] and run the following two commands
A(x) = quadgk(g -> dA(x,g), 0 , 10)[1]
X2(x) = sum(abs2, A(x) - A_exp)

then try varying x to see what happens to A(x) and X2(x);

  • What is Optim.minimum? It is the value X2([0.0763848, -0.0856919]). A quick look at the manual will teach you the basic commands of Optim so that you can make better use of this package.
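To make the minimizer/minimum relationship concrete, here is a hedged sketch with a toy quadratic objective standing in for the poster's X2 (the function f and its minimum location are invented for illustration):

```julia
using Optim

# Hypothetical quadratic objective standing in for X2; its minimum is at [1.0, 2.0].
f(x) = (x[1] - 1.0)^2 + (x[2] - 2.0)^2

opt = optimize(f, [0.0, 0.0], NelderMead())

xmin = Optim.minimizer(opt)   # best-fit parameters, close to [1.0, 2.0]
fmin = Optim.minimum(opt)     # objective value at the minimizer

# Optim.minimum is simply the objective evaluated at the minimizer:
fmin ≈ f(xmin)
```

In the poster's problem, Optim.minimizer(opt) plays the role of the fitted [x[1], x[2]] and Optim.minimum(opt) is the residual sum of squares X2 at that point.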

My requests now:

  • Do you understand that the integrand depends on the product x[1]*x[2], and can you turn this problem into a scalar problem by using a single variable y? Does it give the same result? Can you post the code so that, if I need to give you more help, I know where you stand? You will have to use the Optim function for univariate optimization, optimize(objective_function, lower_bound, upper_bound).
  • To make the problem easier, replace the numerical integral quadgk with the analytical formula for the integral. Can you also add that to the code I am asking you to post? You will also gain better insight into the problem.
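As a hedged sketch of the suggested scalar reduction: if the objective really depends on x only through y = x[1]*x[2], it can be minimized with Optim's bounded univariate optimize(f, lower, upper) (Brent's method by default). The function h below is a stand-in, NOT the poster's X2:

```julia
using Optim

# Stand-in scalar objective in the single variable y = x[1]*x[2];
# h and its bounds are invented for illustration.
h(y) = (y - 0.5)^2 + 1.0

# Bounded univariate optimization; Optim uses Brent's method by default.
opt = optimize(h, -10.0, 10.0)

y_opt = Optim.minimizer(opt)   # scalar minimizer, close to 0.5
h_opt = Optim.minimum(opt)     # objective value there, close to 1.0
```

Any pair (x[1], x[2]) with x[1]*x[2] == y_opt would then give the same objective value, which is also why the two-variable fit can return many equivalent answers.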

I know our wording has been (and still is) poor in the docs, but do remember that a scalar optimization problem also fits into the heuristics (besides NelderMead)/gradient/Hessian part of the package; just remember to pass the “scalar” in a 1-element array.

using Optim
method = GradientDescent() # choose a method
optimize(x->sin(first(x)), rand(1), method, Optim.Options())

with output

Results of Optimization Algorithm
 * Algorithm: Gradient Descent
 * Starting Point: [0.27326526556922004]
 * Minimizer: [-1.570796326797947]
 * Minimum: -1.000000e+00
 * Iterations: 3
 * Convergence: true
   * |x - x'| < 1.0e-32: false 
     |x - x'| = 1.22e-07 
   * |f(x) - f(x')| / |f(x)| < 1.0e-32: false
     |f(x) - f(x')| / |f(x)| = 7.44e-15 
   * |g(x)| < 1.0e-08: true 
     |g(x)| = 5.84e-12 
   * stopped by an increasing objective: false
   * Reached Maximum Number of Iterations: false
 * Objective Calls: 10
 * Gradient Calls: 10

which is indeed a local minimum.

Thank you for the clarification. Yes, this is missing in the docs, and the error message leads one to believe that in 1D only optimization methods with bounds are possible. I can try to make a PR to clarify this in the docs.

I know, and this is because

    optimize(f, x) # or
    optimize(f, x, Optim.Options())

falls back to

   optimize(f, x, NelderMead(), Optim.Options())

and that does not actually work with a one-dimensional array. While I think it should surprise no one that optimizing over a one-dimensional vector is a “multivariate” optimization problem with a 1-dimensional vector, the error message, as you state, does make the user think it would fail if a 1-element array was passed.
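So in practice, with a 1-element starting array, the workaround is to pick a method other than the NelderMead fallback explicitly. A minimal sketch (the objective f is invented for illustration; without a supplied gradient, Optim falls back to finite differencing):

```julia
using Optim

# Toy one-variable objective wrapped in a 1-element array; minimum at x = 3.
f(x) = (first(x) - 3.0)^2

# optimize(f, x0) would fall back to NelderMead, which fails on a 1-element
# array, so name a gradient-based method explicitly:
opt = optimize(f, [0.0], GradientDescent())

Optim.minimizer(opt)   # close to [3.0]
```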

If you aren’t already working on the PR, don’t worry about it; I’m putting together a related PR and I can just make these changes. If you are already working on it, I won’t touch it, and I’ll say thank you instead :slight_smile:

Ehm, no.

Hello again … now I am doing the job in my own program … I have set up the input to my problem, I think … I cannot put my entire program here, so I will post only the important part; please tell me what is going on …
Optim.jl, the way I am using it, is not working … it is not giving the correct values of x[1] and x[2] (remember, I want just one value of each to fit all my experimental values together) … look at the # commented lines.
In my program I have this:

function dΩ11(x,g,sigma,T)
    R = 0.388/sigma   # R in aRe

    α = 0
    ϕ = 0.9*(1 - 1/(1 + (R/x[1])^x[2]))  # x[1] and x[2] enter here !!
    θ = ϕ*exp(-α*g^2/T)
    γ = 1

    g^5*θ*Qb(g,α)*exp(-g^2/T) + g^5*(1-θ)*γ*Qa(g)*exp(-g^2/T) +
        g^5*(1-θ)*(1-γ)*Qc(g)*exp(-g^2/T)
end

Ω11(x,sigma,T) = 1/T^3*quadgk(g -> dΩ11(x,g,sigma,T), 0, 10, abstol=1e-3)[1]  # the integral that uses the x[1] and x[2] found by Optim.jl

function X2(x)
    aΩ11 = zeros(lenR)
    for i in 1:lenR          # note: `for i in lenR` would run only for i == lenR
        aΩ11[i] = afΩ11[i](x)
    end
    # sum((aΩ11-aΩ11e).^2)
    sum(abs2, aΩ11 - aΩ11e)  # aΩ11 holds the results of Ω11(x,sigma,T) for each R in aRe (another name for R)
end
opt = optimize(X2, [2.5, 17.5], NelderMead(), Optim.Options(g_tol=1e-32))  # main call to Optim.jl ... it runs, but gives wrong results
println(Optim.minimizer(opt))  # prints x[1] and x[2] (wrong results) ... I found the best x[1] and x[2] manually under these conditions, so I know the expected values ... I am just testing the program to gain confidence
A_opt = Optim.minimum(opt)  # I do not know what this is !! can you answer me ?

plot(aRe, aΩ11e)  # plot my experimental points
plot(aRe, A_opt)  # I want this to plot my calculated points using x[1] and x[2], but it does not work ... it gives this error:
# PyError (:PyObject_Call) <type 'exceptions.ValueError'>
# ValueError(u'x and y must have same first dimension, but have shapes (11,) and (1,)',)

Additional information:

aRe = [1.28; 1.47; 1.65; 1.94; 2.25; 2.39; 2.68; 2.91; 3.13; 3.75; 4.46]     # Array with experimental R
aΩ11e = [2.52; 2.26; 2.07; 1.92; 1.90; 2.08; 2.31; 2.53; 2.68; 2.51; 2.27]   # Array with experimental Ω11
aΩ11 = zeros(length(aRe))   # Array with the values computed by each function afΩ11[i](sigma)
lenR = length(aRe)


I know it is not a working example anymore, but I need two things:

1) a better method to optimize this and get the correct x[1] and x[2] (I already know what they are), maybe by changing the options or methods …

2) how to plot the calculated points of Ω11 using x[1] and x[2]

If you can, read my last post … thanks
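On the plotting question, a self-contained sketch of the likely fix: A_opt = Optim.minimum(opt) is a single scalar (X2 at the optimum), so plot(aRe, A_opt) mismatches shapes (11,) and (1,). What should be plotted is the model evaluated at the minimizer, one value per R. Here model(R, x) is a hypothetical stand-in for the poster's Ω11; the data arrays are copied from the post:

```julia
using Optim

# Data copied from the post above.
aRe   = [1.28, 1.47, 1.65, 1.94, 2.25, 2.39, 2.68, 2.91, 3.13, 3.75, 4.46]
aΩ11e = [2.52, 2.26, 2.07, 1.92, 1.90, 2.08, 2.31, 2.53, 2.68, 2.51, 2.27]

model(R, x) = x[1] + x[2]*R                          # hypothetical model, NOT Ω11
X2(x) = sum(abs2, [model(R, x) for R in aRe] - aΩ11e)

opt = optimize(X2, [2.5, 17.5], NelderMead())
x_best = Optim.minimizer(opt)

# One fitted value per experimental R, so the shapes match for plotting:
aΩ11_fit = [model(R, x_best) for R in aRe]

# With PyPlot loaded:
# plot(aRe, aΩ11e, "o")     # experimental points
# plot(aRe, aΩ11_fit, "-")  # fitted values at the optimal x[1], x[2]
```

The same pattern applies with the real Ω11: build the array of fitted values from Optim.minimizer(opt), not from Optim.minimum(opt).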