Using Optim.jl (optimization) - like fminsearch in MATLAB

It does not work … read my post when you can

When I optimize over x[1] and x[2], the result is just the starting point …

function dΩ11(x,g,sigma,T)
    R = 0.388/sigma

    α = 0
    ϕ = 0.9*(1-1/(1+(R/x[1])^x[2]))  # x[1] and x[2] here!!! I need to find the x[1] and x[2] for which dΩ11 = aΩ11e
    θ = ϕ*exp(-α*g^2/T)

    γ = 1
    g^5*θ*Qb(g,α)*exp(-g^2/T) + g^5*(1-θ)*γ*Qa(g)*exp(-g^2/T) +
    g^5*(1-θ)*(1-γ)*Qc(g)*exp(-g^2/T)
end

The first line of your error says it clearly: Use optimize(f, scalar, scalar) for 1D problems, where the two scalars are the bounds for your optimization variable, see Minimizing a function - Optim.jl

You were already told that the X2 you provided does not depend on the optimization variable x: you are trying to minimize a constant.
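For reference, here is a minimal sketch of that 1D call; the toy objective is my own illustration, not from this thread:

using Optim

f(x) = (x - 1.3)^2            # toy scalar objective
res = optimize(f, 0.0, 5.0)   # optimize(f, lower, upper) uses Brent's method by default
println(Optim.minimizer(res))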


Look … Using Optim.jl (optimization) - like fminsearch in MATLAB

Hello mzaffalon … sorry to bother you … but I do not understand how to use it.

In my topic, I have

dΩ11(x,g,sigma,T) = g^5*θ*Qb(g,α)*exp(-g^2/T) + g^5*(1-θ)*γ*Qa(g)*exp(-g^2/T) +
    g^5*(1-θ)*(1-γ)*Qc(g)*exp(-g^2/T)

for

R = 0.388/sigma

α = 0
ϕ = 0.9*(1-1/(1+(R/x[1])^x[2]))  # x[1] and x[2] here!!!
θ = ϕ*exp(-α*g^2/T)

γ = 1

and I want to know for which coefficients x[1] and x[2], dΩ11(x,g,sigma,T) = aΩ11e

aΩ11e = [2.52; 2.26; 2.07; 1.92; 1.90; 2.08; 2.31; 2.53; 2.68; 2.51; 2.27]
for R = aRe = [1.28; 1.47; 1.65; 1.94; 2.25; 2.39; 2.68; 2.91; 3.13; 3.75; 4.46]  

So, can Optim.jl solve this problem???

for i in 1:length(aΩ11e)
    minx = Optim.minimizer(optimize(x -> (dΩ11(x,g,sigma,T) - aΩ11e[i])^2, [1.3, 2.50], NelderMead()))
    println(minx)
end

if g, sigma and T have already been defined.

Nelder-Mead may find a local (not a global) minimum and it is sometimes known to fail to converge even to the local minimum.
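As an illustration of that caveat (a toy sketch, not part of the original posts): it is worth checking Optim.converged and restarting from a few different initial points.

using Optim

rosen(x) = (1.0 - x[1])^2 + 100.0*(x[2] - x[1]^2)^2   # standard test function, illustrative only
for x0 in ([1.3, 2.5], [0.5, 0.5])
    res = optimize(rosen, x0, NelderMead())
    println(Optim.converged(res), "   ", Optim.minimizer(res))
end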

Hello … thanks again … all variables have been defined … R, sigma, T:

aRe = [1.28; 1.47; 1.65; 1.94; 2.25; 2.39; 2.68; 2.91; 3.13; 3.75; 4.46]    # array with experimental R (the loop runs over R in aRe)
aΩ11e = [2.52; 2.26; 2.07; 1.92; 1.90; 2.08; 2.31; 2.53; 2.68; 2.51; 2.27]  # array with experimental Ω11
sigma = 0.388/R
T = 0.0987

but … you write (dΩ11(x,g,sigma,T) - aΩ11e[i])^2 – I think it should be (Ω11 - aΩ11e[i]).^2 (or something like that), because in

Ω11 = quadgk(g -> dΩ11, 0, 10) the program will find the x[1] and x[2] values to put into Ω11 and compare them with the values of aΩ11e … so I think this is wrong … dΩ11 is not a calculation, only a definition … the values of x[1] and x[2] will be found by doing Ω11 = quadgk(g -> dΩ11, 0, 10)

remember :)

function dΩ11(x,g,sigma,T)
    R = 0.388/sigma

    α = 0 # x[1]  # the x[] entries are the constants to be found by the Nelder-Mead method
    ϕ = 0.9*(1-1/(1+(R/x[1])^x[2]))
    θ = ϕ*exp(-α*g^2/T)
    #θ(g)=ϕ*exp(-x[1]*g^2/T)
    γ = 1 #x[3]
    g^5*θ*Qb(g,α)*exp(-g^2/T) + g^5*(1-θ)*γ*Qa(g)*exp(-g^2/T) +
    g^5*(1-θ)*(1-γ)*Qc(g)*exp(-g^2/T)
end

Ω11(x,sigma,T) = 1/T^3*quadgk(g-> dΩ11(x,g,sigma,T) ,0 , 10, abstol=1e-3)[1]

I want to know for which values of x[1] and x[2], Ω11 = aΩ11e (and not dΩ11(x,g,sigma,T) = aΩ11e).

Your code does not work, because it does not know g, sigma and T …
but this code, using include("neldermead.jl"), works:

function X2(x)
    aΩ11 = zeros( lenR )
    for i in 1:lenR
        aΩ11[i] = afΩ11[i](x)
    end
    sum((aΩ11-aΩ11e).^2)
end

x, X2min = NelderMead(x->X2(x), [2.50;17.2])
println(x[1],"   ",x[2])

My question is how to use Optim.jl (a package already installed in Julia) instead of neldermead.jl.

You can find brent.jl on the internet, on GitHub … and then you can run my entire code; I will put it here.

Thanks a lot for now, but if you can help me more, thanks … !!

using NLsolve, QuadGK, Interpolations, Optim

include("C:\\Users\\Lucas\\Desktop\\LUCAS\\Julia\\brent.jl")


# Global variables

aRe = [1.28; 1.47; 1.65; 1.94; 2.25; 2.39; 2.68; 2.91; 3.13; 3.75; 4.46]              # Array with the experimental R values
aΩ11e = [2.52; 2.26; 2.07; 1.92; 1.90; 2.08; 2.31; 2.53; 2.68; 2.51; 2.27]           # Array with the experimental Ω11 values
aΩ11 = zeros(length(aRe))   # Array with the values computed with each function afΩ11[i](sigma)
lenR = length(aRe)
afΩ11 = []      # Array of functions: one for each sigma
Qa = []     # Variable that will hold the function Qa(g)
Qb = []     # Variable that will hold the function Qb(g,α)
Qc = []     # Variable that will hold the function Qc(g)
T = 0.0987

#
# Potential and its derivatives
#

Φ(r,sigma) = 2/15*sigma.^9*(1/(r-1)^9-1/(r+1)^9-9/8/r*(1/(r-1)^8-1/(r+1)^8))-
                   sigma.^3*(1/(r-1)^3-1/(r+1)^3-3/2/r*(1/(r-1)^2-1/(r+1)^2))

function Φl(r,sigma)
     ((3*sigma.^3)/(2*r^2*(1+r)^2)+(3*sigma.^3)/(r*(1+r)^3)-(3*sigma.^3)/(1+r)^4
                                 -(3*sigma.^9)/(20*r^2*(1+r)^8)
                                 -(6*sigma.^9)/(5*r*(1+r)^9)
                                 +(6*sigma.^9)/(5*(1+r)^10)
                                 -(3*sigma.^3)/((r-1)^3*r)
                                 +(6*sigma.^9)/(5*(r-1)^9*r)
                                 -(3*sigma.^3)/(2*(r-1)^2*r^2)
                                 +(3*sigma.^9)/(20*(r-1)^8*r^2)
                                 +(3*sigma.^3)/(r-1)^4-(6*sigma.^9)/(5*(r-1)^10))
end

function Φll(r,sigma)
     ((-(3*sigma.^3)/(r^3*(1+r)^2))-(6*sigma.^3)/(r^2*(1+r)^3)
                             -(9*sigma.^3)/(r*(1+r)^4)+(12*sigma.^3)/(1+r)^5
                             +(3*sigma.^9)/(10*r^3*(1+r)^8)
                             +(12*sigma.^9)/(5*r^2*(1+r)^9)
                             +(54*sigma.^9)/(5*r*(1+r)^10)
                             -(12*sigma.^9)/(1+r)^11+(9*sigma.^3)/((r-1)^4*r)
                             -(54*sigma.^9)/(5*(r-1)^10*r)
                             +(6*sigma.^3)/((r-1)^3*r^2)
                             -(12*sigma.^9)/(5*(r-1)^9*r^2)
                             +(3*sigma.^3)/((r-1)^2*r^3)
                             -(3*sigma.^9)/(10*(r-1)^8*r^3)
                             -(12*sigma.^3)/(r-1)^5+(12*sigma.^9)/(r-1)^11)
end

#
# Effective potential and its derivatives
#

Veff(r,b,g,sigma) = Φ(r,sigma)+b^2*g^2/r^2
Veffl(r,b,g,sigma) = Φl(r,sigma)-2b^2*g^2/r^3
Veffll(r,b,g,sigma) = Φll(r,sigma)+6b^2*g^2/r^4

# rc0, bc0 and gc0 come from the equations
# Φ(r)+b^2 g^2/r^2 = g^2
# Φ'(r)-2b^2 g^2/r^3 = 0
# Φ''(r)+6b^2 g^2/r^4 = 0
#
# which are equivalent to
# Φ(r)+b^2 g^2/r^2 = g^2
# Φ(r)+rΦ'(r)/2 = g^2
# 3Φ'(r)+rΦ''(r) = 0
function rc0bc0gc0(sigma)
    rc0 = brent(r->3Φl(r,sigma)+r*Φll(r,sigma),1.02,5)
    gc0= sqrt(Φ(rc0,sigma)+rc0*Φl(rc0,sigma)/2)
    bc0 = rc0*sqrt(gc0^2-Φ(rc0,sigma))/gc0
    rc0, bc0, gc0
end

# Function that computes the critical r and b for a given g
function frbc(g, sigma)
    rc0,bc0,gc0 = rc0bc0gc0(sigma)
     if g == gc0 return rc0, bc0 end
     if g > gc0 || g == 0 error("g = $g, there is no critical b") end
     rc = rc0
     f(r,sigma) = Φ(r,sigma)+r*Φl(r,sigma)/2-g^2
     while f(rc0,sigma)*f(rc,sigma)>0 rc = rc+1 end
     rc = brent(r->f(r,sigma),rc0,rc)
     bc = sqrt(rc^2*(g^2-Φ(rc,sigma))/g^2)
     rc, bc
end

# Computation of the minimum r
function Rmin(b,g,sigma)
     if b > 100    # This is the case when Φ -> 0 and g^2*r^2 - b^2*g^2 + Φ = 0
          rmin = b
          return rmin
     end
     rc0, bc0, gc0 = rc0bc0gc0(sigma)
     if g == gc0 return rc0 end    # This is the case of critical b and g
     if g > gc0        # In this case there are no extrema
         rinicial = nextfloat(1.0)
         rfinal = rinicial
         while (g^2-Veff(rinicial,b,g,sigma))*(g^2-Veff(rfinal,b,g,sigma)) > 0 rfinal = rfinal+1 end
          rmin = brent(r->g^2-Veff(r,b,g,sigma),rinicial,rfinal)
          return rmin
     end
     # In this case Veff has a maximum and a minimum, and there is a critical b
     rc, bc = frbc(g, sigma)
     if b == bc return rc end
     if b < bc
         rinicial = nextfloat(1.0)
         rfinal = rinicial
         while (g^2-Veff(rinicial,b,g,sigma))*(g^2-Veff(rfinal,b,g,sigma)) > 0 rfinal = rfinal+1 end
          rmin = brent(r->g^2-Veff(r,b,g,sigma),rinicial,rfinal)
     else
         rfinal = rc
         while (g^2-Veff(rc,b,g,sigma))*(g^2-Veff(rfinal,b,g,sigma)) > 0 rfinal = rfinal+1 end
          rmin = brent(r->g^2-Veff(r,b,g,sigma),rc,rfinal)
     end
     rmin
end

# This function returns the integrand of the function that gives X (chi)
# The integration is done in r (r* in the calculations)
# X = pi - 2b*g*integrate(1/(r^2*sqrt(g^2-Veff)),r,rmin,Inf)
function dX(r,b,g,sigma)
     if g == 0 return 0 end
     rmin = Rmin(b,g,sigma)
     den1 = g^2 - Veff(r,b,g,sigma)
     den2 = -Veffl(rmin,b,g,sigma)*(r-rmin)
     # if den1<0 || den2 < 0
     #      warn("g^2-Veff = $den1 -Veffl*(r-rmin) = $den2, r = $r, b = $b, g = $g")
     # end
     den1 = abs(den1)
     den2 = abs(den2)
     # The lines below compute
     # 2*b*g/r^2*(1/sqrt(den1)-1/sqrt(den2))
     t0 = 2*b*g/r^2
     t1 = 1/sqrt(den1)
     t2 = 1/sqrt(den2)
     if abs(t2/t1) < 0.5
         res = t0*t1*exp(log1p(-t2/t1))
     else
         res = t0*(t1-t2)
     end
     res
end

function X(b,g,sigma)
     if g > 1e5 return 0 end
     if g == 0 || b == 0 return π end
     rc0, bc0, gc0 = rc0bc0gc0(sigma)
     if b == bc0 && g == gc0 return -Inf end
     if 0 < g <= gc0
          rc, bc = frbc(g, sigma)
          if b == bc return -Inf end
     end
     rmin = Rmin(b,g,sigma)
     if b > 100 return 0 end
     if g < 0.2 && b > bc+0.2
           return -4π*sigma^3/g^2/b^6
     end
     int = 2b*g*pi/(2rmin^(3/2)*sqrt(abs(Veffl(rmin,b,g,sigma))))
     try
          int = int + quadgk(r->dX(r,b,g,sigma), rmin, Inf, abstol=1e-3)[1]
     catch
          println("b=$b, g=$g")
          int = int + quadgk(r->dX(r,b,g,sigma), BigFloat(rmin), Inf, abstol=1e-3)[1]
     end
     pi-int
end

function dQ(b,g,sigma)
     2*(1-cos(X(b,g,sigma)))*b
end

function dQd(b,g,sigma,α)
     2*(1+1/sqrt(1+α)*sqrt(π*T)/2g*sin(X(b,g,sigma)/2))*b
end

function dQi(b,g,sigma)
     2*(1+2/3*sin(X(b,g,sigma)/2))*b
end

function Q(g,sigma)
     if g > 1e5 return 0 end
     if g == 0 return Inf end
     rc0, bc0, gc0 = rc0bc0gc0(sigma)
     if g > gc0
          int = quadgk(b->dQ(b,g,sigma), 0, Inf, abstol=1e-3)[1]
     else
          rc, bc = frbc(g, sigma)
          int = quadgk(b->dQ(b,g,sigma), 0, bc-1e-3, abstol=1e-3)[1]
          int += quadgk(b->dQ(b,g,sigma), bc+1e-3, Inf, abstol=1e-3)[1]
     end
end

function Qd(g,sigma,α)
     if g > 1e5 return 0 end
     if g == 0 return Inf end
     rc0, bc0, gc0 = rc0bc0gc0(sigma)
     if g > gc0
          int = quadgk(b->dQ(b,g,sigma), 0, Inf, abstol=1e-3)[1]
     else
          rc, bc = frbc(g, sigma)
          int = quadgk(b->dQd(b,g,sigma,α), 0, bc-1e-3, abstol=1e-3)[1]
          int = int + quadgk(b->dQ(b,g,sigma), bc+1e-3, Inf, abstol=1e-3)[1]
     end
     int
end

function Qi(g,sigma)
     if g > 1e5 return 0 end
     if g == 0 return Inf end
     rc0, bc0, gc0 = rc0bc0gc0(sigma)
     if g > gc0
          int = quadgk(b->dQ(b,g,sigma), 0, Inf, abstol=1e-3)[1]
     else
          rc, bc = frbc(g, sigma)
          int = quadgk(b->dQi(b,g,sigma), 0, bc-1e-3, abstol=1e-3)[1]
          int = int + quadgk(b->dQ(b,g,sigma), bc+1e-3, Inf, abstol=1e-3)[1]
     end
     int
end

# Function that interpolates values of Q
# This is necessary because the function Q is expensive to compute,
# so we compute a few points and interpolate a function.
# Usage:
# Qa = interpQa(sigma)
# then use Qa(g)
function interpQa(sigma)
    ag = 0.1:1:10     # array with the g values
    mQ = [Q(g,sigma) for g in ag]         # array with the Q values on the grid
    iQ = interpolate(mQ, BSpline(Cubic(Line())), OnGrid())    # interpolate on the grid
    sQ = scale(iQ, ag)
    x->sQ[x]
end

# Usage:
# Qb = interpQb(sigma)
# then use Qb(g,α)
function interpQb(sigma)
    ag = 0.1:1:10     # array with the g values
    aα = 0:1:5
    mQ = [Qd(g,sigma,α) for g in ag, α in aα]         # matrix with the Q values on the grid
    iQ = interpolate(mQ, BSpline(Cubic(Line())), OnGrid())    # interpolate on the grid
    sQ = scale(iQ, ag, aα)
    (x,y) -> sQ[x,y]
end

# Usage:
# Qc = interpQc(sigma)
# then use Qc(g)
function interpQc(sigma)
    ag = 0.1:1:10     # array with the g values
    mQ = [Qi(g,sigma) for g in ag]         # array with the Q values on the grid
    iQ = interpolate(mQ, BSpline(Cubic(Line())), OnGrid())    # interpolate on the grid
    sQ = scale(iQ, ag)
    x->sQ[x]
end

function dΩ11(x,g,sigma,T)
    R = 0.388/sigma

    α = 0 # x[1]  # the x[] entries are the constants to be found by the Nelder-Mead method
    ϕ = 0.9*(1-1/(1+(R/x[1])^x[2]))
    θ = ϕ*exp(-α*g^2/T)
    #θ(g)=ϕ*exp(-x[1]*g^2/T)
    γ = 1 #x[3]
    g^5*θ*Qb(g,α)*exp(-g^2/T) + g^5*(1-θ)*γ*Qa(g)*exp(-g^2/T) +
    g^5*(1-θ)*(1-γ)*Qc(g)*exp(-g^2/T)
end

Ω11(x,sigma,T) = 1/T^3*quadgk(g-> dΩ11(x,g,sigma,T) ,0 , 10, abstol=1e-3)[1]

function X2(x)
    aΩ11 = zeros( lenR )
    for i in 1:lenR
        aΩ11[i] = afΩ11[i](x)
    end
    sum((aΩ11-aΩ11e).^2)
end


for R in aRe
    tic()
     sigma=0.388/R
     rc0,bc0,gc0 = rc0bc0gc0(sigma)
     println("sigma = $sigma, rc0 = $rc0, bc0 = $bc0, gc0 =  $gc0")
     Qa = interpQa(sigma)   # returns the function Qa(g) for this sigma
     Qb = interpQb(sigma)   # returns the function Qb(g,α) for this sigma
     Qc = interpQc(sigma)   # returns the function Qc(g) for this sigma
     push!(afΩ11, x->Ω11(x,sigma,T))   # array of functions
     toc()
end

xi = ones( lenR )

   # for i in 1:length(aΩ11e)
   #     sigma=0.388/aR
   #     T=0.0987
   #     R=aRe
   #    minx = Optim.minimizer(optimize(x -> (Ω11(x,sigma,T) - aΩ11e[i])^2, [1.3, 2.50], NelderMead()))
   #    println(minx)
   # end

x, X2min = NelderMead(x->X2(x), [2.50;17.2])
# println(x[1],"   ",x[2],"    ",x[3])
println(x[1],"   ",x[2])
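
(For comparison, a minimal sketch of how this last call could be written with Optim.jl, assuming the definitions above; an illustration only, not tested against the full code:)

res = optimize(X2, [2.50, 17.2], NelderMead())   # Optim.jl's Nelder-Mead on the same objective
x = Optim.minimizer(res)
X2min = Optim.minimum(res)
println(x[1], "   ", x[2])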

See my comment … thanks.

Dear @leaoshark,

I am reluctant to help you further for the following reasons, which I hope you will agree with:

  • the code is becoming too long and you are asking me to search the Internet and fill in the code for brent.jl
  • it is not clear what quantity you want to minimize: in your previous email, you write “and i want to know for what x[1] and x[2] coefficient , dΩ11(x,g,sigma,T) = aΩ11e”, but now it is the integral of dΩ11;
  • you write that Ω11 = quadgk ( g-> dΩ11, 0, 10) but you say that “dΩ11 is not a calculation, only a definition”: I do not know what that means;
  • I cannot prove that your integral is finite, and if Optim.jl's optimization routine fails, I would still have no clue whether the culprit is Optim.jl or the integrand.

If you need further help, please make a minimal example of the code (20-30 lines at most), check that the integrand makes sense (starting this thread with the wrong claim that MATLAB finds the solution does not help your cause) and feel free to post again with a detailed explanation.

michele


Hello Michele … thnaks for your time …

I think now you know what I want … for which values of x[1] and x[2] is Ω11 = aΩ11e (with Ω11 = quadgk(g -> dΩ11, 0, 10))? … the Ω11 integral converges for g near 1.5 … it is a perfectly fine integral, no problems … like I said before, neldermead.jl works, but gives wrong answers … the same problem in MATLAB, using fminsearch, works and gives the best answers, but takes days and days to run … so, now that you know what I want, can you rewrite a few lines to make X2(x) run with Optim.jl??

thanks

I want you to succeed here so please read carefully.

Give a complete example. By that I mean provide us with a function that takes in a vector and spits out a scalar. This is the objective. You have failed to do so so far.
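(A purely illustrative sketch of what such an objective looks like; the names here are made up:)

objective(x) = (x[1] - 1.0)^2 + (x[2] + 2.0)^2   # takes a vector, returns a scalar
# optimize(objective, [0.0, 0.0], NelderMead()) is then well-posed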

Follow our instructions. We’ve told you 4-5 or even 6 times how to use Optim.jl. Please state what you don’t understand instead of posting walls and walls of code that doesn’t even run.

Please keep the discussion in one place. Opening an issue at Optim is not the right thing to do. Following the instructions here is.

Listen, I get it. I've been where you are, where everything is hard to understand, but it is crucial that you follow our advice. If you don't, you end up discouraging people from helping you.


Look, now I have a kind of minimal example, and if I understand this, I will be able to apply it to my case.

For what values of x[1] and x[2] is A = A_exp?

but I do not know how to put A in a vector, because A = A(R) and R has many values …

using Optim, PyPlot

A_exp = [ 1 2 3 4 5]
R = [1 0 5 7 8]

function dA(x,g)

  exp(x[1]*R^2*x[2]*g)
end

A = quadgk(g-> dA, 0 , 10)[1]

function X2(x)

  (A - A_exp).^2

end

optmize(X2,[0.0;0.0])

println(x[1],"  ",x[2])
plot(R,A_exp)
plot(R,A)

Now I have a minimal example … can you help me? I already wrote it here.

Nagging people because they have not responded to your MWE for 1 minute is a bit rude.

Your code has a few fundamental problems:

  1. you need to be using QuadGK for quadgk.
  2. A_exp and R end up as 1×5 matrices (see the small snippet below); are you sure that this is what you want, not vectors?
  3. R^2 will not work.
  4. quadgk(g-> dA, 0 , 10)[1] will not work (did you mean g -> dA(x, g)? but where is the x then? are you trying to define an A(x) function?)

It is apparent that you have basic problems with understanding the language. Reading through the first few chapters of the manual, then some tutorials, would be a good starting point.
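(To illustrate point 2 above with a quick sketch of my own:)

A = [1 2 3 4 5]      # space-separated literal: a 1×5 matrix
v = [1, 2, 3, 4, 5]  # comma-separated literal: a 5-element vector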

This is cross-posted, which is making everyone duplicate their efforts. I gave a response here that matches a lot of what @Tamas_Papp says:

https://github.com/JuliaNLSolvers/Optim.jl/issues/482#issuecomment-341153635

But yes, a lot of this is just basic problems with not understanding programming (not Julia), and so I would recommend reading the manual on defining functions. You have to pass functions their arguments, otherwise it’s not clear what is even meant!

I find it problematic that @leaoshark has been asked not to cross-post multiple times, but has not given up doing it. While I generally prefer light-handed moderation, I consider this an abuse of the goodwill of the community.


Like @Tamas_Papp and @ChrisRackauckas, I too find your behaviour inappropriate: posting an issue on Optim.jl is not correct.

The solution posted by @ChrisRackauckas is what you want, here expanded to do what you probably have in mind. The only place where Optim fails to find a minimum is for R=0, your second entry, because the integrand is a constant. This is the same reason the starting point for your optimization should be something different than [0,0].

using QuadGK
using Optim
using PyPlot

A_exp = [1.0, 2, 3, 4, 5]
A_opt = zeros(length(A_exp))
R = [1.0, 0, 5, 7, 8]

for i = 1:length(R)
    dA(x,g) = exp(x[1]*R[i]^2*x[2]*g)
    A(x) = quadgk(g -> dA(x,g), 0 , 10)[1]
    X2(x) = (A(x) - A_exp[i])^2
    opt = optimize(X2, [0.01, 0.01])
    println(Optim.minimizer(opt))
    A_opt[i] = Optim.minimum(opt)
end

plot(R, A_exp)
plot(R, A_opt)

I sent a warning and reduced the trust level to 0, so the next few posts will be moderated until sufficient trust is regained.

Hello. Thanks a lot …

but I want to know only one x[1] and only one x[2] that give the best fit for all values of R …

not one x[1] and x[2] for each R …

the answers, for a tiny change of starting point, give this:

[0.809457, -1.23549]
[0.01, 0.01]
[0.0901656, -0.141833]
[0.0572312, -0.0795807]
[0.0513825, -0.0484605]

[1.30697, -0.765069]
[0.5, 0.5]
[0.775147, -0.0164976]
[0.621272, -0.00733098]
[0.306876, -0.00811434]


I do not know what these answers are ... I only want ONE x[1] and only one x[2] that satisfy X2 ... can Optim.jl do that?

I have changed the function dA(x,g) to compute a vector of 5 elements, one entry for each element of R. The integration is done element-wise: each element of the vector-valued g -> dA(x,g) is integrated over the range [0, 10]. Optim then minimizes the squared 2-norm of the vector difference A(x) - A_exp.

As for your point about a tiny change of starting point giving you different values of x, the problem is that your integrand depends only on the product x[1]*x[2]: the optimal value of that product remains constant across the different runs, as you can check. Indeed, your problem is univariate (a one-variable sketch follows the code below).

using QuadGK
using Optim
using PyPlot

A_exp = [1.0, 2, 3, 4, 5]
R = [1.0, 0, 5, 7, 8]

dA(x,g) = exp.(x[1]*R.^2*x[2]*g)
A(x) = quadgk(g -> dA(x,g), 0 , 10)[1]
X2(x) = sum(abs2, A(x) - A_exp)
opt = optimize(X2, [0.01, 0.01])
println(Optim.minimizer(opt))
A_opt = Optim.minimum(opt)
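
Since the objective effectively depends only on c = x[1]*x[2], here is a sketch of the univariate reformulation (my own illustration; the names dA1/A1/X2_1 and the search interval [-1, 0] are assumptions, not from the thread):

dA1(c,g) = exp.(c*R.^2*g)                  # same integrand, parametrized by the product c
A1(c) = quadgk(g -> dA1(c,g), 0, 10)[1]
X2_1(c) = sum(abs2, A1(c) - A_exp)
opt1 = optimize(X2_1, -1.0, 0.0)           # 1D optimize(f, lower, upper), Brent by default
println(Optim.minimizer(opt1))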