Optim.jl stops after a single iteration

I can’t seem to find this anywhere, so apologies in advance if this trivial question has indeed been answered elsewhere…

I have a differentiable periodic function:

using LinearAlgebra, Optim

function fitness(x::Vector)
    LV = ones(105)
    QV = ones(105, 10)
    Qx = zeros(size(QV, 1))
    V = dot(LV, 1 .- cos.(LinearAlgebra.mul!(Qx, QV, x)))
    return V
end

with gradient

function g!(x::Vector, storage::Vector)
    LV = ones(105)
    QV = ones(105,10)
    Qx = zeros(size(QV,1))
    storage::Vector = vcat([dot(LV,QV[:,i] .* sin.(LinearAlgebra.mul!(Qx,QV,x))) for i ∈ 1:size(QV,2)]...)
end

and running

res = optimize(fitness, g!, zeros(10) .+ 0.01, method = LBFGS())

gives a single iteration, after which Optim.minimizer(res) just returns a 10-element vector with 0.01 in every entry, i.e. the starting point. What am I doing wrong here? My actual problem is more complicated than this: LV and QV have the shapes above, but the values of LV span huge hierarchies (I initially thought that was the issue, but the optimization doesn’t run even for this MWE), and the entries of QV are integers in -10:10. I’d be really grateful if someone could point me in the right direction!

PS I tried to include

function h!(x::Vector, storage::Matrix)
    LV = ones(105)
    QV = ones(105,10)
    Qx = zeros(size(QV,1))
    storage::Matrix = hcat([[dot(LV,QV[:,i] .* QV[:,j] .* cos.(LinearAlgebra.mul!(Qx,QV,x))) for i ∈ 1:size(QV,2)] for j ∈ size(QV,2)]...)
end

and use method = Newton(), but that threw the error

no method matching h!(::Matrix{Float64}, ::Vector{Float64})

and I don’t really understand why! Thanks in advance 🙂

Your arguments to h! are backwards, no?
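
For context: Optim.jl calls the mutating derivative functions with the output buffer first, i.e. g!(G, x) and h!(H, x). That is also why only h! raised a MethodError: the backwards g! takes two Vectors, so it still dispatched — it just computed with its arguments swapped. A toy sketch of the expected shape, using a made-up toy_g! for the objective f(x) = sum(abs2, x):

# Optim.jl passes the storage buffer first and the evaluation point second
function toy_g!(G::Vector, x::Vector)
    G .= 2 .* x   # gradient of the toy objective sum(abs2, x), written in place
end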

Note that this does not update the contents of storage like it’s supposed to; it rebinds the local name storage to a brand-new array, which Optim never sees. Try using .= instead, to update the existing array in place.
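
With both fixes, a minimal corrected sketch of the MWE might look like this (note the Hessian comprehension also seems to need 1:size(QV, 2) for j — the original for j ∈ size(QV, 2) only visits j = 10):

function g!(storage::Vector, x::Vector)
    LV = ones(105)
    QV = ones(105, 10)
    Qx = QV * x
    # .= fills the buffer Optim passed in instead of rebinding the local name
    storage .= [dot(LV, QV[:, i] .* sin.(Qx)) for i ∈ 1:size(QV, 2)]
end

function h!(storage::Matrix, x::Vector)
    LV = ones(105)
    QV = ones(105, 10)
    Qx = QV * x
    # a two-dimensional comprehension builds the full 10×10 Hessian in place
    storage .= [dot(LV, QV[:, i] .* QV[:, j] .* cos.(Qx)) for i ∈ 1:size(QV, 2), j ∈ 1:size(QV, 2)]
end

res = optimize(fitness, g!, h!, zeros(10) .+ 0.01, Newton())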

Oh dear! I was reading an older version of the documentation… doh! Thanks so much!

Great catch! Thank you, this indeed works now 🙂
