Gradient not updating in call to optimize (Optim.jl)

The problem is that you need to fill G with the gradient. Your function will receive either G = nothing or G as a vector. Think of a vector as a bucket: a container that holds values.

If Optim wants gradients, it hands fg! a bucket (G) and expects fg! to fill that bucket with gradient values. What your fg! does instead, upon receiving the bucket, is fill a different bucket.
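To see the difference concretely, here is a minimal sketch (the helper names rebind_g! and fill_g! are made up for illustration) contrasting rebinding a local name with mutating the caller's vector in place:

```julia
# Rebinds the local name G to a brand-new vector;
# the caller's vector is left untouched.
function rebind_g!(G)
    G = [1.0, 2.0]
    return nothing
end

# Broadcast-assigns into the caller's vector,
# overwriting its contents element by element.
function fill_g!(G)
    G .= [1.0, 2.0]
    return nothing
end

g = [0.0, 0.0]
rebind_g!(g)   # g is still [0.0, 0.0]
fill_g!(g)     # g is now [1.0, 2.0]
```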

Observe:

julia> g = [100.0, 200.0, 300.0];

julia> fg!(0,g,y,storage)
7.253886037615595

julia> g'
1×3 adjoint(::Vector{Float64}) with eltype Float64:
 100.0  200.0  300.0

The smallest change to make your program work is to replace G = gradloss(...), which rebinds your local G to a different vector, with a broadcast assignment that copies the values gradloss returns into G using .= (notice the dot):

julia> function fg!(F,G,𝜹,storage)
           #Retrieve quantities out of storage
           J = storage["J"]
           x = storage["x"]
           y = x[:,:y]
           jlo = x[:,:jlo]
           jhi = x[:,:jhi]

           #Do common calculations
           ŷ = m(x,𝜹)

           if G !== nothing
               G .= gradloss(ŷ,y,jlo,jhi,J)
           end
           if F !== nothing
               return loss(ŷ,y)
           end
       end
fg! (generic function with 1 method)

julia> fg!(0,g,y,storage)
7.253886037615595

julia> g'
1×3 adjoint(::Vector{Float64}) with eltype Float64:
 1.06837  -0.730916  1.44107

One more thing: your gradients are wrong, so this still won’t work. Remove the unnecessary type annotations, and then compare your analytic gradient against automatic differentiation:

julia> using ForwardDiff

julia> ForwardDiff.gradient(x -> fg!(0,nothing,x,storage), y)
3-element Vector{Float64}:
  0.7388941121629948
 -0.06398338695769656
 -0.11786820640342854

julia> g
3-element Vector{Float64}:
  1.0683716323419565
 -0.7309162697393076
  1.4410658967982857
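Once the gradient is fixed, you hand fg! to optimize via Optim.only_fg!, which tells Optim that one function computes both the objective and the gradient. Here is a self-contained toy example of the same pattern (toy_fg! and the starting point are made up; substitute your own fg! closed over storage):

```julia
using Optim

# Toy objective following the same F/G pattern: minimize sum(x.^2).
function toy_fg!(F, G, x)
    if G !== nothing
        G .= 2 .* x          # fill the bucket in place
    end
    if F !== nothing
        return sum(abs2, x)  # objective value
    end
end

# only_fg! wraps a single function that handles both F and G.
result = optimize(Optim.only_fg!(toy_fg!), [1.0, -2.0, 3.0], LBFGS())
Optim.minimizer(result)  # should be ≈ zeros(3)
```

In your case that would look like optimize(Optim.only_fg!((F, G, 𝜹) -> fg!(F, G, 𝜹, storage)), 𝜹₀, LBFGS()).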