JuMP optimization with vector input and analytical gradient

In JuMP’s manual there are simple instructions for either optimizing with a user-provided analytical gradient function or providing a vector input with splatting: Nonlinear Modeling · JuMP. I have been trying to combine the two, but my efforts have been in vain so far. Is it possible? Specifically, I am trying to minimize a sum of Rosenbrock functions, and I am using JuMP for benchmarking. But even a small toy example to get me started would be appreciated. Thanks in advance.

Docs:
https://jump.dev/JuMP.jl/stable/manual/nlp/#Multivariate-functions

I guess you want something like the following (I have not run it, so there may be typos, etc.):

using JuMP

f(x...) = (x[1] - 1)^2 + (x[2] - 2)^2
function ∇f(g::Vector{T}, x::T...) where {T}
    g[1] = 2 * (x[1] - 1)
    g[2] = 2 * (x[2] - 2)
    return
end
model = Model()
register(model, :my_square, 2, f, ∇f)
@variable(model, x[1:2] >= 0)
@NLobjective(model, Min, my_square(x...))

You should also read:
https://jump.dev/JuMP.jl/stable/background/should_i_use/#Black-box,-derivative-free,-or-unconstrained-optimization
There are other tools in Julia that may be more suited if you have an unconstrained problem.
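
For example, here is a minimal unconstrained sketch of the same toy problem with Optim.jl, one possible alternative (untested; I dropped the x >= 0 bounds since the unconstrained minimizer [1, 2] already satisfies them, and the starting point is arbitrary):

using Optim

f(x) = (x[1] - 1)^2 + (x[2] - 2)^2

# In-place gradient, same formulas as above.
function g!(G, x)
    G[1] = 2 * (x[1] - 1)
    G[2] = 2 * (x[2] - 2)
    return
end

result = optimize(f, g!, zeros(2), LBFGS())
Optim.minimizer(result)  # should be approximately [1.0, 2.0]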

Thanks a lot for the quick answer. Yes, I’ve been trying variations on your suggestion, but I keep running into the same error message:

ERROR: MethodError: no method matching ∇f(::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, ::Float64, ::Float64)
Closest candidates are:
  ∇f(::Vector{T}, ::T...) where T at REPL[5]:1
Stacktrace:
  [1] (::JuMP.var"#148#151"{typeof(∇f)})(g::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, x::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true})
...

But actually, you solved my problem in another way: I’ve realized that NLopt is much better suited to my needs, so that’s what I’ll be using. No need to keep trying to do it with JuMP.
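
For reference, the pattern I have in mind with NLopt.jl looks roughly like this (an untested sketch based on NLopt.jl’s usual calling convention, where the objective receives the point x and fills grad in place whenever a gradient is requested; the algorithm and starting point here are arbitrary choices):

using NLopt

function objective(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 2 * (x[1] - 1)
        grad[2] = 2 * (x[2] - 2)
    end
    return (x[1] - 1)^2 + (x[2] - 2)^2
end

opt = Opt(:LD_LBFGS, 2)        # gradient-based, unconstrained here
opt.min_objective = objective
(minf, minx, ret) = optimize(opt, [0.5, 0.5])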

Ah. That’s a bug in the documentation. The example should read (note the AbstractVector):

using JuMP, Ipopt
f(x...) = (x[1] - 1)^2 + (x[2] - 2)^2
function ∇f(g::AbstractVector{T}, x::T...) where {T}
    g[1] = 2 * (x[1] - 1)
    g[2] = 2 * (x[2] - 2)
    return
end
model = Model(Ipopt.Optimizer)
register(model, :my_square, 2, f, ∇f)
@variable(model, x[1:2] >= 0)
@NLobjective(model, Min, my_square(x...))
optimize!(model)
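
For the sum of Rosenbrock functions you mentioned, the same pattern would look roughly like this (an untested sketch; the dimension n is just illustrative, and I kept the x >= 0 bounds from the example above):

using JuMP, Ipopt

n = 4  # illustrative dimension

# Sum-of-Rosenbrock objective, written with splatted scalar arguments.
rosen(x...) = sum(100 * (x[i+1] - x[i]^2)^2 + (1 - x[i])^2 for i in 1:length(x)-1)

# Analytical gradient, filled in place; note the AbstractVector signature.
function ∇rosen(g::AbstractVector{T}, x::T...) where {T}
    fill!(g, zero(T))
    for i in 1:length(x)-1
        g[i]   += -400 * x[i] * (x[i+1] - x[i]^2) - 2 * (1 - x[i])
        g[i+1] += 200 * (x[i+1] - x[i]^2)
    end
    return
end

model = Model(Ipopt.Optimizer)
@variable(model, x[1:n] >= 0)
register(model, :rosen, n, rosen, ∇rosen)
@NLobjective(model, Min, rosen(x...))
optimize!(model)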

I opened an issue: User-defined gradients need to accept AbstractVector · Issue #2638 · jump-dev/JuMP.jl · GitHub. Apologies for the confusion!

It worked! Thanks a lot for the help.
