Issue with memoization in JuMP for NLP when adding @NLobjective

Hello,

I’ve been following this JuMP tutorial on memoization for NLPs with user-defined, vector-output functions.
One issue I am currently having is that once I have a memoized function registered with the model using

register(model, :foo_1, 2, memoized_foo[1]; autodiff = true)

I would like to add the objective by splatting the inputs, like

@NLobjective(model, Max, foo_1(x...))  # doesn't work

instead of

@NLobjective(model, Max, foo_1(x[1], x[2]))   # works

The motivation is that my NLPs typically have hundreds of variables, so writing out x[1], x[2], ... for all of them would be very impractical.

Below is a minimum example that I am trying to work with. I am on Julia 1.8.5 with JuMP v1.9.0.
I’d appreciate any suggestions, thank you in advance!

using JuMP, Ipopt

function_calls = 0
function foo(z...)
    x = z[1]
    y = z[2]
    global function_calls += 1
    common_term = x^2 + y^2
    term_1 = sqrt(1 + common_term)
    term_2 = common_term
    return term_1, term_2
end

"""
    memoize(foo::Function, n_outputs::Int)

Take a function `foo` and return a vector of length `n_outputs`, where element
`i` is a function that returns the equivalent of `foo(x...)[i]`.

To avoid duplication of work, cache the most-recent evaluations of `foo`.
Because `foo_i` is auto-differentiated with ForwardDiff, our cache needs to
work when `x` is a `Float64` and a `ForwardDiff.Dual`.
"""
function memoize(foo::Function, n_outputs::Int)
    last_x, last_f = nothing, nothing
    last_dx, last_dfdx = nothing, nothing
    function foo_i(i, x::T...) where {T<:Real}
        if T == Float64
            if x != last_x
                last_x, last_f = x, foo(x...)
            end
            return last_f[i]::T
        else
            if x != last_dx
                last_dx, last_dfdx = x, foo(x...)
            end
            return last_dfdx[i]::T
        end
    end
    return [(x...) -> foo_i(i, x...) for i in 1:n_outputs]
end

# memoize function
memoized_foo = memoize(foo, 2)
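
# Optional sanity check (not part of the original tutorial, just an assumption
# about how the cache should behave): evaluating both memoized outputs at the
# same point should call `foo` only once, since the second evaluation is
# served from the cache.
memoized_foo[1](0.2, 0.3); memoized_foo[2](0.2, 0.3)
@assert function_calls == 1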

# build model
model = Model(Ipopt.Optimizer)
lx = [-1,-1]
ux = [ 1, 1]
x0 = [0.2, 0.3]
@variable(model, lx[i] <= x[i=keys(lx)] <= ux[i], start=x0[i])

# register function
register(model, :foo_1, 2, memoized_foo[1]; autodiff = true)

# add registered function to model
#@NLobjective(model, Max, foo_1(x[1], x[2]))     # works but tedious to write
@NLobjective(model, Max, foo_1(x...))                # doesn't work...

This is an interesting one. The minimal reproducer is:

julia> using JuMP

julia> lx = [1, 1]
2-element Vector{Int64}:
 1
 1

julia> model = Model();

julia> @variable(model, x[i = keys(lx)])
1-dimensional DenseAxisArray{VariableRef,1,...} with index sets:
    Dimension 1, [1, 2]
And data, a 2-element Vector{VariableRef}:
 x[1]
 x[2]

julia> register(model, :foo_1, 2, (x...) -> sum(x); autodiff = true);

julia> @NLobjective(model, Max, foo_1(x...))
ERROR: MethodError: no method matching to_shape(::Tuple{LinearIndices{1, Tuple{Base.OneTo{Int64}}}})
Closest candidates are:
  to_shape(::Tuple{}) at abstractarray.jl:752
  to_shape(::Tuple{Vararg{Int64, N}} where N) at abstractarray.jl:753
  to_shape(::Tuple{Vararg{Union{Integer, AbstractUnitRange}, N}} where N) at abstractarray.jl:754
  ...
Stacktrace:

It’s because JuMP doesn’t know how to splat (x...) a DenseAxisArray.
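
For what it’s worth, plain Julia is happy to splat a DenseAxisArray into an ordinary function call; it’s the nonlinear macro’s handling of the splatted expression that trips up. A quick check in the same session (foo_1_fn here is just an illustrative stand-in for the registered function):

julia> foo_1_fn = (x...) -> sum(x);

julia> foo_1_fn(x...)
x[1] + x[2]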

There are a couple of work-arounds.

One is to use

julia> args = Array(x)
2-element Vector{VariableRef}:
 x[1]
 x[2]

julia> @NLobjective(model, Max, foo_1(args...))

julia> print(model)
Max foo_1(x[1], x[2])
Subject to

The other is to create a vector instead of a DenseAxisArray:

model = Model(Ipopt.Optimizer)
lx = [-1,-1]
ux = [ 1, 1]
x0 = [0.2, 0.3]
@variable(model, lx[i] <= x[i=eachindex(lx)] <= ux[i], start=x0[i])
register(model, :foo_1, 2, memoized_foo[1]; autodiff = true)
@NLobjective(model, Max, foo_1(x...))

or

model = Model(Ipopt.Optimizer)
lx = [-1,-1]
ux = [ 1, 1]
x0 = [0.2, 0.3]
@variable(model, lx[i] <= x[i=1:length(lx)] <= ux[i], start=x0[i])
register(model, :foo_1, 2, memoized_foo[1]; autodiff = true)
@NLobjective(model, Max, foo_1(x...))

The root of the problem is calling keys(lx) on a vector instead of eachindex(lx): keys returns a LinearIndices, so JuMP builds a DenseAxisArray rather than a plain Vector.
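
You can see the difference between the two index sets directly (output from a recent Julia session; the exact type printing may differ slightly between versions):

julia> lx = [-1, -1];

julia> keys(lx)
2-element LinearIndices{1, Tuple{Base.OneTo{Int64}}}:
 1
 2

julia> eachindex(lx)
Base.OneTo(2)

A Base.OneTo index set gives you a plain Vector of variables, which the nonlinear macros can splat.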

I’ll take a look to see if we can get the original case to work.

I’ve opened a PR which should fix the original case: [Nonlinear] fix splatting when collection does not support reverse by odow · Pull Request #2120 · jump-dev/MathOptInterface.jl · GitHub.

For now I will work with eachindex, thank you!
