Passing an array of variables to a user-defined non-linear function

I am trying to optimize a multivariate function with JuMP, but can’t find a way to pass the arguments.

The current code is:

using JuMP
n = 100
m = Model()
@variable(m, 0 <= x[1:n] <= 1)
f(x)  = rand()   # doesn't matter here
df(x) = rand(n)  # "
JuMP.register(m, :obj, n, (x...) -> f(x), (g, x...) -> (g[:] = df(x)))
# and now the line in question:
@NLobjective(m, Max, obj(x))
solve(m)

No matter how I write the @NLobjective, I get the error

ERROR: Incorrect number of arguments for "obj" in nonlinear expression.
 in error(::String) at ./error.jl:21

I also tried obj(x...), but this led to the same result.

How would I specify this the correct way?

I tried methods(obj) and got nothing. Is obj part of the JuMP module?
You also have :obj, which kind of reads like a column header(?)

I wouldn’t have thought ... is valid code


using JuMP

function f(x...)
    rand(length(x))
end;

m = Model()
param = 100;
@variable(m, 0 <= pvars[1:param] <= 1)
JuMP.register(m, :f, param, f, f)
@NLobjective(m, Max, f(pvars...))
solve(m)

I define obj as a JuMP-specific user-defined function via the register call (cf. the JuMP docs).

Concerning the splatting operator ..., you might want to look up the Julia docs.
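
A quick plain-Julia sketch of what ... does (nothing JuMP-specific; g and v are just illustrative names):

g(args...) = length(args)   # args arrives as a tuple of the individual arguments

g(1, 2, 3)        # 3
v = [10, 20, 30]
g(v...)           # splats the vector into three separate arguments, again 3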

Anyone else got some ideas?

Do you need to define the obj function?

  • Edit: You’ve registered the :obj symbol and passed an anonymous function, but there is no function named obj outside the JuMP model, which is where you’re calling obj(x).

  • II: from the JuMP docs (illustrated at the end of this post)

This syntax is not supported: Nonlinear Modeling — JuMP -- Julia for Mathematical Optimization 0.17 documentation

All expressions must be simple scalar operations. You cannot use dot, matrix-vector products, vector slices, etc.

If all you’re doing is minimizing an unconstrained function, JuMP does very little for you. I’d recommend using Optim or calling Ipopt directly.
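
To illustrate the quoted restriction (a rough sketch in JuMP 0.17 syntax, reusing m, x, n, and the registered obj from the original post): generator sums over scalar expressions are fine, but vector operations and splatting are not.

@NLobjective(m, Min, sum(x[i]^2 for i = 1:n))   # OK: built from scalar operations
# @NLobjective(m, Min, dot(x, x))               # not supported: vector operation
# @NLobjective(m, Max, obj(x...))               # not supported: splatting into the macro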

Perhaps what you are looking for can be achieved with @eval. For example, taking n = 3, the code looks like:

using JuMP, Ipopt

m = Model(solver=IpoptSolver(print_level=0))
n = 3
@variable(m, 0 <= x[1:n] <= 1)
f(x...)  = rand()
df(g, x...) = (g[:] = rand(n))
JuMP.register(m, :obj, n, f, df)

The problematic @NLobjective could be specialized for n=3 as:

@NLobjective(m, Max, obj(x[1],x[2],x[3]))

and this would work. To make it work for other n defined at runtime, we could build up this expression and @eval it, as follows:

@eval @NLobjective(m, Max, $(Expr(:call, :obj, [Expr(:ref,:x,i) for i=1:n]...)))
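
For clarity, the comprehension just builds the same call expression that the n=3 version spells out by hand; printing it (sketch) gives:

julia> Expr(:call, :obj, [Expr(:ref, :x, i) for i = 1:3]...)
:(obj(x[1], x[2], x[3]))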

This works, but I’m not really sure it’s the right way to solve the underlying optimization problem.

That’s also valid, but I would only recommend it to people who understand what’s going on there.

@Dan: Thank you for the meta way :slight_smile:
Being a macro, though, I had hoped that @NLobjective would be able to do that transformation itself. Do you think this might be worth a ticket?

If all you’re doing is minimizing an unconstrained function, JuMP does very little for you. I’d recommend using Optim or calling Ipopt directly.

I also have linear constraints, so Ipopt would be the way to go.
I already implemented the optimization using NLopt, but was hoping to find a more optimizer-agnostic formulation using JuMP (although for now it only uses Ipopt for these problems).

I suppose MathProgBase is the other way to go then?

Do I need to define all the methods for AbstractNonlinearModel as in the docs?

How does JuMP handle nonlinear problems with Ipopt without supplying second derivatives (the Ipopt docs say these are necessary)?

Further reading helped :slight_smile:

Ipopt also includes an L-BFGS Hessian approximation. This is enabled automatically via MathProgBase if the :Hess feature is not available.
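
(If you want to force the approximation rather than rely on the feature detection, the standard Ipopt option can, as far as I know, be passed through the solver constructor, just like print_level in the snippet earlier in the thread:)

using Ipopt
solver = IpoptSolver(print_level = 0, hessian_approximation = "limited-memory")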

So much for the theory, let’s see if I can get it running!

Good news: The MathProgBase approach works :smile:

Bad news: When trying your JuMP eval suggestion I get the error

ERROR: LoadError: UndefVarError: m not defined

Interpolating it with $m leads to the next error

ERROR: LoadError: UndefVarError: x not defined

where I don’t know how to continue, since my ‘meta’ is rather rusty :wink:

Guess I’ll stick to the MathProgBase solution for now.
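
For anyone finding this later, here is a rough sketch of the kind of MathProgBase setup that works here (Julia 0.6 syntax; the ToyEval evaluator, objective, and bounds are placeholders, not my actual problem):

using MathProgBase, Ipopt

# Placeholder evaluator: provides only objective value and gradient (no :Hess),
# so Ipopt falls back to its limited-memory Hessian approximation.
struct ToyEval <: MathProgBase.AbstractNLPEvaluator end

MathProgBase.initialize(d::ToyEval, requested_features::Vector{Symbol}) = nothing
MathProgBase.features_available(d::ToyEval) = [:Grad, :Jac]
MathProgBase.eval_f(d::ToyEval, x) = sum(x.^2)              # placeholder objective
MathProgBase.eval_grad_f(d::ToyEval, g, x) = (g[:] = 2x)
MathProgBase.eval_g(d::ToyEval, g, x) = nothing             # no nonlinear constraints here
MathProgBase.jac_structure(d::ToyEval) = (Int[], Int[])
MathProgBase.eval_jac_g(d::ToyEval, J, x) = nothing

n = 100
nlp = MathProgBase.NonlinearModel(IpoptSolver(print_level = 0))
MathProgBase.loadproblem!(nlp, n, 0, zeros(n), ones(n), Float64[], Float64[], :Min, ToyEval())
MathProgBase.setwarmstart!(nlp, fill(0.5, n))
MathProgBase.optimize!(nlp)
MathProgBase.status(nlp), MathProgBase.getsolution(nlp)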

Glad to hear MathProgBase is working. It is a cleaner way to go (and probably more robust to changes). As for @eval, it works in the global scope, and thus would not find m and x if they are defined locally inside a function. It is mainly a workaround, but it also points to the fact that metaprogramming, i.e. code generation, can sometimes solve thorny problems, especially syntactic ones.
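
A minimal sketch of that scoping behaviour (in a fresh session; the function name is just illustrative):

function local_eval_demo()
    m = 1                    # local to the function
    @eval m + 1              # @eval evaluates in the global (module) scope, where no m exists
end

local_eval_demo()            # ERROR: UndefVarError: m not defined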

A nicer workaround for this has been proposed at Julia+JuMP: variable number of arguments to function - Stack Overflow. It uses the raw expression input format instead of macros.
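
The idea there, roughly (a sketch assuming JuMP 0.17’s raw expression input via JuMP.setNLobjective; f below is a placeholder objective, and autodiff=true is used only to keep the sketch short):

using JuMP, Ipopt

n = 100
m = Model(solver = IpoptSolver(print_level = 0))
@variable(m, 0 <= x[1:n] <= 1)

f(x...) = sum(x)                             # placeholder objective
JuMP.register(m, :f, n, f, autodiff = true)

# Raw expression input: splice the JuMP variables directly into the call expression,
# bypassing the @NLobjective macro (which does not accept splatting).
JuMP.setNLobjective(m, :Max, Expr(:call, :f, x...))
solve(m)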
