using "max" in nonlinear constraints

question

#1

Hi there,
I’m trying to use the max function in one of my NLconstraint. Following @ odow’s advice, I use auto-differentiation to allow max into the constraint. The problem is that the length of the array that I want to take max over is not fixed and depend on problem instances, so ideally I want to be able to write

using JuMP, Ipopt
m = Model(solver=IpoptSolver(print_level=0))
f(array) = maximum(array)
JuMP.register(m, :f, 1, f, autodiff=true)
...
@variable(m, x[1:L])
@variable(m, y)
@NLconstraint(m, y == f(x))

However, due to the restriction to scalar operations in @NLconstraint, right now I’m forced to write, e.g. when L = 2:

using JuMP, Ipopt
m = Model(solver=IpoptSolver(print_level=0))
f(a,b) = max(a,b)
JuMP.register(m, :f, 2, f, autodiff=true)
...
@variable(m, x[1:L])
@variable(m, y)
@NLconstraint(m, y == f(x[1],x[2]))

and for different L, the code has to change. Any idea how I can write this properly?

Thanks!!!


#2

Note that maximum(array) is equivalent to reduce(max, array). Not sure whether that solves your problem, though.
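A quick plain-Julia check of that equivalence (no JuMP involved; the array contents are just an illustrative example):

```julia
# maximum over a collection agrees with a pairwise reduction by max
array = [3.0, 7.5, 1.2, 7.5]
println(maximum(array) == reduce(max, array))  # prints true
```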


#3

Thanks for the comment, but it doesn’t quite solve my problem. I think the main issue is that @NLconstraint doesn’t allow vector inputs, so I can’t use the maximum(array) or reduce(max, array) syntax; instead I have to write
max(array[1],array[2],...,array[L])
It’s nice that max can take any number of inputs, but is there any syntax that lets me pass a variable number of inputs to the max function, just like the ... above?


#4

@zhangxz1123, you’re looking for the splatting syntax. (Note: I’m not sure whether this works on JuMP v0.18, which it looks like you’re using. It does on the latest JuMP v0.19.)

using JuMP, Ipopt
m = Model(with_optimizer(Ipopt.Optimizer))
N = 4
@variable(m, x[i=1:N] >= i)
f(y...) = max(y...)
JuMP.register(m, :f, N, f, autodiff=true)
@NLobjective(m, Min, f(x...))
optimize!(m)

#5

@odow, this is neat, thank you! I’m trying to update my JuMP package, but I seem to have some problem with it. My current JuMP version is v0.18.5, but running Pkg.update() doesn’t update JuMP to v0.19. Below is the output I get:

  Updating registry at `~/.julia/registries/General`
  Updating git-repo `https://github.com/JuliaRegistries/General.git`
 Resolving package versions...
  Updating `~/.julia/environments/v1.0/Project.toml`
 [no changes]
  Updating `~/.julia/environments/v1.0/Manifest.toml`
 [no changes]

Any clue why?


#6

Looks like JuMP lower-bounds Julia at 1.1, so you need to update Julia to 1.1 first.


#7

The lower bound was raised from Julia v1.0 to Julia v1.1, see


#8

You might have a package that does not support JuMP v0.19 yet. You can try

] add JuMP@v0.19.0

and see what error you get.


#9

@Azamat, @blegat, thanks! Upgrading Julia to 1.1 indeed solves the problem.


#10

@odow, a follow-up question on the splatting syntax. It seems that splatting currently only works on the variable itself. In my problem, however, the variables are stored in a matrix, and the max is taken over only one column of it, e.g.

using JuMP, Ipopt
m = Model(with_optimizer(Ipopt.Optimizer))
M=5
N=2
@variable(m, x[1:M,1:N])
f(y...) = max(y...)
JuMP.register(m, :f, N, f, autodiff=true)
@NLobjective(m, Min, f(x[1,:]...))
optimize!(m)

But this is not allowed. I tried to make f act on x directly, but couldn’t get it to work. Is there an elegant solution?


#11

y = x[1,:]
@NLobjective(m, Min, f(y...))

#12

In fact, I need to compute f(x[i,:]...) for all i = 1:M, so does explicitly declaring a variable y still work? It seems I would need to declare M such variables in this case.
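For what it’s worth, here is a sketch of how the workaround from the previous post might extend to all rows without declaring M separate bindings by hand: the temporary `row` below is an ordinary Julia variable rebound on each loop iteration, not a new JuMP decision variable. (This is an untested assumption that JuMP v0.19 accepts splatting such a binding inside the macro within a loop; the epigraph variable `z` is my own addition to tie the M constraints together.)

```julia
using JuMP, Ipopt

m = Model(with_optimizer(Ipopt.Optimizer, print_level=0))
M, N = 5, 2
@variable(m, x[i=1:M, 1:N] >= i)   # lower bounds just to make the problem bounded
@variable(m, z)                    # epigraph variable: z >= max of each row
f(y...) = max(y...)
JuMP.register(m, :f, N, f, autodiff=true)
for i in 1:M
    row = x[i, :]                  # plain Julia binding, splattable in the macro
    @NLconstraint(m, z >= f(row...))
end
@NLobjective(m, Min, z)
optimize!(m)
```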