# Maximize the maximum of expressions

Hey everyone,

I would like to maximize the maximum of a vector that is a function of my optimization variables.

Basically, I want to do the following, but in a “JuMP way”.

```julia
@objective(model, Max, max(my_vec...))
```

The Tips and tricks page in the JuMP docs has a similar trick for doing this with the “min of min”. But, as it already states there, that trick cannot easily be used for a max of max.

Maybe someone knows a trick for implementing this?

Best
Jannes

there’s a similar trick for doing this with the “min of min”

It’s actually the “max of min” or “min of max”. Doing min-min or max-max is hard.
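For reference, the standard epigraph trick that works for minimizing a max (this is a sketch of the textbook reformulation, not code from the thread) introduces an auxiliary variable `t` that upper-bounds every element and is then minimized:

```julia
using JuMP
import HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, y[1:3])  # in a real model, y would be constrained elsewhere
@variable(model, t)
# t is an upper bound on every y[i], so minimizing t pins it to max(y)
@constraint(model, [i in 1:3], t >= y[i])
@objective(model, Min, t)
```

Flipping the objective to `Max` breaks this: the solver would push `t` to +∞ because nothing holds it down onto `max(y)`, which is why max-of-max needs one of the formulations below.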

The correct way to approach this really depends on what the rest of your model is, so can you share more details?

Here are some options:

```julia
using JuMP
import Gurobi, HiGHS, Ipopt

# Option 1: non-smooth non-convex nonlinear
model = Model(Ipopt.Optimizer)
@variable(model, y[1:3])
@objective(model, Max, max(y...))

# Option 2a: binary variables
model = Model(Gurobi.Optimizer)
@variable(model, y[1:3])
@variable(model, z[1:3], Bin)
@constraint(model, sum(z) == 1)
@objective(model, Max, sum(z[i] * y[i] for i in 1:3))

# Option 2b: big-M
model = Model(HiGHS.Optimizer)
@variable(model, y[1:3])
@variable(model, z[1:3], Bin)
@variable(model, y_z[1:3])
M = 1_000
@constraint(model, sum(z) == 1)
# z[i] = 1 => y_z[i] == y[i]
@constraint(model, [i in 1:3], y_z[i] <= y[i] + M * (1 - z[i]))
@constraint(model, [i in 1:3], y_z[i] >= y[i] - M * (1 - z[i]))
# z[i] = 0 => y_z[i] == 0
@constraint(model, [i in 1:3], y_z[i] <= M * z[i])
@constraint(model, [i in 1:3], y_z[i] >= -M * z[i])
@objective(model, Max, sum(y_z[i] for i in 1:3))

# Option 2c: indicators
model = Model(Gurobi.Optimizer)
@variable(model, y[1:3])
@variable(model, z[1:3], Bin)
@variable(model, obj)
@constraint(model, sum(z) == 1)
@constraint(model, [i in 1:3], z[i] --> {obj == y[i]})
@objective(model, Max, obj)
```

Yes, you’re right. The site shows the min of max…

Essentially, I have a function `g(...)` of some optimization variables and I want to maximize it across different dimensions and stages of the optimization variables.
So something like

```julia
@variable(model, X[1:3, 1:100])
my_vec = [g(x) for x in reshape(X, 3 * 100, 1)]
@objective(model, Max, max(my_vec...))
```

The function `g` is convex in most cases, but I would like a general way of implementing this.
Thanks for your help so far. I will take a closer look at the options you already sent as soon as I can.

Jannes

What I meant was: what does the rest of your model look like? Is it linear? Are there integers? What solver are you using?

But yeah. Try some options and happy to answer more questions.


What’s the difference between option 1 and 2c? Is 2c faster?

They’re different formulations.

Ipopt doesn’t support the `z --> {}` indicator constraints, and Gurobi doesn’t support the nonlinear `max`.

I had tried using `max(my_vec...)` with Ipopt before and remember running into problems with that. It just never finds a solution for my problem and reaches the maximum number of iterations, even after (appropriately) increasing the number of allowed iterations.

I quite like the binary variable/indicator constraint options. They are easy to understand and seem to work fine for what I’m doing. Also, wow, Gurobi is fast. I only installed it now to exactly recreate your code snippets and will use it a lot more now!

Thanks a lot!

Can you use a smooth relaxation, such as softmax?
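One way to read this suggestion (a sketch under my own assumptions, not from the thread): replace `max` with the smooth log-sum-exp function, which overestimates `max(y)` by at most `log(n) / β` for a tunable sharpness parameter `β`:

```julia
using JuMP
import Ipopt

model = Model(Ipopt.Optimizer)
β = 10.0  # larger β is closer to the true max, but harder numerically
@variable(model, y[1:3])  # in a real model, y would be constrained elsewhere
# Smooth surrogate: (1/β) * logsumexp(β * y) >= max(y)
@objective(model, Max, (1 / β) * log(sum(exp(β * y[i]) for i in 1:3)))
```

Note this only smooths the non-differentiability; the problem can still be non-convex, and large `β` may cause overflow in `exp`.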

remember running into problems with that. It just never finds a solution for my problem and reaches the maximum number of iterations

This can happen because the function is non-smooth and non-convex. Ipopt assumes the functions are smooth.


Just to complete the conversation: another way of solving this is to run a separate maximization problem for each element of `my_vec` and then compare the results.

```julia
@objective(model, Max, my_vec[1])
optimize!(model)
@objective(model, Max, my_vec[2])
optimize!(model)
# ... and so on; the overall optimum is the largest of the objective values.
```

This is not what I was looking for (so thanks again, @odow!), but it is of course also a possible way of solving the problem.
