# Maximize Min Objective

I am working on a convex optimization problem in Julia using JuMP and Pajarito. I have three objectives that I will solve for independently. For the third objective, how can I maximize the min?

@NLobjective(model, Max, sum(b_dict[r] * (p_dict[r] * l_dict[r] / l__dict[r]) for r in od))
@NLobjective(model, Max, prod((p_dict[r] * l_dict[r] / l__dict[r])^(b_dict[r]) for r in od))
@NLobjective(model, Max, minimum((p_dict[r] * l_dict[r] / l__dict[r]) for r in od))


Hi @sophiepavia,

A couple of questions:

• Do you really mean Pajarito? It doesn’t support @NLobjectives. Instead, it is intended for mixed-integer conic programs.
• Are you sure your problem is convex? It has some scary looking terms there.
• What are p_dict, l_dict, l__dict, etc.? Since this is your first post, take a read of Please read: make it easier to help you. It has some tips for writing minimal reproducible examples.
• The typical way to maximize a min is to introduce a new variable:

@objective(model, Max, min(a, b, c))

# becomes

@variable(model, t)
@objective(model, Max, t)
@constraint(model, t <= a)
@constraint(model, t <= b)
@constraint(model, t <= c)
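Applied to your third objective, the same pattern looks something like this sketch (placeholder data; `x[r]` stands in for the `p_dict[r] * l_dict[r] / l__dict[r]` term in your model):

```julia
using JuMP

od = [(2, 1), (3, 1), (1, 2)]  # placeholder OD pairs
model = Model()
@variable(model, x[od] >= 0)   # stands in for p_dict[r] * l_dict[r] / l__dict[r]
@variable(model, t)
@objective(model, Max, t)      # t is driven up to min(x[r] for r in od)
@constraint(model, [r in od], t <= x[r])
```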

2 Likes

Hmm. It’s still a little unclear. Can you create a minimal reproducible example that I can copy-and-paste? It’s okay to use the @NLobjective for now, but we need to know all of the variables and all of the data.

In particular, I don’t know how l and l_ correspond to l^* and \bar{l}.

Without other information though, something like x * y / z is not convex. What makes you think it is? Are there specific constraints in P?

To use Pajarito, you really need to formulate the problem as a mixed-integer conic program using convex cones.

If I understand your 0/0 constraint correctly, l__dict[r] is l_dict[r] if y[r] = 1, and M if y[r] = 0.

You can rewrite that in terms of the ratio: l_dict[r] / l__dict[r] is 1 if y[r] = 1, and l_dict[r] / M if y[r] = 0.

For your first and third objectives, that results in a mixed-integer quadratic program, which you can solve with something like Gurobi. (I might have made a mistake, but hopefully this points you in the right direction)

using JuMP, Gurobi
od = [(2, 1), (3, 1), (1, 2), (3, 2), (1, 3), (2, 3)]
M = 1e4
b = Dict(od .=> 1)
p = Dict(od .=> 1)
model = Model(Gurobi.Optimizer)
@variables(model, begin
    y[od] >= 0, Bin
    l[od] >= 0
    ratio[od] >= 0  # represents l[r] / l_[r]
end)
# l_ = {l if y = 1, M if y = 0}
# means
# l / l_ = {1 if y = 1, l / M if y = 0}
@constraint(model, [r in od], ratio[r] == y[r] + l[r] / M * (1 - y[r]))
@objective(model, Max, sum(b[r] * p[r] * ratio[r] for r in od))
optimize!(model)


But the problem is unbounded, so I guess there are other constraints you are missing.

The second objective is a little more troublesome, but I’d leave that until you have a formulation that works.

1 Like

Ooops. It was just a carry-over from me editing your previous model. You don’t need explicit 0-1 bounds on binary variables.

1 Like

So Gurobi only supports linear and quadratic terms, but you’re going to have something like x * y * z.

What does this objective represent? It’s a bit nasty, and it isn’t convex.

Ah. If all the b_dict are 1, then you have a few options, and you could think up a variety of reformulations.

With Gurobi, you could use the GeometricMeanCone: Standard form · MathOptInterface

@variable(model, t)
x = [p[r] * ratio[r] for r in od]
@constraint(model, vcat(t, x) in MOI.GeometricMeanCone(length(od) + 1))
@objective(model, Max, t)


but when I ran it, I encountered numerical issues, likely because of the big-M constant.

You could also take logs and solve the MINLP directly, using Juniper:

using JuMP, Juniper, Ipopt
model = Model(
    optimizer_with_attributes(
        Juniper.Optimizer,
        "nl_solver" => optimizer_with_attributes(Ipopt.Optimizer),
    ),
)
# ...
@NLobjective(model, Max, sum(log(p[r] * ratio[r]) for r in od))
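For reference, the log reformulation is valid because log is strictly increasing and each p[r] * ratio[r] is positive, so (with all b[r] = 1) the product objective and the sum-of-logs objective have the same maximizers:

```latex
\max \prod_{r \in od} p_r \, \mathrm{ratio}_r
\quad \Longleftrightarrow \quad
\max \sum_{r \in od} \log\left(p_r \, \mathrm{ratio}_r\right)
```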


You could also try other things, like a lazy constraint cutting plane method:

@variable(model, t[od] <= M)
@objective(model, Max, sum(t))
function my_callback(cb_data)
    for r in od
        t_r = callback_value(cb_data, t[r])
        ratio_r = callback_value(cb_data, ratio[r])
        x = p[r] * ratio_r
        f = log(x)
        if t_r > f + 1e-6
            con = @build_constraint(t[r] <= f + 1 / x * (ratio[r] - ratio_r))
            MOI.submit(model, MOI.LazyConstraint(cb_data), con)
            return
        end
    end
    return
end
MOI.set(model, MOI.LazyConstraintCallback(), my_callback)
optimize!(model)

2 Likes

Thank you! This looks great.

However, b_dict will not always be all 1s.

Those were dummy testing values.

I utilized Juniper but changed the line

@NLobjective(model, Max, sum(b[r] * log(p[r] * ratio[r]) for r in od))


to account for cases when b[r] != 1.

1 Like