 # Help with Performance in JuMP

Hi,

I am trying to solve a maximization problem using Gurobi/JuMP, and I would really appreciate help speeding things up. Some background on the problem: essentially, I am trying to maximize the expectation of a function by sampling it. I find a solution for every sampled outcome, and then I force all the solutions to be the same (a nonanticipativity constraint).

I am sampling K different outcomes of the random variable which in the example is price and solving

$$\max_{I(\omega^{k})\,\forall k} \frac{1}{K} \sum_{k=1}^{K} F(I(\omega^{k}),\omega^{k})$$

subject to

$$I(\omega^{1})=I(\omega^{2})=I(\omega^{3})=\cdots=I(\omega^{K})$$
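Since the nonanticipativity constraint forces every $I(\omega^{k})$ to coincide, the problem is equivalent to optimizing a single shared vector (written here as $\bar{I}$; the symbol is mine):

$$\max_{\bar{I}} \frac{1}{K} \sum_{k=1}^{K} F(\bar{I},\omega^{k})$$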

Here is an MWE of the problem (imports trimmed to the ones the example actually uses):

```julia
using Random
using JuMP, Gurobi, BenchmarkTools

###############################################################
# Generating Data
###############################################################

# Model parameters
J = 67   # number of nodes
K = 500

price = rand(J, K)
F = rand(J)

function problem_iter(price, σ, K, F)
    price = price .^ (1 - σ)
    model = Model(Gurobi.Optimizer)
    @variable(model, I[1:J, 1:K], Bin)
    @variable(model, Ipost[1:J, 1:K], Bin)

    set_silent(model)

    # Objective function
    @objective(model, Max, 1/K * sum(price[:, k]' * Ipost[:, k] - F' * I[:, k] for k = 1:K))

    # Constraints 1 and 2: nonanticipativity and ex-post choice
    for k = 1:K
        @constraint(model, I[:, k] .== sum(I, dims = 2) / K)
        @constraint(model, Ipost[:, k]' * Ipost[:, k] == 1)
        @constraint(model, sum(Ipost[:, k] .* I[:, k]) == 1)
    end

    optimize!(model)

    Iᵒ = trunc.(Int, value.(I))
    Ipost = trunc.(Int, value.(Ipost))

    return Iᵒ, Ipost
end

TEST = @time problem_iter(price, 2, K, F)
```


I am getting the following warning, which I am sure is hurting performance, but I am not sure how to handle it:

```
┌ Warning: The addition operator has been used on JuMP expressions a large number of times. This warning is safe to ignore but may indicate that model generation is slower than necessary. For performance reasons, you should not add expressions in a loop. Instead of x += y, use add_to_expression!(x,y) to modify x in place. If y is a single variable, you may also use add_to_expression!(x, coef, y) for x += coef*y.
```
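For reference, the pattern the warning is about can be sketched with a toy example (not your model; the function names are mine):

```julia
using JuMP

# Slow pattern: `+=` allocates a brand-new expression on every iteration.
function build_slow(x)
    ex = AffExpr(0.0)
    for xi in x
        ex += 2.0 * xi
    end
    return ex
end

# Fast pattern: `add_to_expression!` mutates the expression in place.
function build_fast(x)
    ex = AffExpr(0.0)
    for xi in x
        add_to_expression!(ex, 2.0, xi)  # in-place version of ex += 2.0 * xi
    end
    return ex
end

model = Model()
@variable(model, x[1:3])
ex1 = build_slow(x)   # triggers the warning for large enough loops
ex2 = build_fast(x)   # same expression, built without re-allocation
```

Note that `@objective` and `@constraint` already rewrite sums into mutating calls internally, which is why building the sum inside the macro is usually fine.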


I think the first thing you could do is put a print before each JuMP command so you can check when the message is triggered: whether it happens in the @objective, in a @constraint, or only at optimize! (which can, unfortunately, be the case if these calls are deferred until just before they are really needed). If that does not pinpoint the origin of the problem, I would comment things out one by one until the message disappears.

Hi @Henrique_Becker, apparently it is in the @objective! Any suggestions?

Nonetheless, it seems to me that the bulk of the time is being spent on the constraints.

Using the following makes it much faster:

```julia
# Model parameters
J = 67   # number of nodes
K = 10000

price = rand(J, K)
F = rand(J)

function problem_iter(price, σ, K, F)
    price = price .^ (1 - σ)
    model = Model(Gurobi.Optimizer)
    @variable(model, I[1:J, 1:K], Bin)
    @variable(model, Ipost[1:J, 1:K], Bin)
    @variable(model, Itrue[1:J])

    set_silent(model)

    println("Objective")
    @objective(model, Max, 1/K * sum(price[:, k]' * Ipost[:, k] - F' * I[:, k] for k = 1:K))

    println("Constraints")
    # Constraints 1 and 2: nonanticipativity and ex-post choice
    for k = 1:K
        @constraint(model, I[:, k] .== Itrue)
        @constraint(model, Ipost[:, k]' * Ipost[:, k] == 1)
        @constraint(model, sum(Ipost[:, k] .* I[:, k]) == 1)
    end

    println("Optimize")
    optimize!(model)

    Iᵒ = trunc.(Int, value.(I))
    Ipost = trunc.(Int, value.(Ipost))
    Iopt = trunc.(Int, value.(Itrue))

    return Iᵒ, Ipost, Iopt
end

TEST = @time problem_iter(price, 2, K, F)
```


Still, any suggestions on how to make this faster are more than welcome!

I was not able to spot the difference; what did you change?

I think one thing that may help performance (though I did not test it) is not referring to global variables. K is fine in your example because you refer to the K you receive as a parameter (even if its value is the same as the one bound to the global variable), but J is used inside the function without being received as a parameter, which can cause serious performance problems. Avoiding global variables is literally the first tip in the Performance Tips section of the manual. If you cannot pass them as parameters, you should at least make them const (as in const J = 67) and never change them.
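A minimal illustration of that tip in plain Julia (the names are mine):

```julia
J = 67          # non-const global: type-unstable when read inside a function
F = rand(J)

# Reads the globals J and F on every call; Julia cannot infer their types.
global_sum() = sum(F[j] for j in 1:J)

# Same computation, but everything arrives as an argument: fully inferable.
param_sum(F, J) = sum(F[j] for j in 1:J)

# If a value truly cannot be passed in, declaring it `const` restores
# type stability (but it must then never be reassigned):
const J_CONST = 67
```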

I created a new auxiliary variable named Itrue and used it to replace the sum of the I[:,k] that I was computing at each iteration.

Yes, will take a look at that. Thanks!

Does anyone have any further suggestions?

Why do you need both I and Itrue? I think you just need something like the following:

```julia
using JuMP

function problem_iter(price, σ, K, F)
    price = price .^ (1 - σ)
    model = Model()
    @variable(model, Ipost[1:J, 1:K], Bin)
    @variable(model, Itrue[1:J], Bin)
    set_silent(model)
    @objective(
        model,
        Max,
        1/K * sum(price[j, k] * Ipost[j, k] - F[j] * Itrue[j] for j = 1:J, k = 1:K),
    )
    for k = 1:K
        @constraint(model, sum(Ipost[j, k] * Ipost[j, k] for j = 1:J) == 1)
        @constraint(model, sum(Ipost[j, k] * Itrue[j] for j = 1:J) == 1)
    end
    return model
end

J = 67
K = 10000
price = rand(J, K)
F = rand(J)
TEST = @time problem_iter(price, 2, K, F)
```
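One more possible speedup (a sketch I have not tested against the full model): for a binary variable x, x^2 == x, so the quadratic "exactly one" constraint Ipost[:,k]' * Ipost[:,k] == 1 can be written linearly, which is cheaper to build:

```julia
using JuMP

model = Model()
@variable(model, Ipost[1:3], Bin)

# Linear "exactly one" constraint, equivalent to Ipost' * Ipost == 1
# because x^2 == x for binary x.
@constraint(model, exactly_one, sum(Ipost) == 1)
```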