Integer Optimization: performance help

Hello,

I am working with an integer optimization problem and I am looking for ways to speed the code up. The math problem is the following:

I am sampling K different outcomes of the random variable \omega^{k} (in the example below, msimul) and solving

\max_{I(\omega^{k})\forall k} \frac{1}{K} \sum_{k=1}^{K} F(I(\omega^{k}),\omega^{k})

subject to

I(\omega^{1})=I(\omega^{2})=I(\omega^{3})=\cdots=I(\omega^{K})

Find below an MWE:

using JuMP, BenchmarkTools, Gurobi, Random, Distributions

S = 100 

J = 77 

Ι = 150  #this is an Iota \Iota #

Random.seed!(12)

σ = 3

ϕ = 1

κ = 0.95 

mtemp = rand(Normal(0,1),Ι,J,S)

msimul = exp.(mtemp)

β = ones(J)

F=rand(Ι)*100000

function problem(msimul,σ,F,J,Ι,κ,ϕ,β)

        m = (msimul).^(1-σ)

        model = Model(Gurobi.Optimizer)

        K = copy(S)

        @variable(model, Ipost[1:Ι,1:J,1:K], Bin)

        @variable(model, Itrue[1:Ι], Bin)

    

        set_silent(model)

    

        # Objective Function

    

        println("Objective")

        @objective(model, 

                    Max, 

                    ϕ/K*sum(β[j]*(m[:,j,k]'*Ipost[:,j,k])   for j=1:J  for k=1:K) - F'*Itrue )

        println("Constraints")

        # Constraint 1 and 2 - non-anticipativity constraint and ex-post choice

        for k=1:K

            for j=1:J

            @constraint(model, Ipost[:,j,k]'*Ipost[:,j,k] == 1)

            @constraint(model, sum(Ipost[:,j,k]'*Itrue ) == 1)

            end

        end

    

        println("Optimize")

        optimize!(model)

    

        Ipost =  value.(Ipost)

        Ipost = trunc.(Int,Ipost)

        Iopt =  value.(Itrue)

    

        return trunc.(Int,Iopt), objective_value(model)

end 

solution = @btime problem(msimul,σ,F,J,Ι,κ,ϕ,β)

I am looking for ways to speed the code up

The JuMP construction or the problem solve? It looks like you’re solving the extensive formulation here, which grows large as you scale. You might want to look into something like progressive hedging to decompose this problem.

There are a few related packages:


Thanks so much! I am looking for ways to improve on both fronts. Basically, for each realization of \omega I am choosing the I(k) that maximizes the sum, and then I am making the choices the same across simulations k by using a non-anticipativity constraint.

Perhaps I am picking this maximum in an inefficient way?

Or perhaps there is a better way to enforce the constraints?

I will definitely take a look at the packages you are suggesting.

I don’t really understand what your nonanticipativity constraints are doing, or why they are quadratics.

You could consider doing this binary * binary to linear reformulation:
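The reformulation being referred to is presumably the standard linearization of a product of two binary variables: for binary x and y, the quadratic term z = x * y can be replaced by three linear constraints. A minimal sketch in plain JuMP (variable names are illustrative, no solver attached):

```julia
using JuMP

model = Model()
@variable(model, x, Bin)
@variable(model, y, Bin)
@variable(model, z, Bin)  # z will represent the product x * y

# For binary x and y, these three linear constraints force z == x * y:
@constraint(model, z <= x)          # z can be 1 only if x is 1
@constraint(model, z <= y)          # z can be 1 only if y is 1
@constraint(model, z >= x + y - 1)  # z must be 1 if both x and y are 1
```

This turns each quadratic term in the constraints into linear ones, at the cost of one extra binary variable per product.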

Thanks,

Basically the non-anticipativity constraint is telling Julia to choose the same vector in every state of the world k. This is the second constraint.

The first constraint is saying, for each state of the world k I can choose one thing in each j.

I thought that doing these as dot products could save some time, hence the quadratics.

I thought that doing these as dot products could save some time

You need to reconsider your formulation. I don’t fully understand, but perhaps something like:

# For each j, k, choose 1 i
@constraint(model, [k=1:K, j=1:J], sum(Ipost[:, j, k]) == 1)
# Nonanticipativity
@constraint(model, [k=1:K, j=1:J], Ipost[:, j, k] .== Itrue)

But this still isn’t right, because why do you have Ipost if it just has to equal Itrue? Why not just put Itrue everywhere?

Rather than try to improve your code, you need to step back and formulate the problem on paper first.

@odow , thanks so much. Your suggestion greatly improved performance (\approx 30x) with a small correction in the second constraint: I am not asking for the whole vector Ipost[:,j,k] to be equal to Itrue; the constraint is that the only entry of Ipost[:,j,k] that is equal to 1 must also be 1 in Itrue.

# For each j, k, choose 1 i

        @constraint(model, [k=1:K, j=1:J], sum(Ipost[:, j, k]) == 1)

        # Nonanticipativity

        @constraint(model, [k=1:K, j=1:J], Itrue-Ipost[:, j, k] .>= 0 )

Now I am forcing the entry equal to one in Ipost[:,j,k] to be among the entries equal to one in Itrue. However, if you can think of a better way to write the constraint, I would appreciate it.

The reason I cannot plug in Itrue instead of Ipost is that I am allowing Itrue to have more than one entry equal to 1. However, for each j only one entry of Itrue may enter the sum in the objective function.

One way to think about the problem is that you are a firm that wants to serve many markets, but to do that, the firm needs to set up distribution centers. The firm will serve each market only from the distribution center that maximizes the objective function. However, which distribution center is chosen depends on the realization of \omega^{k}.


On a new iteration of this problem, I am using the following code. If there are any obvious performance problems or opportunities to improve, I would very much appreciate it if you could point them out!

Is there a way to reduce the number of allocations?

using JuMP, BenchmarkTools, Gurobi, Random, Distributions

S = 200 
J = 50 
Ι = 300  #this is an Iota \Iota #
Random.seed!(12)
σ = 3
ϕ = 1
κ = 0.95 
mtemp = rand(Normal(0,1),Ι,J,S)
msimul = exp.(mtemp)
β = ones(J)
F=rand(Ι).*1000


function problem(msimul,σ,F,J,Ι,κ,ϕ,β)
	m = (msimul).^(1-σ)
	model = Model(Gurobi.Optimizer;bridge_constraints = false)
	K = copy(S)
	@variable(model, Ipost[1:Ι,1:J,1:K], Bin)
	@variable(model, Itrue[1:Ι], Bin)

	set_silent(model)

	# Objective Function

	println("Objective")
	@objective(model, 
				Max, 
				ϕ/K*sum(β[j]*(m[:,j,k]'*Ipost[:,j,k])   for j=1:J  for k=1:K) - F'*Itrue )
	println("Constraints")
	# Constraint 1 and 2 - non-anticipativity constraint and ex-post choice
	# For each j, k, choose 1 i
	@constraint(model, [k=1:K, j=1:J], sum(Ipost[:, j, k]) == 1)
	# Nonanticipativity
	#@constraint(model, [k=1:K, j=1:J, i=1:Ι], Itrue[i]-Ipost[i, j, k] >= 0 )
	@constraint(model, [k=1:K, j=1:J], Itrue-Ipost[:, j, k] .>= 0 )

	println("Optimize")
	optimize!(model)

	Ipost =  value.(Ipost)
	Ipost = trunc.(Int,Ipost)
	Iopt =  value.(Itrue)

	return trunc.(Int,Iopt), objective_value(model)
end	

solution = @time problem(msimul,σ,F,J,Ι,κ,ϕ,β)

 70.410313 seconds (160.77 M allocations: 10.773 GiB, 6.50% gc time, 0.41% compilation time)

somebody?

A couple of things:

  • What happens if you time the function twice? The first time, Julia has to compile everything.
  • Use model = direct_model(Gurobi.Optimizer())
  • Things like m[:,j,k] create a copy in Julia. You may be better off writing explicit summations, e.g.,
    @constraint(model, [k=1:K, j=1:J], sum(Ipost[i, j, k] for i in 1:Ι) == 1)
    
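Putting the second and third suggestions together, the model construction might look something like the sketch below (assuming `m`, `β`, `ϕ`, `F`, `Ι`, `J`, and `K` as defined in the MWE above):

```julia
using JuMP, Gurobi

# Sketch only: m, β, ϕ, F, Ι, J, K are assumed to come from the MWE above.
model = direct_model(Gurobi.Optimizer())  # skip JuMP's caching layer

@variable(model, Ipost[1:Ι, 1:J, 1:K], Bin)
@variable(model, Itrue[1:Ι], Bin)

# Explicit summations avoid the temporary arrays created by slices like m[:, j, k]
@objective(model, Max,
    ϕ / K * sum(β[j] * m[i, j, k] * Ipost[i, j, k] for i in 1:Ι, j in 1:J, k in 1:K) -
    sum(F[i] * Itrue[i] for i in 1:Ι))

# For each j, k, choose 1 i
@constraint(model, [k in 1:K, j in 1:J], sum(Ipost[i, j, k] for i in 1:Ι) == 1)
```

Using direct_model means JuMP passes the problem straight to Gurobi without keeping its own copy, which reduces memory use, but it also means some solver-independent features (like automatic bridging) are unavailable.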

somebody?

Please remember that people on this forum are volunteering their time to help you, and response times can vary depending on time-zones, weekends, etc. (In general, I try to read or reply to every post in the Optimization section. Your original post was at 03:00 New Zealand time…)


Hi! Thanks so much for the reply!

Are the inequality constraints costlier to solve than equality constraints? I suppose they are.

I think there is margin for improvement in the second constraint. The constraint I am trying to impose is that if Ipost is equal to 1 in entry i for a given j and k, then Itrue must be equal to 1 in that entry.

Maybe there is a better way to do this.

Are the inequality constraints costlier to solve than equality constraints?

No.

I think there is margin for improvement in the second constraint.

@constraint(model, [i=1:Ι, k=1:K, j=1:J], Itrue[i] >= Ipost[i, j, k])

Other things you can try:

  • Provide a better starting point with set_start_value. If you know a good integer solution, that can help Gurobi.
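A sketch of how set_start_value could be used here, assuming the model and Itrue from the MWE and a candidate 0/1 vector Itrue_guess (e.g., from a previous solve or a heuristic; the name is illustrative):

```julia
# Sketch: warm-start the Itrue binaries from a known candidate solution.
# Itrue_guess is an assumed 0/1 vector of length Ι; Gurobi will try to
# complete the partial start into a feasible MIP start.
for i in 1:Ι
    set_start_value(Itrue[i], Itrue_guess[i])
end
```

Even a partial start on Itrue alone can help, since Gurobi can attempt to complete the remaining Ipost values itself.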