Hi everybody! I am trying to improve the performance of a conventional linear programming problem that I am solving by calling Gurobi through JuMP. An MWE is the following:
using JuMP, Ipopt, Optim, LinearAlgebra, Random, Distributions, Gurobi, BenchmarkTools
Ι = 500
J = 50
const GUROBI_ENV = Gurobi.Env()
function solve_model()
    return Model(() -> Gurobi.Optimizer(GUROBI_ENV); add_bridges = false)
end
# Create a vector of productivities
z = exp.(rand(Normal(0, 1), Ι))
# Create a matrix of distances
function distmat(J, Ι)
    Distances = zeros(J, Ι)
    Random.seed!(7777)
    coordinates_market = 100 * rand(J, 2)
    coordinates_plant = 100 * rand(Ι, 2)
    for j = 1:J, l = 1:Ι
        Distances[j, l] = sqrt((coordinates_market[j, 1] - coordinates_plant[l, 1])^2 + (coordinates_market[j, 2] - coordinates_plant[l, 2])^2)
    end
    return 1 .+ Distances ./ 100
end
τ = distmat(J,Ι)
function solving_min_constraints_primal(Cap, Q, τ, z)
    (J, Ι) = size(τ)
    primal_capacity = solve_model()
    set_silent(primal_capacity)
    #set_optimizer_attribute(primal_capacity, "nlp_scaling_method", "none")
    @variable(primal_capacity, x[1:J, 1:Ι] >= 0)
    @objective(primal_capacity, Min, sum((τ[:, i]' * x[:, i]) / z[i] for i = 1:Ι))
    for j = 1:J
        @constraint(primal_capacity, sum(x[j, :]) == Q[j])
    end
    for i = 1:Ι
        @constraint(primal_capacity, τ[:, i]' * x[:, i] <= Cap[i])
    end
    optimize!(primal_capacity)
    return objective_value(primal_capacity), value.(x)
end
res_primal = @btime solving_min_constraints_primal(ones(Ι) .* 1.1, ones(J), τ, z)
92.057 ms (701849 allocations: 56.02 MiB)
Is there any performance recommendation to improve the speed of this code? Is there a way to reduce the number of allocations as much as possible? Are there any gains from vectorizing things and writing the model in terms of matrices? My own attempts suggest there are not, although I have not yet explored whether exploiting the sparsity of the constraints helps. A sketch of one variant I have been considering is below.
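For what it is worth, here is a minimal sketch of that variant (the name solving_min_constraints_primal_expr is just something I made up; it reuses solve_model, τ and z from above). The idea is to build each τ[:,i]'*x[:,i] only once as a JuMP expression and to replace the column slices with generator sums, in the hope of cutting allocations. I have not verified that it is actually faster.

function solving_min_constraints_primal_expr(Cap, Q, τ, z)
    (J, Ι) = size(τ)
    primal_capacity = solve_model()
    set_silent(primal_capacity)
    @variable(primal_capacity, x[1:J, 1:Ι] >= 0)
    # Build τ[:,i]'*x[:,i] once per i; the generator avoids allocating the column slices.
    @expression(primal_capacity, shipped[i = 1:Ι], sum(τ[j, i] * x[j, i] for j = 1:J))
    @objective(primal_capacity, Min, sum(shipped[i] / z[i] for i = 1:Ι))
    # Row-sum constraint: total shipped in row j equals Q[j], as in the original loop.
    @constraint(primal_capacity, [j = 1:J], sum(x[j, i] for i = 1:Ι) == Q[j])
    # Capacity constraint for each i, reusing the stored expression.
    @constraint(primal_capacity, [i = 1:Ι], shipped[i] <= Cap[i])
    optimize!(primal_capacity)
    return objective_value(primal_capacity), value.(x)
end

I would benchmark it the same way, e.g. @btime solving_min_constraints_primal_expr(ones(Ι) .* 1.1, ones(J), τ, z), but I do not yet have numbers I trust for it.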
Any help would be very much appreciated!