Hi everyone! I am seeking help to speed up my code using JuMP and Ipopt. My current approach is too slow for my needs.
My original code looks something like this:
```julia
for i = 1:N
    optimizer = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
    model = Model(optimizer)
    model = build_optimization_problem(case, data, model)
    optimize!(model)
    result = value.(all_variables(model))  # extract the solution values
    data = data + result[:]
end
```
So at every iteration, I am building the optimization model, solving it, and updating the data with the results. I'm building the optimization problem in a separate file because it's a big problem and it remains unchanged throughout the loop. The only thing that changes is the data seen by the objective function.
In order to speed up the loop, I decided to build the optimization problem only once at the beginning and to update only the objective at every iteration. Something like this:
```julia
optimizer = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
model = Model(optimizer)
model = build_optimization_problem(case)
for i = 1:N
    @NLobjective(model, Min, sum(parameters[j] - data[j] for j in eachindex(data)))
    optimize!(model)
end
```
However, this approach has shown almost no decrease in runtime: allocations drop by about 10%, but time by only about 3%. I'm hoping to get some suggestions for how to further speed up the loop.
The problem is that JuMP currently throws away the inner Ipopt model on every solve and rebuilds the derivative information. So a slight reduction in allocations but not in time is what I would expect.
I see! Thanks for the information. It’s good to know that there are some behind-the-scenes plans to find a solution.
In the meantime, are there any effective techniques to speed up the solver? Something like warm starts, proper initialization, parallelizing the code maybe?
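To make the warm-start question concrete, here is a rough sketch of the kind of pattern I have in mind: keep one model, feed each solve's solution back in as start values, and switch on Ipopt's warm-start option. The option name is from Ipopt's documentation; everything else reuses the placeholder names from my code above, so treat it as a guess rather than working code:

```julia
using JuMP, Ipopt

# Keep a single model and warm start each solve from the previous solution.
# `build_optimization_problem`, `case`, `data`, and `N` are the placeholders
# from the code above.
optimizer = optimizer_with_attributes(
    Ipopt.Optimizer,
    "print_level" => 0,
    "warm_start_init_point" => "yes",  # ask Ipopt to use the supplied start values
)
model = Model(optimizer)
model = build_optimization_problem(case, data, model)

for i = 1:N
    optimize!(model)
    # Use this solve's solution as the starting point for the next solve.
    x = all_variables(model)
    set_start_value.(x, value.(x))
    # ... update `data` / the objective here ...
end
```

Would something along these lines be expected to help, or is the model rebuild still the dominant cost?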
I am currently trying to solve a relaxed version of the problem before the actual problem, so in each iteration of the loop I'm solving two problems: the first with the hard constraints slightly relaxed, and the second with no relaxation but with the variables initialized from the first solution.
This approach yields more feasible solutions but it also slows down the code significantly.
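In case concrete code helps, this is roughly what one iteration of that two-stage scheme looks like (using the same `optimizer` object as above); `build_relaxed_problem` is a placeholder for the file that builds the relaxed version, and the name-based matching is just for illustration:

```julia
# One iteration: solve the relaxed problem, then warm start the full problem
# from its solution. `build_relaxed_problem` / `build_optimization_problem`
# are placeholder names.
relaxed = build_relaxed_problem(case, data, Model(optimizer))
optimize!(relaxed)

full = build_optimization_problem(case, data, Model(optimizer))
if termination_status(relaxed) in (MOI.OPTIMAL, MOI.LOCALLY_SOLVED)
    # Copy the relaxed solution over as start values, matching variables by name.
    for v in all_variables(full)
        rv = variable_by_name(relaxed, name(v))
        rv === nothing || set_start_value(v, value(rv))
    end
end
optimize!(full)
```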
I just saw @ccofrin was looking into this issue. Did he reach any conclusions?