JuMP and Ipopt optimization: How to reduce run time for iterative optimization?

Hi everyone! I am seeking help to speed up my code using JuMP and Ipopt. My current approach is too slow for my needs. :grinning:

My original code looks something like this:
```julia
using JuMP, Ipopt

for i = 1:N
    optimizer = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
    model = Model(optimizer)
    model = build_optimization_problem(case, data, model)
    optimize!(model)  # optimize! returns nothing; query the solution afterwards
    result = value.(all_variables(model))
    data = data + result
end
```
So at every iteration I am building the optimization model, solving it, and updating the data with the results. I build the optimization problem in a separate file because it is a big problem, and its structure remains unchanged during the whole loop; the only thing that changes is the data seen by the objective function.
To speed up the loop, I decided to build the optimization problem only once at the beginning and only update the objective at every iteration. Something like this:

```julia
optimizer = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
model = Model(optimizer)
model = build_optimization_problem(case)
for i = 1:N
    # Only the objective changes between iterations.
    @NLobjective(model, Min, sum(parameters[j] - data[j] for j in eachindex(data)))
    optimize!(model)
end
```

However, this approach has shown almost no improvement: allocations dropped by about 10%, but run time by only about 3%. I'm hoping to get some suggestions for how to further speed up the loop.

Thank you for your help in advance!

Hi there, welcome to the forum.

You’ve stumbled upon a long-standing issue with JuMP’s nonlinear interface: Fast resolves for NLP · Issue #1185 · jump-dev/JuMP.jl · GitHub.

The problem is that JuMP currently throws away the inner Ipopt model on every solve and rebuilds the derivative information. So a slight reduction in allocations but little change in time is what I would expect.

We have some plans in progress to fix this (by rewriting how we build nonlinear programs, https://github.com/jump-dev/MathOptInterface.jl/pull/2059), but it's still a while away from being released.

Hi @odow ,

Thank you for your swift reply.

I see! Thanks for the information. It’s good to know that there are some behind-the-scenes plans to find a solution.

In the meantime, are there any effective techniques to speed up the solver? Something like warm starts, proper initialization, parallelizing the code maybe?

I am currently trying to solve a relaxed version of the problem before the actual one. So in each iteration of the loop, I'm solving two problems: the first with the hard constraints slightly relaxed, and the second with no relaxation but with the variables initialized from the first solve. (A sketch of this pattern is shown below.)

This approach yields feasible solutions more often, but it also slows down the code significantly.
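For reference, here is a minimal sketch of that two-phase pattern. The builder `build_relaxed_problem` and the variable container `model[:x]` are illustrative names, not from my actual code:

```julia
using JuMP, Ipopt

ipopt = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)

# Phase 1: solve with the hard constraints slightly relaxed.
relaxed = build_relaxed_problem(case)  # hypothetical builder
set_optimizer(relaxed, ipopt)
optimize!(relaxed)
x_start = value.(relaxed[:x])          # :x is an illustrative variable name

# Phase 2: solve the exact problem, warm started from phase 1.
exact = build_optimization_problem(case)
set_optimizer(exact, ipopt)
set_start_value.(exact[:x], x_start)
optimize!(exact)
```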

I just saw @ccofrin was looking into this issue. Did he reach any conclusions?

Cheers!

> are there any effective techniques to speed up the solver?

> Something like warm starts, proper initialization

It sounds like you're already using set_start_value for the variables?
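For example (a sketch; `model[:x]` is just an illustrative variable container, and `x_previous` holds values from a previous solve):

```julia
# Warm start one variable container from known values:
set_start_value.(model[:x], x_previous)

# Or warm start every variable from the most recent solve:
x = all_variables(model)
set_start_value.(x, value.(x))
```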

> parallelizing the code maybe?

Your code makes it seem like the iterations are serial? If `data = data + result` depends on the previous iteration, then you cannot run the loop in parallel.

If that’s not the case, take a read of Parallelism · JuMP.
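If the solves were independent, something like the multithreaded pattern from those docs would apply. A sketch, where the per-iteration `scenario_data` is hypothetical:

```julia
using JuMP, Ipopt

results = Vector{Float64}(undef, N)
Threads.@threads for i in 1:N
    # Build one model per task; do not share a JuMP model across threads.
    # Note: Ipopt's default MUMPS linear solver may not be thread-safe.
    model = Model(optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0))
    build_optimization_problem(case, scenario_data[i], model)
    optimize!(model)
    results[i] = objective_value(model)
end
```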

You should also use a better linear solver than MUMPS: GitHub - jump-dev/Ipopt.jl: Julia interface to the Ipopt nonlinear solver. For big problems that can make a large difference.
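For example, the HSL solvers (these require a license; the pattern below follows the Ipopt.jl README):

```julia
using JuMP, Ipopt
import HSL_jll  # requires an HSL license and the HSL_jll artifact

model = Model(Ipopt.Optimizer)
set_attribute(model, "hsl_jll", HSL_jll)
set_attribute(model, "linear_solver", "ma86")
```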