Running JuMP using Ipopt in Parallel


I am trying to run several different JuMP models at once in parallel. I have a MWE and was curious if anyone has any suggestions or thoughts on what I am doing. A serial approach (where only a single type of model is run) would look like:

# serial approach
using JuMP, Ipopt
m = Model(solver=IpoptSolver())
@variable(m, x, start = 0.0)
@variable(m, y, start = 0.0)
c = @constraint(m, x + y == 10)
@NLobjective(m, Min, (1-x)^2 + 90(y-x^2)^2)
s = solve(m)

runNum = 10
tp = zeros(runNum)
for R in 1:runNum
    # update parameters
    tic()
    r = rand(1)[1]
    m = Model(solver=IpoptSolver())
    @variable(m, x, start = 0.0)
    @variable(m, y, start = 0.0)
    c = @constraint(m, x + y == 10)
    @NLobjective(m, Min, (1-x)^2 + 90(y-x^2)^2)
    s = solve(m)
    tp[R] = toc()
end

Then to run different models in parallel, I defined par.jl as:

using JuMP, Ipopt

# container for a model, its solve status, and a constraint reference
type Info
    mdl
    status
    con
end

# function to initialize a model; the objective depends on obj
function init(obj)
    print("In init() with proc: ", myid(), "\n")

    m = Model(solver=IpoptSolver())
    @variable(m, x, start = 0.0)
    @variable(m, y, start = 0.0)
    c = @constraint(m, x + y == 10)
    if obj == :A
        @NLobjective(m, Min, (1-x)^2 + 90(y-x^2)^2)
    else  # :B, :C, :D get slightly different objective weights
        @NLobjective(m, Min, (1-x)^2 + 100(y-x^2)^2)
    end
    s = solve(m)
    I = Info(m, s, c)
    return I
end

# function to rerun the optimization with a new parameter
function update(I, r)
    print("\n In update() with proc: ", myid(), "\n")
    status = solve(I.mdl)
    return status
end

Then to run the parallel approach I first fire up julia with julia -p 4 and run the script:

@everywhere include("par.jl")

m1 = @spawn init(:A)
m2 = @spawn init(:B)
m3 = @spawn init(:C)
m4 = @spawn init(:D)

runNum = 10
tp = zeros(runNum)
for R in 1:runNum
  # update parameters
  tic()
  r = rand(1)[1]

  u1 = @spawn update(fetch(m1), r)  # doing fetch inside the function call is OK
  u2 = @spawn update(fetch(m2), r)
  u3 = @spawn update(fetch(m3), r)
  u4 = @spawn update(fetch(m4), r)

  r1 = fetch(u1)
  r2 = fetch(u2)
  r3 = fetch(u3)
  r4 = fetch(u4)

  tp[R] = toc()
end

Comparing the results for all runs, it can be seen that the first run takes very long, even though I thought that the init() function had already performed a solve(). Any ideas why? Am I passing data properly? Should I be doing something differently? If not, I do not mind this; I just cannot explain it.

Then, cutting off the first run so that the final ones can be seen

As I would expect, even though we are running multiple jobs, they finish faster overall since they run in parallel.

So, what I am wondering from this:

  1. Is there anything that I can do to improve this?
  2. Should I set this up as a cluster, even though I do not need a complicated scheme since my nodes do not share information?
  3. Assuming that I do not need to formally set up a cluster, in terms of managing these nodes I would just be interested in whether an :Optimal solution was obtained and, if more than one node has an :Optimal solution, which has the best :cost. Then I would use data from that node for something else.



I’m not sure why this is so slow, but I want to suggest a different approach here which is more scalable in my opinion and doesn’t use these low-level functions.

You might want to try pmap, which is explained here:

You can copy the code there and adapt it for your use case.
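A minimal sketch of that pmap pattern applied to the toy model above (assuming the same JuMP 0.18-era syntax as the rest of this thread; the weights 90–120 and the function name build_and_solve are just illustrative):

```julia
# load the packages and define the solve function on every worker
@everywhere using JuMP, Ipopt

@everywhere function build_and_solve(w)
    m = Model(solver=IpoptSolver(print_level=0))
    @variable(m, x, start = 0.0)
    @variable(m, y, start = 0.0)
    @constraint(m, x + y == 10)
    @NLobjective(m, Min, (1-x)^2 + w*(y-x^2)^2)
    status = solve(m)
    return (status, getobjectivevalue(m))
end

# pmap hands one weight to each available worker and collects the results
results = pmap(build_and_solve, [90, 100, 110, 120])
```

pmap also takes care of load balancing for you: a worker that finishes early just grabs the next item.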

One problem in your current code is that all the runs must finish before the next iteration can start. Since you are basically solving the same problem over and over again, it shouldn’t matter in your test case, but this might become interesting in the future.

Hope this helps, even though I can’t tell you why yours seems to be slow.


@Wikunia thank you for your help with this!

I should have been a little more clear in what I am doing, I am setting this up to do model predictive control (MPC), so the R = 1:runNum is there to emulate solving the problem over a receding time horizon.

So, for instance (assuming that someone is not familiar with MPC): I solve a problem that determines the control for a particular amount of time, t_prediction. Then I send part of that solution (0:0.001:t_execution, with t_execution < t_prediction) to the plant (vehicle/robot) and start solving again. Basically, the issue that I have is that if my model cannot solve in less than t_execution, then the plant has no control solution to execute. The other thing is that I can have different models (m1, m2, m3, m4) trying to determine the same control signal; some of them will be faster than others, and some may not even solve in time (in which case maybe I would have to stop it and get it ready to try and solve the next problem). So I am building in redundancy (in case one model does not solve, or does not solve in time), and if I have more than one solution to choose from I can determine which one is best based off of some metric.

So, with this, I do not want to have @sync because these things may not solve in time (I would probably still have a while loop though, something like while toc() < t_execution). With this, I have another question:
4) If the solver does not return in time, how should I stop the process so that it is ready for the next problem?
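One way to sketch the "take whatever finished in time" part (assuming futures come from @spawn calls like the ones above; collect_within and the :TimedOut tag are hypothetical names, and isready() on a Future is a non-blocking check):

```julia
# gather results from whichever futures finish inside the time budget;
# anything still running when the deadline passes is tagged :TimedOut
function collect_within(futures, t_execution)
    deadline = time() + t_execution
    done = falses(length(futures))
    while time() < deadline && !all(done)
        for (i, f) in enumerate(futures)
            done[i] = isready(f)   # non-blocking: does not wait for the solve
        end
        sleep(0.001)               # avoid a busy spin
    end
    return [done[i] ? fetch(futures[i]) : :TimedOut for i in eachindex(futures)]
end

# usage with the models above:
# futures = [@spawn update(fetch(m1), r), @spawn update(fetch(m2), r)]
# statuses = collect_within(futures, t_execution)
```

Note that this only skips slow solves on the master side; actually stopping the worker-side solve (question 4) is a separate problem.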


In your case I would consider using the time limit option of Ipopt then:
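For reference, a sketch with the toy model from above (max_cpu_time is a standard Ipopt option; when the cap is hit, JuMP typically reports :UserLimit instead of :Optimal):

```julia
using JuMP, Ipopt

# cap each solve at 0.5 s of CPU time so the MPC loop can move on
m = Model(solver=IpoptSolver(max_cpu_time=0.5, print_level=0))
@variable(m, x, start = 0.0)
@variable(m, y, start = 0.0)
@constraint(m, x + y == 10)
@NLobjective(m, Min, (1-x)^2 + 90(y-x^2)^2)
status = solve(m)   # this tiny problem finishes well under the cap
```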


Yes, thank you for the share. I use that option already, but there is no guarantee that the solution is any good when the limit is hit.

Also, I would like to be safe and have a time limit on the solver, and then stop the process if my master script decides it is taking too long.


Hmm okay yeah that might be a bit more complicated. I don’t know how to do that atm. Maybe you can ask a new question for this :wink: Probably someone can help you out.


@Wikunia do you know what may be happening here?

Again, if I have four models (different from the model given above) running in parallel with my current code (logic above), and I compare the sum of the solve times counted by the workers to the total elapsed time counted in the for loop, then I get:

Then neglecting the first one

It can be seen that these solvers only seem to be noticeably running in parallel on run 3 and run 4; other than that, it looks like things are actually running in series! I would expect the sum of all of the workers' times to be much larger than the time the for loop counts (Total Solve Times). Any thoughts? Thanks again!

@miles.lubin is it possible to run Ipopt in parallel like this through JuMP?


Can you try some problems which are a bit bigger, or are the things you actually want to compute of this size? I don’t expect any communication overhead, as you are essentially not sending anything to the other processors, but just to be sure. It is definitely possible to run Ipopt in parallel; I’m using it in a solver:
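One quick way to scale the toy problem up for such a test (a sketch; N and the chained-Rosenbrock form are arbitrary choices, picked only to give Ipopt more work):

```julia
using JuMP, Ipopt

# an N-dimensional chained-Rosenbrock objective with one linear constraint
N = 2000
m = Model(solver=IpoptSolver(print_level=0))
@variable(m, x[1:N], start = 0.0)
@constraint(m, sum(x) == 10)
@NLobjective(m, Min, sum((1-x[i])^2 + 90(x[i+1]-x[i]^2)^2 for i in 1:N-1))
status = solve(m)
```

With a problem of this size the per-solve time dominates the spawn/fetch overhead, so a parallel speedup becomes much easier to see.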


@Wikunia that is good to hear about Ipopt, I will increase the complexity

Cool looking solver by the way!


One thing that I noticed is that if I put tic() and toc() around each fetch() command, I get:

elapsed time: 0.361224525 seconds
elapsed time: 0.000291062 seconds
elapsed time: 0.000267836 seconds
elapsed time: 0.00024762 seconds

Here fetch() blocks until the result is ready, which explains why the first one takes the longest; by that time the others are done as well.
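That blocking behavior is easy to reproduce in isolation, independent of JuMP (a toy sketch):

```julia
# both tasks run concurrently; the first fetch() waits for its result,
# and by then the second result already exists, so its fetch() is instant
f1 = @spawn (sleep(0.3); 1)
f2 = @spawn (sleep(0.3); 2)
t1 = @elapsed a = fetch(f1)   # ~0.3 s: blocks until f1 finishes
t2 = @elapsed b = fetch(f2)   # near-zero: f2 finished during the first wait
```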


Increasing the size of the problem does lead to a more noticeable difference.

@Wikunia good suggestion!

But, still not as much as I might expect in some cases…

Additionally, it is strange that the time for the first iteration is now much smaller


I don’t know how you run your tests. If you use julia -p 4 and then always include the test file, you don’t reload JuMP and Ipopt every time -> the first iteration is faster.


@Wikunia thank you for checking that for me. I am running my tests exactly how I described above; the scripts and the procedure are the same. It is possible that we are running the test differently, but I am not sure how. Are you running it multiple times? Each time I want to run it, I close julia and reopen it. Also, I am running many programs on my machine, so the comparison is not very fair.


The runs after the first one took less than 0.01 s on my machine; therefore I suggested the longer runs. Yours still seem to be not that long, but there is already a speedup of around two.

Hope you have an actual quad core and not, as I have, hyperthreading :joy: