Hi all, I am trying to implement an algorithm in JuMP. In each iteration I need to solve several models in parallel, update some data, and then re-solve these models in the next iteration with different objectives (the constraints and variables stay the same). What I am thinking is to set up the models without objectives beforehand, and then in each iteration use pmap to set new objectives and re-solve. I don’t want to rebuild the models from scratch in each iteration. Is this doable and supported in JuMP? Thanks.

Is this doable and supported in JuMP?

There is nothing “in” JuMP that supports this. However, it does support changing the objective of a model and efficiently re-solving the problem.
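For example, something like this minimal single-process sketch (using the same JuMP 0.18 / Ipopt syntax that appears later in the thread, with made-up constraint and objective data) builds the model once and only swaps the objective between solves:

```
# Build the model once, then only change the objective between solves
# (illustrative data only).
using JuMP, Ipopt

m = Model(solver = IpoptSolver(print_level = 0))
@variable(m, x[1:2], lowerbound = -1, upperbound = 1)
@constraint(m, x[1] + x[2] == 0)

for k in 1:3
    c = rand(2)                    # new objective data for this iteration
    @objective(m, Min, dot(c, x))  # replaces the previous objective
    solve(m)
    println(getobjectivevalue(m))
end
```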

You can then use Julia’s parallel features to parallelize the solves. It is worth having a read of some related posts, as there are quite a few traps you can run into:

https://discourse.julialang.org/search?q=jump%20parallel

Have a read, try some things out, and then come back if you run into problems.

It sounds pretty similar to what I used in Juniper.jl

So you basically load the base model onto all processors and then use `remotecall_fetch` to call the processors with information like the new objective and get the result back, all inside a `pmap`-like function (an `@sync`/`@async` loop).

Don’t hesitate to comment here if you need some more information.

More about the `pmap` function:

One constraint to be aware of is that JuMP `Model` objects aren’t designed to support serialization. `pmap`ping over a list of models to solve generally won’t work, because that involves sending the models across processes. You should instead build a model locally once per process.
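As a rough sketch of that idea (the names `buildLocalModel`, `solveWithObjective`, and `localModel`, as well as the model data, are placeholders, not a prescribed API): build the model once on every worker, keep it in a worker-local global, and then only send the new objective data across processes:

```
# One model per worker, stored in a worker-local global, so only the
# objective vector is sent between processes (placeholder names and data).
addprocs(2)
@everywhere using JuMP
@everywhere using Ipopt

@everywhere function buildLocalModel()
    global localModel = Model(solver = IpoptSolver(print_level = 0))
    @variable(localModel, x[1:2], lowerbound = -1, upperbound = 1)
    @constraint(localModel, x[1] + x[2] == 0)
    return nothing
end

@everywhere function solveWithObjective(c)
    @objective(localModel, Min, dot(c, localModel[:x]))
    solve(localModel)
    return getobjectivevalue(localModel)
end

# build once on every worker
for p in workers()
    remotecall_fetch(buildLocalModel, p)
end

# in each iteration, only the objective vector crosses processes
results = [remotecall_fetch(solveWithObjective, p, rand(2)) for p in workers()]
```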

Thanks, I am looking into this!

Thanks! That’s helpful!

Thanks for the reply. Can you please specify how to do this?

Say I have 3 models and 3 processes, can I do something like:

```
@everywhere using JuMP
@everywhere using Ipopt

function generateModel()
    mList = Array{JuMP.Model}(3)
    for i in 1:3
        m = Model(solver = IpoptSolver(print_level = 0))
        @variable(m, x[1:2], lowerbound = -1, upperbound = 1)
        a = rand(2)
        @constraint(m, dot(a, x) == 0)
        mList[i] = m
    end
    return mList
end

@everywhere function solveModel(m::JuMP.Model)
    c = rand(2)
    @objective(m, Min, dot(c, m[:x]))
    s = solve(m)
    return getobjectivevalue(m)
end

mList = generateModel()

addprocs(2)
for i in 1:3
    rr = RemoteChannel(i)
    put!(rr, mList[i])
end

bList = Any[]
@time for k in 1:100
    b = Dict(i => 0.0 for i in 1:3)
    @sync begin
        for i in 1:3
            b[i] = solveModel(mList[i])
        end
    end
    push!(bList, b)
end
```

When I tried to put each model into a process, I had

```
LoadError: On worker 2:
UndefVarError: JuMP not defined
```

In addition, can this make sure that in every iteration (1:100), model i will be solved on process i? I am also a little confused about when to use `@parallel` after `@sync`.

Thank you!

Hi. Thank you for the reply. I am trying to work out a simple example:

```
addprocs(2)
@everywhere using JuMP
@everywhere using Ipopt

@everywhere function generateModel()
    m = Model(solver = IpoptSolver(print_level = 0))
    @variable(m, x[1:2], lowerbound = -1, upperbound = 1)
    a = rand(2)
    @constraint(m, dot(a, x) == 0)
    return m
end

@everywhere function solveModel(m::JuMP.Model)
    c = rand(2)
    @objective(m, Min, dot(c, m[:x]))
    solve(m)
    return getobjectivevalue(m)
end

mList = Array{JuMP.Model}(3)
@sync for i in 1:3
    mList[i] = remotecall_fetch(generateModel, i)
end
```

I think I am generating a model in each process, and my first question is: am I generating the models in parallel?

In addition, I am a little confused about the following different approaches to solving the models:

(a)

```
sol = Array{Float64}(3)
@time @sync for i in 1:3
    sol[i] = remotecall_fetch(solveModel, i, mList[i])
end
```

(b)

```
sol = Array{Float64}(3)
@time @sync @parallel for i in 1:3
    sol[i] = remotecall_fetch(solveModel, i, mList[i])
end
```

(c)

```
sol = Array{Float64}(3)
@time @sync for i in 1:3
    sol[i] = remote_do(solveModel, i, mList[i])
end
```

(d)

```
sol = pmap(solveModel, mList)
```

Can anyone help explain the differences between these four approaches? Thanks!

It looks like you’re adding procs after `using JuMP` and declaring functions. The procs you add are not able to use anything declared with `@everywhere` before the proc is added.

Thanks for the reply. Yes you are right. I just realized that. I changed the code in the previous reply.

Hi. Could you please explain what is happening in the while loop marked by the macro `@async` in the `pmap` function? Thanks

Basically, you have the `@sync` loop over the processors and the `@async` loop over the tasks. At the beginning you get the new task (for you, a new objective from a list) and break if the list is empty. This is all done by the master processor. Then you use the `remotecall_fetch` function to send the task to the processor. Do you need more information?
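A rough sketch of that pattern (the names `solveAll` and `solveWithObjective` are placeholders; `solveWithObjective` is assumed to be defined with `@everywhere` on the workers and to solve that worker's local model with the given objective vector):

```
# The master hands out tasks: one @async loop per worker keeps pulling the
# next objective off the list until it is exhausted (placeholder names).
function solveAll(objectiveList)
    n = length(objectiveList)
    results = Vector{Float64}(n)
    i = 1
    nextidx() = (idx = i; i += 1; idx)   # master picks the next task
    @sync for p in workers()
        @async while true
            idx = nextidx()
            idx > n && break             # stop this worker when the list is empty
            # send only the objective data to the worker, fetch the objective value
            results[idx] = remotecall_fetch(solveWithObjective, p, objectiveList[idx])
        end
    end
    return results
end

allResults = solveAll([rand(2) for k in 1:10])
```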

I see. So assume I have an equal number (n) of models and processes, and I want each process to take care of one model. Can I do something like

```
@sync for i in 1:n
    @async result[i] = remotecall_fetch(solveModel, i, modelList[i])
end
```

where `modelList` is an array of `n` JuMP models.

In addition, what I need is that in each iteration of my algorithm, I run the above code (in parallel) to re-solve the n models, and each process always handles the same model. Is it necessary to build and store each model in a process (say, with the same id), or is it fine to create all the models in the main process (with id 1)?

Thanks.