Distributed Optimization | Working with multiple JuMP models

Hi All,
What is the best way to build a distributed optimization framework with JuMP?

  • I have global variables associated with the central problem.
  • I have local copies of the central variables associated with each subproblem.

I created a central model (`M`) and local models (`m[subproblem]`) corresponding to the central and local problems.

Each subproblem is supposed to be solved within its own model. However, inside each subproblem, I need to couple the local variables to the central variables to make sure that all local variables converge to the central solution, i.e. something like

@constraint(m[subproblem],  x == M[:x])

This fails with a `VariableNotOwned{VariableRef}` error, since `M[:x]` does not belong to the model `m[subproblem]`.

Alternatively, if I create one single model with proper indexing of the central and subproblem variables, would it be possible to solve that one model and recover the subproblem solutions? Or is that the best way to implement distributed optimization?
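To illustrate, by "one single model" I mean a sketch along these lines (the solver choice, index set, and objective are placeholders, not my real problem):

```julia
using JuMP, HiGHS

S = 1:3  # subproblem indices (placeholder)
model = Model(HiGHS.Optimizer)

@variable(model, x)               # central variable
@variable(model, x_local[S])      # one local copy per subproblem
@variable(model, y[S] >= 0)       # other subproblem-specific variables

# consensus constraints: every local copy equals the central variable
@constraint(model, [s in S], x_local[s] == x)

@objective(model, Min, sum(y[s] + x_local[s] for s in S))
optimize!(model)
```

Here everything lives in one model, so the coupling constraints are legal, but the problem is no longer solved in a distributed fashion.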

You can’t take variables from one model and use them in another. You need to write some sort of decomposition framework.
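As a minimal sketch of what that looks like in plain JuMP: instead of referencing `M[:x]` inside a subproblem, pass the *value* of the central solution into each subproblem and update it between iterations. The solver, objective, and data below are placeholders; a real framework (e.g. ADMM or Benders) would also pass information back to the central model.

```julia
using JuMP, HiGHS  # solver choice is illustrative

# Each subproblem owns its own local copy `x_local` of the central variable.
# The coupling constraint is written against a numeric right-hand side,
# which we update each iteration with `set_normalized_rhs`.
function build_subproblem()
    m = Model(HiGHS.Optimizer)
    set_silent(m)
    @variable(m, x_local)
    @variable(m, y >= 0)
    @constraint(m, coupling, x_local == 0.0)  # rhs is a placeholder
    @constraint(m, y >= 1 - x_local)
    @objective(m, Min, y + 2x_local)
    return m
end

subproblems = [build_subproblem() for _ in 1:3]

# e.g. x_central = value(M[:x]) after solving the central model
x_central = 0.5
for m in subproblems
    set_normalized_rhs(m[:coupling], x_central)  # couple to the central value
    optimize!(m)
end
```

The key point is that only numbers cross model boundaries, never `VariableRef`s from another model.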

It sounds like you may want something like [Plasmo.jl](https://github.com/plasmo-dev/Plasmo.jl), a platform for scalable modeling and optimization.
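In Plasmo.jl the central problem and subproblems become nodes of an `OptiGraph`, and the coupling constraints you tried to write become link constraints between nodes. A rough sketch (node contents are placeholders):

```julia
using Plasmo

graph = OptiGraph()
@optinode(graph, central)      # node holding the central variables
@optinode(graph, subs[1:3])    # one node per subproblem

@variable(central, x)
for s in subs
    @variable(s, x_local)
    @variable(s, y >= 0)
    @objective(s, Min, y + x_local)
end

# link constraints couple variables across nodes -- this is the
# graph-level analogue of `x_local == M[:x]`
@linkconstraint(graph, [i in 1:3], subs[i][:x_local] == central[:x])
```

The graph structure is exactly what decomposition algorithms exploit, so this keeps the "one model per subproblem" organization while making the coupling explicit.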