"Switching" out the attached `Optimizer` of an (empty) model

I’m trying to do the following, in order to “parameterize” a model

using JuMP
import ParametricOptInterface
const POI = ParametricOptInterface
import HiGHS

function make_parametric!(model::JuMP.Model)
    if !isempty(model)
        @error "[core] Model must be empty when trying to switch to parametric mode"
        return nothing
    end

    # TODO: Remember/copy all optimizer attributes.

    if JuMP.mode(model) == JuMP.DIRECT
        # this does not work
        JuMP.set_optimizer(model, () -> POI.Optimizer(eval(Symbol(JuMP.solver_name(model))).Optimizer()))
    else
        # this works
        JuMP.set_optimizer(model, () -> POI.Optimizer(eval(Symbol(JuMP.solver_name(model))).Optimizer()))
    end
end

m = Model(() -> HiGHS.Optimizer())
dm = direct_model(HiGHS.Optimizer())

make_parametric!(m)  # works
make_parametric!(dm) # does not work

which fails, since we can’t switch optimizers on models in direct mode. Is there a workaround? Or is there a completely different way to do this?


You cannot switch the solver in direct mode, and there is no workaround.

Instead of trying to parameterize an existing model, you should ask the user for an optimizer constructor, and wrap that.

Something like:

function make_parametric(optimizer)
    return Model(() -> POI.Optimizer(MOI.instantiate(optimizer; with_bridge_type = Float64)))
end

This might save you from doing an eval(...), but I’m not sure what you’re aiming for overall.
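
For example, something along these lines (untested sketch; HiGHS is just a stand-in for whatever solver the user passes in):

using JuMP
import ParametricOptInterface
const POI = ParametricOptInterface
import HiGHS

pm = make_parametric(HiGHS.Optimizer)   # the user supplies the optimizer, not a model
# or, if the user wants to pre-set solver attributes:
# pm = make_parametric(optimizer_with_attributes(HiGHS.Optimizer, MOI.Silent() => true))
@variable(pm, x >= 0)
@objective(pm, Min, x)
optimize!(pm)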

using JuMP
import ParametricOptInterface
const POI = ParametricOptInterface
import HiGHS

function make_parametric_copy(model::JuMP.Model)
    # New model whose optimizer is a POI layer wrapping the existing model's
    # backend (note: the backend is shared with `model`, not copied).
    return Model(() -> POI.Optimizer(backend(model)))
end

dm = direct_model(HiGHS.Optimizer())
pm = make_parametric_copy(dm)
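
And a quick (not carefully tested) sanity check that the wrapped model takes parameters, assuming a recent JuMP/MOI/POI where MOI.Parameter and set_parameter_value are available:

@variable(pm, x >= 0)
@variable(pm, p in MOI.Parameter(1.0))   # handled by the POI layer
@constraint(pm, x >= p)
@objective(pm, Min, x)
optimize!(pm)

set_parameter_value(p, 2.0)              # update the parameter, then re-solve
optimize!(pm)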

Hmm. I guess this works if the model is empty. @blegat might want to comment. It looks fine, but the one issue is that your model and m now share the same underlying backend, so you might have to be a bit careful.

Currently our workflow looks like this:

  1. User creates a JuMP model, using a model mode, as well as optimizer of their choice
  2. User sets some optimizer attributes, or overall flags (like silent, string_names, …)
  3. Our tool builds “into” that model, solves it, and properly extracts results
  4. Potential re-solves, or other shenanigans
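
In code, roughly (a hypothetical sketch; HiGHS is just an example solver, and build_and_run! is our entry point):

model = direct_model(HiGHS.Optimizer())    # 1. user picks the mode and optimizer
set_silent(model)                           # 2. user sets attributes / flags
set_string_names_on_creation(model, false)
build_and_run!(model)                       # 3. we build into `model`, solve, extract results
# 4. potentially modify inputs and call build_and_run!(model) again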

I assume this ((1.) and (2.)) does not work with the suggested make_parametric(optimizer) approach, which is why I was trying to parameterize the model “behind the scenes” (hence also my open TODO about copying the optimizer attributes).

Furthermore, the main problem is that our exposed functionality looks something like build_and_run!(model, ...), whereas the proposed make_parametric_copy(...) needs to return the “new” m, so every caller would have to switch to model = build_and_run(model). In theory that is fine, since we return the model anyway for convenience, but most people using the tool only use the in-place formulation, so for them it would be a breaking change.


Essentially I’m aiming for:

  1. take a model as function argument
  2. depending on some setting either
    a. run with it
    b. switch to a POI-wrapped optimizer, and then run with it
  3. make sure that whatever happens in that function actually modifies the passed model since that is what the “outside” is using
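
In code, the shape I’m after is roughly this (hypothetical sketch; step 2b is exactly the part I don’t know how to do, in particular for direct mode):

function build_and_run!(model::JuMP.Model; parametric::Bool = false)
    if parametric
        # 2b. switch `model` over to a POI-wrapped optimizer *in place*
        #     (open question: how to do this, especially in direct mode)
    end
    # ... build our formulation into `model` ...
    optimize!(model)                 # 2a./2b. run with whatever `model` now is
    # ... extract results from `model` ...
    return model                     # 3. same object the caller passed in
end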

That is fine (I think). We didn’t previously expect a completely empty model, just something that does not clash with our internals, but changing that and checking isempty(model) is a reasonable “loss”.

Is there anything specific that comes to your mind where this could be dangerous?

Stupid question…

Assuming a completely empty model (except “settings”), can I do

_dm = direct_model(POI.Optimizer(backend(dm)))
dm.moi_backend = _dm.moi_backend

for a given direct model dm, or does this furiously break something in the background?

I think the answer to all of this is yes, it probably works and you can do it. But we’ve never tested or documented this at the JuMP level so it might break in future.


Then I’ll try that and see if anything clashes with it. And if it’s changed in some future version, it’s a single point of failure that our tests can easily catch on a new JuMP version; at which point I can still come back and see if there is another workaround before bumping our dependency.

Thanks @all (as always)!


Can’t you even directly do dm.moi_backend = POI.Optimizer(backend(dm))?
Note that dm.shapes is based on the constraint indices of dm.moi_backend. If you add a layer, the indices might change in general, so it might cause issues. But shapes are mostly used for PSD constraints, and POI may not change indices, so it might not be an issue. You could add some asserts for that to be on the safe side.

Tried it, and yes that works perfectly!
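
For completeness, the pattern that now works for me looks roughly like this (it relies on the undocumented moi_backend field and assumes a recent MOI/POI with MOI.Parameter support, so treat it as a sketch):

using JuMP
import ParametricOptInterface
const POI = ParametricOptInterface
import HiGHS

dm = direct_model(HiGHS.Optimizer())
set_silent(dm)                                 # user "settings" live on the inner optimizer
dm.moi_backend = POI.Optimizer(backend(dm))    # swap the POI layer in, in place

@variable(dm, x >= 0)
@variable(dm, p in MOI.Parameter(1.0))
@constraint(dm, x >= p)
@objective(dm, Min, x)
optimize!(dm)

MOI.set(dm, POI.ParameterValue(), p, 2.0)      # update the parameter, then re-solve
optimize!(dm)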