I’m trying to do the following in order to “parameterize” a model:
```julia
function make_parametric!(model::JuMP.Model)
    if !isempty(model)
        @error "[core] Model must be empty when trying to switch to parametric mode"
    end
    # TODO: Remember/copy all optimizer attributes.
    if JuMP.mode(model) == JuMP.DIRECT
        # this does not work
        JuMP.set_optimizer(model, () -> POI.Optimizer(eval(Symbol(JuMP.solver_name(model))).Optimizer()))
    else
        JuMP.set_optimizer(model, () -> POI.Optimizer(eval(Symbol(JuMP.solver_name(model))).Optimizer()))
    end
end

m = Model(() -> HiGHS.Optimizer())
dm = direct_model(HiGHS.Optimizer())

make_parametric!(m)  # works
make_parametric!(dm) # does not work
```
which fails, since we can’t switch optimizers on models in direct mode. Is there a workaround? Or is there a completely different way to do this?
This might save you from doing an eval(...), but I’m not sure what you’re aiming for overall.
```julia
using JuMP
import ParametricOptInterface
const POI = ParametricOptInterface
import HiGHS

function make_parametric_copy(model::JuMP.Model)
    m = Model(() -> backend(model))
    set_optimizer(m, () -> POI.Optimizer(backend(model)))
    return m
end

dm = direct_model(HiGHS.Optimizer())
pm = make_parametric_copy(dm)
```
Hmm. I guess this works, if the model is empty. @blegat might want to comment. It looks fine; the only issue is that your `model` and `m` now share the same memory, so you might have to be a bit careful.
1. User creates a JuMP model, using a model mode, as well as the optimizer of their choice
2. User sets some optimizer attributes, or overall flags (like silent, string_names, …)
3. Our tool builds “into” that model, solves it, and properly extracts results
4. Potential re-solves, or other shenanigans
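To illustrate, the workflow above might look roughly like this (a hedged sketch; `build_and_run!` is our tool’s entry point as mentioned below, everything else is standard JuMP):

```julia
using JuMP
import HiGHS

# 1. + 2.: the user creates a model (any mode) and sets attributes/flags.
model = Model(HiGHS.Optimizer)   # or: direct_model(HiGHS.Optimizer())
set_silent(model)
set_string_names_on_creation(model, false)

# 3. + 4.: our tool builds into the model, solves, extracts results,
# and potentially re-solves.
build_and_run!(model)
```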
I assume this ((1.) and (2.)) does not work with
which is why I was trying to parameterize it “behind the scenes” (hence also my open TODO about copying optimizer attributes).
Furthermore, the main problem is that our exposed functionality looks something like `build_and_run!(model, ...)`, whereas the proposed `make_parametric_copy(...)` needs to return the “new” `m`. Every caller would therefore have to change to `model = build_and_run(model)`. In theory that is fine (we return the model anyway for convenience), but most people using the tool actually rely on the in-place formulation, so for them it would be a breaking change.
Essentially I’m aiming for:

1. take a model as function argument
2. depending on some setting, either
   a. run with it, or
   b. switch to a POI-wrapped optimizer, and then run with it
3. make sure that whatever happens in that function actually modifies the passed model, since that is what the “outside” is using
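As a hypothetical skeleton, the aims above would look something like this (names here are placeholders, not our real API):

```julia
using JuMP

function run_maybe_parametric!(model::JuMP.Model; parametric::Bool = false)
    if parametric
        # b. switch `model` to a POI-wrapped optimizer *in place* here
        #    (this step is the open question, especially for direct mode)
    end
    # a./b. run with it; whatever happens must mutate `model` itself,
    # because callers hold on to their original binding.
    optimize!(model)
    return model  # returned only for convenience
end
```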
That is fine (I think). We didn’t previously expect a completely empty model, just something that does not clash with our internals, but changing that and checking isempty(model) is a reasonable “loss”.
Is there anything specific that comes to your mind where this could be dangerous?
I think the answer to all of this is yes, it probably works and you can do it. But we’ve never tested or documented this at the JuMP level so it might break in future.
Then I’ll try that and see if something somewhere clashes with it. And if it changes in some future version, it’s a single point of failure that our tests can easily catch on a new JuMP version; at that point I can still come back and look for another workaround before bumping our dependency.
Can’t you even directly do `dm.moi_backend = POI.Optimizer(backend(dm))`?
Note that `dm.shapes` is based on constraint indices of `dm.moi_backend`. If you add a layer, the indices might change in general, so it might cause issues. But shapes are mostly used for PSD constraints, and POI maybe does not change indices, so it might not be an issue. You could add some asserts for that to be on the safe side.
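For reference, a minimal sketch of that direct swap with the suggested asserts. This relies on JuMP internals (`moi_backend`, `shapes`), which are undocumented and may break in future versions:

```julia
using JuMP
import HiGHS
import MathOptInterface as MOI
import ParametricOptInterface as POI

dm = direct_model(HiGHS.Optimizer())
@assert isempty(dm)

# Undocumented: swap the inner backend in place, so existing references
# to `dm` stay valid and no callers need to rebind the model.
dm.moi_backend = POI.Optimizer(backend(dm))

# ... build constraints into `dm`, then, to guard against the
# index-remapping caveat: every constraint index recorded in
# `dm.shapes` must still be valid through the POI layer.
for ci in keys(dm.shapes)
    @assert MOI.is_valid(backend(dm), ci)
end
```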