I’m trying to rework how “model parameters” work in my model. I know that

```julia
using JuMP
import HiGHS

model = direct_model(HiGHS.Optimizer())
p = @variable(model)            # the "parameter" variable
fix(p, 1.0; force = true)
x = @variable(model)
@constraint(model, x >= p)
@objective(model, Min, x + 0)   # + 0 keeps the objective an affine expression
optimize!(model)

fix(p, 10.0; force = true)      # update the "parameter" and re-solve
optimize!(model)
```

works for a “somewhat low” overhead parametric RHS.
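(For comparison, there is also JuMP's built-in in-place RHS modification, which avoids the extra fixed variable entirely; a minimal sketch, using the real `set_normalized_rhs` API:)

```julia
using JuMP
import HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, x)
@constraint(model, c, x >= 1.0)
@objective(model, Min, x)
optimize!(model)

# Update the right-hand side of c in place, then re-solve:
set_normalized_rhs(c, 10.0)
optimize!(model)
```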

On the other hand, p * x obviously creates a QuadExpr, which prevents using this trick for constraint coefficients (the LHS). Is there any similar trick that comes at less performance loss than going for ParametricOptInterface.jl or ParameterJuMP.jl?

Thanks @jd-foster! Unfortunately I already know about this, and while it is the “perfect solution” from a performance point of view, it is not really usable for large-scale models without significant “manual” bookkeeping (which is exactly what I was hoping to avoid with some other approach).
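For context, I assume the approach referenced here is JuMP's in-place coefficient modification via `set_normalized_coefficient` (a real JuMP function); a minimal sketch:

```julia
using JuMP
import HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, x >= 0)
@constraint(model, c, 2.0 * x >= 1.0)  # 2.0 plays the role of the parameter
@objective(model, Min, x)
optimize!(model)

# Change the coefficient of x in constraint c in place, then re-solve:
set_normalized_coefficient(c, x, 10.0)
optimize!(model)
```

This is fast because the solver model is modified directly, but it requires you to track, per constraint, which coefficient corresponds to which parameter — the manual work described below.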

Consider the case of a variable (carrying my coefficient) being added to an expression that is then combined, together with other expressions, into a constraint. Afterwards it is hard to determine what the contribution of my initial coefficient was, because the same variable can occur in several of those expressions. Likewise, “knowing the variable”, it is not immediately clear which constraints it ended up in.
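To illustrate the bookkeeping problem: once expressions are merged into a constraint, JuMP only stores the combined coefficient, so the original contribution is no longer recoverable from the model alone. A small sketch (names are illustrative):

```julia
using JuMP

model = Model()
@variable(model, x)
e1 = 2.0 * x + 1.0   # my coefficient of interest: 2.0
e2 = 3.0 * x         # the same variable appears in another expression
@constraint(model, c, e1 + e2 <= 10)

# The constraint only stores the merged, normalized coefficient:
normalized_coefficient(c, x)  # 5.0 — the original 2.0 cannot be read back
```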

No, there are no multiplicative parameters in JuMP. As you’ve seen, there have been two attempts at introducing them (ParameterJuMP and ParametricOptInterface), but both come with performance overhead.
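For completeness, the ParametricOptInterface route mentioned above looks roughly like this (a sketch; the exact API has changed across versions, so check the POI documentation for your release):

```julia
using JuMP
import HiGHS
import ParametricOptInterface as POI

model = Model(() -> POI.Optimizer(HiGHS.Optimizer()))
@variable(model, x >= 0)
@variable(model, p in POI.Parameter(2.0))  # multiplicative parameter
@constraint(model, p * x >= 1)             # stays affine in x
@objective(model, Min, x)
optimize!(model)

# Update the parameter value and re-solve:
MOI.set(model, POI.ParameterValue(), p, 10.0)
optimize!(model)
```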