MOI: Re-evaluate custom bridge & `direct_model(...)` bridging

As far as I understand, when I add a custom bridge (`struct MyCustomBridge{T} <: MOI.Bridges.Constraint.AbstractBridge end`) to my model using

```julia
model = JuMP.Model(HiGHS.Optimizer)
MOI.Bridges.add_bridge(JuMP.backend(model).optimizer, MyCustomBridge)
```

the corresponding `MOI.Bridges.Constraint.bridge_constraint(...)` method is called the first time I call `optimize!(model)`. However, I noticed that the bridging is not automatically re-evaluated when the overall model is flagged as dirty and then re-optimized. Executing

```julia
JuMP.set_normalized_rhs(constr, JuMP.normalized_rhs(constr))
```

does properly trigger a “re-bridging” of `constr`.
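The same no-op trick, wrapped in a small helper (the name `rebridge!` is just illustrative):

```julia
# Illustrative helper: re-setting the RHS to its current value forces the
# constraint to be deleted and re-added, which re-runs the bridge.
function rebridge!(con)
    JuMP.set_normalized_rhs(con, JuMP.normalized_rhs(con))
    return con
end
```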

Is there a cleaner (as in “intended”) way to trigger this for a given constraint (but not for every constraint)?

Additionally, I tried applying this to a model created with `direct_model(...)` by wrapping the backend in a `LazyBridgeOptimizer`:

```julia
model = direct_model(HiGHS.Optimizer())
optimizer = MOI.Bridges.LazyBridgeOptimizer(backend(model))
MOI.Bridges.add_bridge(optimizer, MyCustomBridge)
```

However, creating a constraint that is not directly supported by the solver triggers the `Constraints of type ... are not supported by the solver.` error (the bridge is the same as in the non-direct model, where it works).

Can you provide a reproducible example? I don’t understand what is going on.

Your code is also a little off. It should probably be:

```julia
model = JuMP.Model(HiGHS.Optimizer)
JuMP.add_bridge(model, MyCustomBridge)

# or

model = direct_model(
    MOI.Bridges.full_bridge_optimizer(HiGHS.Optimizer(), Float64),
)
JuMP.add_bridge(model, MyCustomBridge)
```

Thanks for the hint; with all the digging into MOI, I somehow missed that JuMP has `add_bridge` too… :slight_smile:

MWE below. Short preface: I’ve got a quadratic constraint that I want to reformulate so that HiGHS can handle it; the MWE just deletes the constraint instead.

The bridge looks like this (based on your answer here):

```julia
import JuMP
import HiGHS
import MathOptInterface as MOI

struct MyCustomBridge{T} <: MOI.Bridges.Constraint.AbstractBridge end

function MOI.Bridges.Constraint.concrete_bridge_type(
    ::Type{<:MyCustomBridge},
    ::Type{<:MOI.ScalarQuadraticFunction{Float64}},
    ::Type{<:MOI.GreaterThan{Float64}},
)
    return MyCustomBridge{Float64}
end

function MOI.supports_constraint(
    ::Type{<:MyCustomBridge},
    ::Type{<:MOI.ScalarQuadraticFunction{Float64}},
    ::Type{<:MOI.GreaterThan{Float64}},
)
    return true
end

MOI.Bridges.added_constrained_variable_types(::Type{<:MyCustomBridge}) = Tuple{DataType}[]
MOI.Bridges.added_constraint_types(::Type{<:MyCustomBridge}) = Tuple{DataType,DataType}[]

# The bridge adds nothing to the model, so the constraint is effectively deleted.
function MOI.Bridges.Constraint.bridge_constraint(
    ::Type{MyCustomBridge{Float64}},
    model::MOI.ModelLike,
    f::MOI.ScalarQuadraticFunction{Float64},
    s::MOI.GreaterThan{Float64},
)
    @info "Bridging now"
    return MyCustomBridge{Float64}()
end
```

The model building looks like:

```julia
model = JuMP.Model(HiGHS.Optimizer)
JuMP.add_bridge(model, MyCustomBridge)

x = JuMP.@variable(model, [1:3], lower_bound = 0)
c = JuMP.@constraint(model, x[1] + x[2]^2 - 3 * x[3] >= 0)
JuMP.@objective(model, Min, x[1] + 0)

JuMP.optimize!(model)            # calls the bridge here

JuMP.fix(x[2], 5; force = true)  # fixing this variable would make the constraint viable for HiGHS
JuMP.optimize!(model)            # does NOT call the bridge here - the constraint is still deleted

JuMP.set_normalized_rhs(c, JuMP.normalized_rhs(c))
JuMP.optimize!(model)            # calls the bridge here
```

The important part is `JuMP.fix(x[2], 5; force=true)`, where the user may change the constraint (by fixing the variable whose square makes the constraint quadratic), essentially reducing the constraint to an affine one. The next `optimize!(...)` does not call the bridge, as far as I understand, because it does not actually “know of the change” (since `fix()` just creates an additional MOI `EqualTo` constraint on the variable).
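At the MOI level, fixing a variable roughly amounts to adding a separate variable-in-`MOI.EqualTo` constraint, so the quadratic constraint itself never receives a modification; a sketch of my understanding:

```julia
# Roughly what JuMP.fix(x[2], 5; force = true) does underneath (sketch):
# it adds/updates a bound constraint on the variable itself, leaving the
# quadratic constraint untouched.
MOI.add_constraint(
    JuMP.backend(model),
    JuMP.index(x[2]),   # the MOI.VariableIndex of x[2]
    MOI.EqualTo(5.0),
)
```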

If, however, I force an “update” of the respective constraint, e.g. using `JuMP.set_normalized_rhs(c, JuMP.normalized_rhs(c))`, then the following `optimize!(...)` will invoke the bridge, allowing me to react to the change that the user made.

While it makes sense to me that this happens :sweat_smile:, I’m looking for a less hacky way to trigger that update.

I tried that (using the same MWE as above), which, upon calling `@constraint(...)`, results in:

```
Bridge of type `MyCustomBridge{Float64}` does not support accessing the attribute `MathOptInterface.ConstraintFunction()`. [...]
```

The bridges are assumed to be independent, so calling `fix` cannot modify another constraint.

The bridge re-runs when you call `set_normalized_rhs` because your bridge does not support `MOI.modify`, so JuMP deletes the constraint, modifies it, and then re-adds it, which triggers the bridge again.
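A sketch of what opting into `MOI.modify` looks like for a bridge (the body is only a placeholder; your bridge would update whatever it added to the model):

```julia
# Sketch: handle an RHS change in-place, so JuMP does not need to
# delete and re-add the constraint.
function MOI.modify(
    model::MOI.ModelLike,
    bridge::MyCustomBridge,
    change::MOI.ScalarConstantChange{Float64},
)
    # update the bridged reformulation using change.new_constant
    return
end
```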

If you’re looking to write a bridge that depends on variable bounds, you’ll need `final_touch`.

For examples, see:

This will ensure you get to see any updated bounds before each call to `optimize!`.
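In outline (a sketch, not a full implementation), a bounds-dependent bridge opts in via `needs_final_touch` and then inspects the model in `final_touch` before each solve:

```julia
# Sketch: final_touch runs before each optimize!, so the bridge can
# re-check variable bounds (e.g. detect newly fixed variables) and
# update its reformulation accordingly.
MOI.Bridges.needs_final_touch(::MyCustomBridge) = true

function MOI.Bridges.final_touch(bridge::MyCustomBridge, model::MOI.ModelLike)
    # query the current bounds here and rebuild the reformulation if needed
    return
end
```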

I still need to write documentation for this though: Explain bridges in the documentation (better) · Issue #1504 · jump-dev/MathOptInterface.jl · GitHub

And for the last part: using `direct_model` comes with a bunch of additional assumptions about the methods you need to implement. You’re missing `MOI.get` methods for `ConstraintFunction` and `ConstraintSet`.
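For illustration, a sketch of those getters, assuming the bridge is changed to store the original function and set (the MWE’s bridge stores nothing; `MyStoringBridge` is a hypothetical name):

```julia
# Sketch: a bridge that remembers what it bridged, so the attribute
# getters the direct-model path needs can be implemented.
struct MyStoringBridge{T} <: MOI.Bridges.Constraint.AbstractBridge
    f::MOI.ScalarQuadraticFunction{T}
    s::MOI.GreaterThan{T}
end

function MOI.get(::MOI.ModelLike, ::MOI.ConstraintFunction, b::MyStoringBridge)
    return b.f
end

function MOI.get(::MOI.ModelLike, ::MOI.ConstraintSet, b::MyStoringBridge)
    return b.s
end
```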


Thanks, amazing input (as always!). And by the way: the bridges are a really fascinating piece to dig into as an outsider :smiley:

> the bridges are a really fascinating piece to dig into as an outsider

Yes, they work surprisingly well. Although they’re pretty complicated to work on.

There’s some testing infrastructure that helps a lot to check you’re getting things correct:
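The main entry point is `MOI.Bridges.runtests`, which takes the bridge type plus an input and an expected output model in MOI’s model-string format (the model strings below are only illustrative):

```julia
# Sketch of MOI.Bridges.runtests usage: it builds the `input` model,
# applies the bridge, and checks the result matches `output`
# (plus various getter and deletion checks).
MOI.Bridges.runtests(
    MyCustomBridge,
    """
    variables: x
    1.0 * x * x >= 1.0
    """,
    """
    variables: x
    """,
)
```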

Ah. I’ve just put your two questions together: JuMP: Multiplicative parameters for LP

Yes, that’s an elegant way to go about it: for solvers that don’t support quadratics, add a bridge from quadratic to linear that checks for fixed variables in `final_touch` :smile:

Open an issue or a PR at MathOptInterface.jl if you get stuck and I can point you in the right direction.

@blegat and @joaquimg will be interested/impressed.

> There’s some testing infrastructure that helps a lot to check you’re getting things correct […]

Thanks for pointing that out!

Yeah, that was the initial thought - and after reading into it, I noticed that I could do so much more using bridges (e.g. automatic/built-in piecewise approximations that are right now… let’s say “non-elegant”… in my implementation) :smiley:

I just didn’t want to jump the gun before trying it out; it could be way worse than any “modify the affine terms of some expression” approach. If I can get a working prototype up and running, I’ll make sure to discuss it further on GitHub!


Sounds good. Feel free to open a PR even if you have only half a working prototype.