JuMP: Sensitivity of Objective to Constraints

Is there a recommended way to perform sensitivity analysis of the optimization target to different constraints?

What I mean by that is a way to determine which constraints are most limiting to the objective function. For example, in this MWE I’d like to be able to ascertain which constraint could be relaxed to achieve the most benefit to the optimization target:


using JuMP
using GLPK

function optimizer(limitall,limitone)
	model = Model(GLPK.Optimizer)
	@variable(model, x[1:5])
	@constraint(model, x .<= limitall)
	@constraint(model, x[1] .<= limitone)
	vals = [1:5...]

	@objective(model, Max, sum(x .* vals) )
	optimize!(model)
	return model
end

m = optimizer(10,5)

objective_value(m)          # 145
value.(m[:x])               # [5.0, 10.0, 10.0, 10.0, 10.0]

In this case, if limitall went from 10 to 11 then my objective would improve by 14 (x[2] through x[5] each gain one unit), while limitone going from 5 to 6 would improve it by 1.
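
A quick brute-force check of those numbers is to re-solve with each limit relaxed by one unit (which is exactly what I’d like to avoid doing for every constraint):

base = objective_value(optimizer(10, 5))
objective_value(optimizer(11, 5)) - base    # limitall relaxed by one: expect 14.0
objective_value(optimizer(10, 6)) - base    # limitone relaxed by one: expect 1.0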

In practice, I have a model like this that has 100+ constraints and I’m trying to determine which are most binding.

I have seen the function JuMP.lp_sensitivity_report(m), but it seems to answer a different question: how far the right-hand side of each constraint can change while the current optimal basis stays the same, rather than how much the objective would improve.
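
For example (a sketch, using the named constraints c_all/c_one that I add further down), the report is indexed by constraint reference, and what comes back is a range rather than a marginal value:

report = lp_sensitivity_report(m)

# Indexing with a constraint reference returns (allowed decrease, allowed increase):
# how far that right-hand side can move while the current optimal basis stays optimal.
report[m[:c_all][2]]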

I have also tried wrapping the optimizer in a function f that takes the two limits and returns the objective value, and differentiating that with Zygote, but it failed with the error:

Compiling Tuple{typeof(MathOptInterface.add_variable), MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{GLPK.Optimizer}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}: try/catch is not supported.

You want to look at the Lagrange multipliers (aka dual variables): Solutions · JuMP

Thank you for the pointer!

Can you help me understand the output here? I updated the constraints to give them names:

@constraint(model, c_all, x .<= limitall)
@constraint(model, c_one, x[1] .<= limitone)

and got the dual solution:

dual.(m[:c_all])  # [-0.0, -2.0, -3.0, -4.0, -5.0]
dual.(m[:c_one])  # -1.0

I tried playing around with the model, but it hasn’t clicked yet what this is telling me. E.g. if I change the c_one constraint to @constraint(model, c_one, x[1] .<= limitone/2), then dual.(m[:c_one]) remains -1.0.

Is there a recommended way to perform sensitivity analysis of the optimization target to different constraints?

You should look up “duality” in any linear programming textbook. Another term to search for is “shadow price”. JuMP’s shadow_price(constraint) returns the change in the objective if you relax the right-hand side of the constraint, so you just need to pick the constraint with the largest shadow price.
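
On the MWE above, that would look roughly like this (a sketch; with a Max objective the shadow prices should come out as the negatives of the duals you printed, i.e. nonnegative):

# Shadow prices: objective improvement per unit relaxation of each right-hand side.
shadow_price.(m[:c_all])    # expect ≈ [0.0, 2.0, 3.0, 4.0, 5.0]
shadow_price.(m[:c_one])    # expect ≈ 1.0

# With 100+ constraints you can collect every <=-constraint and rank them:
cons = all_constraints(m, AffExpr, MOI.LessThan{Float64})
sort(cons; by = shadow_price, rev = true)   # most beneficial to relax first

That is also why dual.(m[:c_one]) stays at -1.0 when you halve the limit: the dual (and the shadow price) is a marginal rate of change, so it is unchanged as long as the constraint stays binding in the same way.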

Docs:
