Performance of lp_rhs_perturbation_range

Hi,

I have a JuMP model (an LP, currently solved with GLPK) for which I query shadow prices and perturbation ranges for a number of constraints (a few thousand in a large-ish problem).

Solving the LP takes a couple of seconds. Obtaining the margin information takes hundreds. Does it have to take this long, and is there anything that can be done about it?
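For context, the margin pass is essentially the following (simplified; `model` and `cons` stand in for the real model and the constraint references of interest):

```julia
using JuMP

# Simplified margin pass: one lp_rhs_perturbation_range call per constraint.
# `cons` stands in for the few thousand constraint references of interest.
function margins(model::Model, cons)
    results = Dict{ConstraintRef,Any}()
    for con in cons
        price = shadow_price(con)                # fast
        range = lp_rhs_perturbation_range(con)   # this is where the time goes
        results[con] = (price, range)
    end
    return results
end
```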

I’ve had a quick dig with the profiler, and lp_rhs_perturbation_range seems to be the main culprit. As far as I can tell, every call to lp_rhs_perturbation_range spends a lot of its time in _std_matrix and _std_basis_status, and since the Model is the only argument to these functions, it appears that each call regenerates the same matrix and basis status.

If this is correct, would it be feasible either to call lp_rhs_perturbation_range with a vector of right-hand sides, or perhaps to cache the results of _std_matrix and _std_basis_status somewhere in the model?
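For instance, something like this rough sketch, which stashes the results in `model.ext` (assuming, as the profile suggests, that _std_matrix and _std_basis_status really do depend only on the model, and that the cache would need invalidating on re-solve):

```julia
# Rough caching sketch, using JuMP's `model.ext` extension dictionary.
# _std_matrix / _std_basis_status are the internal functions seen in the
# profile; I'm assuming the model is their only input, so the results are
# safe to reuse until the model is solved again.
function _cached_std_data(model::Model)
    get!(model.ext, :std_data) do
        (matrix = _std_matrix(model), basis = _std_basis_status(model))
    end
end
```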

Thanks in advance!
Geoff

Geoff,

Yes, it looks like the code is pretty slow. I’ve opened an issue:

What are you using the perturbation ranges for?

Thanks Oscar!

We use the shadow prices and perturbation ranges to give users messages like: “If you did some more of X, then you would get more of Y, up to N units of X”. If they’re well guided, users can do a surprising amount to relieve constraints.

The ranges are a nicety in the messages, so I can save time by leaving them out or, if possible, providing them on demand later (for which I have a question about copying Models, but I’ll ask that in a separate thread).

Cheers,
Geoff

Interesting. We were envisaging that this would only be used for small pedagogical examples, so performance wasn’t a high priority. But this use case makes a lot of sense. How important is the “up to N units of X”? Can’t you just tell them the dual?

I don’t have time to look this week (it’s the last week of the quarter), but I may have time towards the end of next week. Alternatively, it probably isn’t too much work if you can spare John or someone for the engineering time. I think the way to go is to compute the ranges once for all constraints and return a dictionary mapping constraint indices to ranges, rather than doing it one-by-one for each constraint.
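Roughly something like this (a sketch only; `lp_rhs_perturbation_ranges` and `_range_from` are hypothetical names, with the shared data built once up front):

```julia
# Sketch of a batched version: build the standard-form data once, then
# derive every constraint's range from it. `_range_from` is a hypothetical
# helper for the per-constraint arithmetic.
function lp_rhs_perturbation_ranges(model::Model)
    A = _std_matrix(model)            # computed once, not per constraint
    basis = _std_basis_status(model)  # likewise
    ranges = Dict{ConstraintRef,Tuple{Float64,Float64}}()
    for (F, S) in list_of_constraint_types(model)
        for con in all_constraints(model, F, S)
            ranges[con] = _range_from(A, basis, con)
        end
    end
    return ranges
end
```

That way the expensive setup is paid once per solve rather than once per constraint.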

You’re exactly right: the range is a “nice-to-have” rather than essential, and I’ll experiment with providing margins on demand, rather than my current, rather crude process of solving the model, finding all the margins at once, and keeping the user waiting.

That said, the ranges

  • give us some confidence that a margin is worthwhile, since an apparently “high-value” margin can stop quite suddenly at a nearby constraint
  • give users an estimate of what kind of change to try, if they are able to make a change.

The domain is dairy processing, and we have regional models, so the margins could, for example, say: “if you move more cream from region A to region B, you could…”. So it’s helpful to know both the value of moving the cream and a rough estimate of the quantity that might be worth moving.
