I have a nonlinear JuMP model that is used in a hot loop: it is solved 4,000-8,000 times, each run with a different set of parameters. The reason for this loop is that I want to perform global sensitivity analysis on the mapping from parameters to optimal objective value.

This happens as a result of user interaction in a web application, so latency is critical.

The issue I have is that there seems to be significant overhead when optimizing simple models. I’ve done some testing with the Rosenbrock function:
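Roughly the following, a minimal sketch of the benchmark (using Ipopt as Juniper's NLP subsolver and starting from the origin are assumptions on my part):

```julia
using JuMP, Juniper, Ipopt

# Rosenbrock: f(x, y) = (1 - x)^2 + 100 (y - x^2)^2, global minimum 0 at (1, 1).
# Ipopt as the "nl_solver" and its print_level are assumed settings.
nl_solver = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
model = Model(optimizer_with_attributes(Juniper.Optimizer, "nl_solver" => nl_solver))

@variable(model, x, start = 0.0)
@variable(model, y, start = 0.0)
@NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)

optimize!(model)
```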

I’m using Juniper.jl here, since the real problem requires a MINLP solver. On my machine, the timings are:

- 1.187 ns for a single function evaluation
- 14.582 ms for optimizing the same function

Since the function is simple and extremely fast to evaluate, I assume those 14 ms are almost entirely framework overhead.

Is there a way to reduce this latency? I have tried formulating the Rosenbrock nonlinear optimization by hand with MathOptInterface, but it does not seem to help.

If linear sensitivity analysis is sufficient (i.e. the derivative of the optimum with respect to the parameters), then you can do it in one optimization run plus one linear solve, by backpropagating through the KKT equations (see e.g. ImplicitDifferentiation.jl, which helps automate this).
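For reference, a sketch of the smooth, equality-constrained case (assuming the integer variables are held fixed at their optimal values; inequality constraints add active-set bookkeeping). With the Lagrangian $L(x, \lambda; p) = f(x; p) + \lambda^\top g(x; p)$, differentiating the KKT conditions $\nabla_x L = 0$, $g = 0$ with respect to the parameters $p$ gives

$$
\begin{pmatrix} \nabla_{xx}^2 L & \nabla_x g^\top \\ \nabla_x g & 0 \end{pmatrix}
\begin{pmatrix} \partial_p x^\ast \\ \partial_p \lambda^\ast \end{pmatrix}
= - \begin{pmatrix} \nabla_{xp}^2 L \\ \nabla_p g \end{pmatrix},
$$

i.e. one linear solve with the KKT matrix you already have at the optimum. If you only need the sensitivity of the optimal objective *value*, the envelope theorem gives it without even that solve: $\frac{d}{dp} f(x^\ast(p); p) = \nabla_p L(x^\ast, \lambda^\ast; p)$.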

I think you should be realistic about solving 8,000 (!) MINLPs (!) in a latency-critical web application (!). Each of those things is difficult, and in combination…

Also note that Juniper is not a global MINLP solver, so you’re likely to encounter some weird local results with the sensitivity analysis.

What is the actual problem you are trying to solve, and why is sensitivity analysis required? Also, what are the run times for your actual application?

In that case, I will likely perform local sensitivity analysis in the fast path (the interactive application), as suggested by @stevengj. It should still provide enough information for users to quickly try out different scenarios. The full global sensitivity analysis can be a second step, once more detail is required.

“Latency-critical” was probably the wrong choice of words: waiting a few seconds is fine, but not a few minutes. Testing with the Rosenbrock function was a way to check whether the initialization costs alone would make the setup infeasible within those time constraints.
The real model itself is fast (<3 ms per evaluation) and has a small number of optimization variables (<20; it is still in development).

Thanks, you’re right. I’m still in the prototyping phase and had not realized that Juniper is a local solver.

The actual use case: we’re modelling an industrial plant (not yet built) and want to investigate its behavior and commercial viability under uncertain operational conditions (e.g. a varying cost of energy).

For this, the program has a set of parameters, some of which are defined by probability distributions. It then generates N possible configurations through Latin hypercube sampling of the uncertain parameters. It runs the model for each of those configurations, assuming the plant is optimized for its operational conditions. Finally, it performs uncertainty and sensitivity analysis on the results.
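A minimal sketch of the sampling step (illustrative only; `latin_hypercube` is a hypothetical helper, not our application code):

```julia
using Random

# Latin hypercube sampling on [0, 1)^d: each column is stratified into n
# equal-probability bins with exactly one point per bin, and the bins are
# shuffled independently per column. Each row is then mapped through the
# quantile functions of the uncertain parameters' distributions.
function latin_hypercube(rng::AbstractRNG, n::Integer, d::Integer)
    samples = Matrix{Float64}(undef, n, d)
    for j in 1:d
        # One point per bin [(i-1)/n, i/n), i = 1..n.
        points = ((0:n-1) .+ rand(rng, n)) ./ n
        samples[:, j] = shuffle(rng, points)
    end
    return samples
end

# e.g. 8 configurations of 3 uncertain parameters
configs = latin_hypercube(MersenneTwister(42), 8, 3)
```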

There are actually two models: the full version, based on a flowsheet simulator, and a simplified version, which I’ve been prototyping in JuMP. I was trying to see whether the global sensitivity analysis could run on the simplified version in an amount of time that’s reasonable for an interactive application.