Optimizing over an integral that is computed with simulation

If you are already using JuMP, https://pulsipher.github.io/InfiniteOpt.jl/dev/ is a great extension of JuMP that supports formulations involving "expectations" with respect to random variables, using a range of algorithms for computing expectations, including Monte Carlo (MC) and quadrature methods; e.g. see https://pulsipher.github.io/InfiniteOpt.jl/dev/guide/measure/. MC methods are only appropriate when the integrand has low variance, though, because the variance of the expectation estimate (the sample mean) is

Var(\tilde{E}_{\xi}[f(x; \xi)]) = Var\left(\frac{1}{N} \sum_{i=1}^N f(x; \xi_i)\right) = \frac{Var(f(x; \xi))}{N}
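You can check the 1/N scaling empirically. A minimal sketch (plain Python rather than JuMP/InfiniteOpt, with an arbitrary example integrand f(x; ξ) = ξ² and ξ ~ Uniform(0, 1) chosen purely for illustration):

```python
import random
import statistics

random.seed(0)

def f(xi):
    # Hypothetical example integrand: f(xi) = xi^2, xi ~ Uniform(0, 1)
    return xi * xi

def mc_estimate(n):
    # One MC estimate of E[f(xi)]: the sample mean over n i.i.d. draws
    return sum(f(random.random()) for _ in range(n)) / n

def estimator_variance(n, repeats=2000):
    # Empirical variance of the MC estimator across independent runs
    return statistics.variance(mc_estimate(n) for _ in range(repeats))

v10 = estimator_variance(10)
v100 = estimator_variance(100)
# Increasing N by 10x shrinks the estimator variance by roughly 10x
print(v10 / v100)
```

The printed ratio comes out near 10, matching Var(f)/N: ten times the samples, one tenth the noise in the estimate.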

where N is the number of MC samples used to compute the expectation estimate (and the \xi_i are drawn i.i.d.). If the variance of the integrand f is small enough, MC can be used even in high-dimensional cases, because you can drive the noise down to a level where a deterministic optimization algorithm works without noticing that the objective changes slightly on each evaluation. Alternatively, you can fix the set of samples upfront and obtain a deterministic approximation of the original problem regardless of the variance of f; the trade-off is that the true quality of the solution you obtain degrades with fewer samples or a higher-variance f.
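The "fix the samples upfront" approach (often called sample average approximation) can be sketched as follows. Again plain Python rather than JuMP, and the integrand f(x; ξ) = (x − ξ)² with ξ ~ Uniform(0, 1) is a hypothetical example chosen only to make the point:

```python
import random

random.seed(42)

# Draw the sample set ONCE, before optimization begins
samples = [random.random() for _ in range(1000)]

def approx_objective(x):
    # Sample-average approximation of E_xi[(x - xi)^2] over the
    # frozen sample set: a deterministic function of x, so any
    # deterministic optimizer can be applied to it directly.
    return sum((x - xi) ** 2 for xi in samples) / len(samples)

# Re-evaluating at the same x gives exactly the same value,
# because the samples never change between evaluations
same = approx_objective(0.3) == approx_objective(0.3)
print(same)

# For this particular integrand, the SAA minimizer is the sample
# mean of xi, so it beats any other candidate point
xbar = sum(samples) / len(samples)
print(approx_objective(xbar) <= approx_objective(0.9))
```

The frozen objective is exactly reproducible, which is what lets deterministic solvers converge; the price is that it approximates the true expectation only as well as those N samples do.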
