[ANN] MonteCarloMeasurements.jl

Thanks for the tip! I created a WIP PR, WIP since it’s not quite working yet… :confused:

Some updates:

We now support

Further, the documentation in the readme has been extended quite a lot

New examples

Robust probabilistic optimization

Here, we use MonteCarloMeasurements to perform robust optimization. By robust and probabilistic, we mean that we place some kind of bound on a quantile of an uncertain value, or otherwise make use of the probability distribution of some value that depends on the optimized parameters.

The application we consider is optimization of a PID controller. Normally, we are interested in controller performance and robustness against uncertainty. Robustness is often enforced by placing an upper bound on the so-called sensitivity function. When the system to be controlled is parameterized by Particles, we can penalize both the variance of the performance measure and the 90th quantile of the maximum of the sensitivity function. This example illustrates how easy it is to incorporate probabilistic constraints or cost functions in an optimization problem using Particles.
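
The pattern boils down to something like the following minimal sketch (the uncertain gain, the weights and the quantile level are made up for illustration; this is not the PID example from the repo):

using MonteCarloMeasurements, Statistics

k = 1 ± 0.2                      # uncertain plant gain

# Toy performance measure; cost(x) returns Particles, i.e. a distribution of costs
cost(x) = (k*x - 1)^2

# Robust/probabilistic objective: penalize the expected cost, its variance,
# and the 90th quantile so that bad outcomes are explicitly discouraged
function robust_objective(x)
    c = cost(x)
    mean(c) + 0.1*std(c)^2 + 0.5*quantile(c, 0.9)
end

robust_objective(0.8)            # an ordinary Float64, ready for any optimizer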

Autodiff and Robust optimization

Another example using MonteCarloMeasurements to perform robust optimization, this time with automatic differentiation. We use Optim.jl to solve a linear program with probabilistic constraints using four different methods: two gradient-free, one first-order, and one second-order method.
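
A rough sketch of the idea (the problem and the penalty weight are invented, and I use a quantile penalty rather than the exact formulation from the example): fold the probabilistic constraint into the objective and hand it to one of Optim's gradient-free methods.

using MonteCarloMeasurements, Optim, Statistics

a = 2 ± 0.3                                 # uncertain coefficient
# Maximize x subject to a probabilistic constraint on a*x,
# here encoded as a quadratic penalty on the 95th quantile
function objective(x)
    con = quantile(a*x[1], 0.95) - 1.0      # ≤ 0 when the constraint holds
    -x[1] + 1000*max(con, 0.0)^2
end

res = Optim.optimize(objective, [0.1], NelderMead())
Optim.minimizer(res)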

Control systems

This example shows how to simulate control systems (using ControlSystems.jl) with uncertain parameters. We calculate and display Bode diagrams, Nyquist diagrams and time-domain responses. We also illustrate how the package ControlSystemIdentification.jl interacts with MonteCarloMeasurements to facilitate the creation and analysis of uncertain systems.

We also perform some limited benchmarks.

Latin Hypercube Sampling

We show how to initialize particles with LHS and how to make sure the sample gets the desired moments. We also visualize the statistics of the sample.
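
The example relies on a dedicated LHS package; purely to show how a custom sample ends up as Particles, here is a hand-rolled 1-D stratified sample (the helper name lhs_particles is made up):

using MonteCarloMeasurements, Distributions, Random

# Hand-rolled 1-D stratified (Latin hypercube style) sample: one uniform draw
# per stratum of [0, 1], mapped through the inverse CDF of the target distribution
function lhs_particles(d, N)
    u = ((0:N-1) .+ rand(N)) ./ N
    Particles(map(q -> quantile(d, q), shuffle(u)))
end

p = lhs_particles(Normal(2, 0.5), 500)
mean(p), std(p)                  # close to the desired moments 2 and 0.5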

How MC uncertainty propagation works

We produce the first figure in this readme and explain in visual detail how different forms of uncertainty propagation propagate a probability distribution through a nonlinear function.
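
The gist in code form (a tiny illustration, not the script that generates the figure): push a particle cloud through a nonlinear function and compare with the linearized propagation performed by Measurements.jl.

using MonteCarloMeasurements, Measurements, Distributions, Statistics

f(x) = x^2 + sin(3x)                        # a nonlinear test function

p = Particles(10_000, Normal(1.0, 0.3))     # Monte Carlo representation of N(1, 0.3)
m = Measurements.measurement(1.0, 0.3)      # linear (first-order) propagation

mean(f(p)), std(f(p))                                       # Monte Carlo estimate
Measurements.value(f(m)), Measurements.uncertainty(f(m))    # linearized estimate

For a strongly nonlinear f the two estimates start to disagree, which is what the figure visualizes.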

New benchmark

The benchmark results below come from examples/controlsystems.jl. The benchmark consists of calculating the Bode curves for a linear system with uncertain parameters:

using MonteCarloMeasurements, ControlSystems, BenchmarkTools

w  = exp10.(LinRange(-1, 1, 200)) # Frequency vector
p  = 1 ± 0.1                      # Uncertain gain
ζ  = 0.3 ± 0.1                    # Uncertain damping ratio
ω  = 1 ± 0.1                      # Uncertain natural frequency
G  = tf([p*ω], [1, 2ζ*ω, ω^2])    # Transfer function with particle coefficients
t1 = @belapsed bode($G, $w)       # Time the Bode computation
   ⋮
Benchmark                               Result
Time with 500 particles                 11.5937 ms
Time with regular floating point        0.3456 ms
Time with Measurements                  0.5794 ms
Time with 100 static particles          1.8212 ms
Time with static sigmapoints            0.7252 ms
500× floating point time                172.7780 ms
Speedup particles vs. Manual            14.9x
Slowdown particles vs. Measurements     20.0x
Slowdown static vs. Measurements        3.1x
Slowdown sigma vs. Measurements         1.3x

Very cool package! Are there any plans to implement multi-threading as an alternative to pmap?

Thanks!

I have been considering it, but I think most functions would not benefit very much from naive threading. The speedup you get by using SIMD on almost every internal operation together with not redoing computation on deterministic input arguments is quite hard to beat. Possibly a blocking strategy where the computations are divided into exactly nthreads blocks could make use of both threads and SIMD. I’ll think about it :slight_smile:

Instead of just doing multithreading, can we try and get this finished up for automated GPU goodies? :slight_smile: WIP: Automatic GPU-parallelism of swarms by ChrisRackauckas · Pull Request #6 · baggepinnen/MonteCarloMeasurements.jl · GitHub. Is there another place in the code which is calling map?

Yeah, I can’t really help out with that, I can never become friends with the GPU drivers… :confused:
It would be nice to have, though.

map is used in the macro @bymap, which acts as a fallback for a top-level function f that fails when called with particles f(p1, p2, ...).

@bymap f(p1, p2, ...) just transforms the call to a map over f and assembles the result into a Particles in the end.
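
In code, the idea is roughly the following (blackbox is a made-up stand-in for a function that chokes on Particles, and the manual map uses the internal particles field purely for illustration):

using MonteCarloMeasurements

# Made-up function that works on scalars but not on Particles (branches on a comparison)
blackbox(a, b) = a > b ? a - b : b - a

p1, p2 = 1 ± 0.1, 2 ± 0.1

y = @bymap blackbox(p1, p2)      # run once per sample, collect into Particles

# Conceptually equivalent, using the internal vector of samples directly:
y2 = Particles(map(blackbox, p1.particles, p2.particles))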

I don’t think @bymap/@bypmap needs GPU support though.

New version of Julia, new version of MonteCarloMeasurements :slight_smile:

Since I last updated this post, there have been quite a lot of usability changes; some highlights:

  • Documentation has been improved and moved from the README to a proper website. Home · MonteCarloMeasurements Documentation
  • We have a number of new contributors, thanks to all of you :pray:
  • Plotting is now more robust
  • Discrete distributions (i.e., distributions where a sample is an Int or a Bool etc.) work much better, thanks to @simeonschaub
  • Particles and missing should now play nicely together, thanks @simeonschaub.
  • Performance has increased, in the updated benchmark below by about 30% (not counting performance improvements to Julia itself); details here.
  • Conversion from particles and arrays of particles to regular arrays is much smoother, thanks to @sethaxen
  • MonteCarloMeasurements has been given support in the visualization library ArviZ.jl, thanks to @sethaxen
  • @cscherrer has written a nice blog post on how to make use of Particles for variational inference. MCM has also been incorporated into Soss.jl
  • The Slack channel #monte-carlo-measurements has been created for discussions related to the package.
  • Both Zygote and ForwardDiff can differentiate code that contains Particles; see the robust optimization example and the small sketch after this list.
  • Better handling of NamedTuple with particle fields.
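
Along the lines of the robust optimization example mentioned above, differentiating a loss that closes over Particles might look like this (a minimal sketch with an invented loss, not the example from the repo):

using MonteCarloMeasurements, ForwardDiff, Statistics

k = 1 ± 0.1                          # uncertain model parameter
loss(x) = mean((k*x - 2)^2)          # expected cost over the particle cloud

ForwardDiff.derivative(loss, 1.0)    # derivative of a Particles-containing loss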

Benchmark

The new results below are obtained with a faster computer and a newer version of Julia. The relative timings at the bottom are, however, indicative of some nice performance improvements:

Benchmark                                 Old results    New results
Time with 500 particles                   11.5937 ms     1.3632 ms
Time with regular floating point          0.3456 ms      0.0821 ms
Time with Measurements.jl                 0.5794 ms      0.1132 ms
Time with 100 static particles            1.8212 ms      0.2375 ms
Time with static sigmapoints              0.7252 ms      0.0991 ms
500× floating point time                  172.7780 ms    41.0530 ms
Speedup particles vs. Manual              14.9x          30.1x
Slowdown particles vs. Measurements.jl    20.0x          12.0x
Slowdown static vs. Measurements.jl       3.1x           2.1x
Slowdown sigma vs. Measurements.jl        1.3x           0.9x

I’d like to add support for something like this to BAT.jl, and I was wondering: what’s the relationship (if any) between MonteCarloMeasurements.jl and @carstenbauer’s MonteCarloObservable.jl (https://github.com/crstnbr/MonteCarloObservable.jl)? If they’re similar approaches to the problem, I may try to add support for both.

Based on the Readme of MonteCarloObservable it seems we are solving different problems.

I’m not sure how you envision support from MCM. It seems BAT is somewhat similar to Soss, with some functionality overlapping with ArviZ as well, so the support present in those packages may perhaps serve as inspiration for your package?

Ah, yes - I took a closer look at MonteCarloObservable, I think it goes in a different direction.

BAT is somewhat similar to Soss, with some functionality overlapping with ArviZ as well, so the support present in those packages may perhaps serve as inspiration for your package

Thanks, I’ll definitely have a look! BAT differs from Soss in that it’s about use cases where the user has a custom likelihood distribution; it doesn’t provide a probabilistic programming DSL, but it also doesn’t require a forward model. MonteCarloMeasurements should fit in very well, though.

I also want to see if I can mesh MonteCarloMeasurements with ValueShapes.jl.


I’m looking forward to seeing what you have in mind :slightly_smiling_face:

Ah, what I mean is, I want to make BAT support MCM - as in, being able to convert the BAT MCMC output to MCM Particles.

That should be quite straightforward, I think; this is what Soss does :smiley:
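
For the record, such a conversion could look something like this (sample sizes and parameter names invented for illustration):

using MonteCarloMeasurements

# Pretend this is the output of an MCMC run: 2000 samples of 3 parameters
samples = randn(2000, 3)

# One Particles per parameter, all sharing the same underlying sample index
params = [Particles(samples[:, j]) for j in 1:size(samples, 2)]

# ...or a NamedTuple if the parameters have names
posterior = (μ = params[1], σ = params[2], τ = params[3])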

There is now a paper outlining MonteCarloMeasurements

There is also a new branch with support for GPU particles, which are effective for sample sizes north of 100k and currently show a speedup of approximately 5-7 times.
