# Question on behavior with comparisons in MonteCarloMeasurements

I’m playing with MonteCarloMeasurements but running up against limitations around comparisons.

julia> a = Particles(1000, Uniform(0.0,10.0))
Part1000(5.0 ± 2.89)

julia> b = Particles(1000, Uniform(0.0, 10.0))
Part1000(5.0 ± 2.89)

julia> a>b
ERROR: Comparison operators are not well defined for uncertain values and are currently turned off. Call unsafe_comparisons(true) to enable comparison operators for particles using the current reduction function mean. Change this function using set_comparison_function(f).
Stacktrace:
 _comparison_operator at C:\Users\klaffedk\.julia\packages\MonteCarloMeasurements\7ez8R\src\particles.jl:333 [inlined]
 <(::Particles{Float64,1000}, ::Particles{Float64,1000}) at C:\Users\klaffedk\.julia\packages\MonteCarloMeasurements\7ez8R\src\particles.jl:345
 >(::Particles{Float64,1000}, ::Particles{Float64,1000}) at .\operators.jl:294
 top-level scope at none:0

julia> min(a,b)
Part1000(3.282 ± 2.33)


Why does min(a,b) seem to work? Isn’t it ultimately doing comparisons?

I’m trying to figure out if MonteCarloMeasurements.jl can be used for a real-world problem that includes saturation effects.

 P.S. I was able to rewrite my actual function with min and max and move forward, but I would still like to understand why.
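For context, the saturation I mentioned can be expressed with min/max (or clamp), which act on each sample independently. A minimal sketch using a plain vector of samples as a stand-in for Particles, so it runs without the package (the [2, 8] saturation limits are made up for illustration):

```julia
using Statistics: mean

# Stand-in for Particles: a plain vector of Monte Carlo samples.
samples = rand(1000) .* 10.0            # Uniform(0, 10), like the example above

# Saturation at [2, 8]: clamp acts on each sample independently,
# just like min/max do on Particles.
saturated = clamp.(samples, 2.0, 8.0)

@show mean(samples) mean(saturated)
```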

The reason comparisons are not defined by default is that it’s not clear how you compare two probability distributions. When is one smaller than the other? I outline the limitations in this section of the docs

You can use the function unsafe_comparisons

help?> unsafe_comparisons
search: unsafe_comparisons

unsafe_comparisons(onoff=true; verbose=true)

Toggle the use of a comparison function without warning. By default mean is used to reduce particles to a floating point number
for comparisons. This function can be changed, example: set_comparison_function(median)


to compare particles using their mean or median etc., but this is not always a reasonable way of comparing distributions, which is why it is disabled by default. You may even define your own comparison, customized to your application, e.g., quantile(a, 0.9) < quantile(b, 0.1), by calling set_comparison_function.
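For illustration, such a quantile-based comparison can be written for plain sample vectors standing in for Particles (the name conservative_lt is made up; the 0.9/0.1 quantile levels follow the example above):

```julia
using Statistics: quantile

# Hypothetical custom comparison: a is "less than" b only if a's 0.9
# quantile lies below b's 0.1 quantile, i.e. the clouds barely overlap.
conservative_lt(a, b) = quantile(a, 0.9) < quantile(b, 0.1)

a = randn(1000)            # centered at 0
b = 10.0 .+ randn(1000)    # centered at 10, far to the right

@show conservative_lt(a, b)   # true: the clouds are well separated
@show conservative_lt(b, a)   # false
```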

min works because it is defined as a primitive function through which each particle propagates one by one.
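This elementwise behavior can be mimicked with plain sample vectors: min is applied to paired samples one by one, producing a new cloud rather than a single Bool (a sketch, not the package internals):

```julia
using Statistics: mean

a = rand(1000) .* 10.0     # samples mimicking Particles(1000, Uniform(0, 10))
b = rand(1000) .* 10.0

# min.(a, b) pairs up sample i of a with sample i of b — no distribution-level
# comparison is ever made, so no ambiguity arises.
m = min.(a, b)

@show mean(m)   # below 5.0, since the minimum of two uniforms skews low
```

This matches the Part1000(3.282 ± 2.33) result in the transcript above: the minimum of two independent Uniform(0, 10) variables has mean 10/3.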

If you want <,>,<=, >= to behave like min, i.e., operate elementwise, you can run the following

foreach(register_primitive, [<=, >=, <, >])


Note that if a < b then appears in a boolean context, e.g., inside an if statement, you will still get an error message, since the elementwise result is a particle of Bools rather than a single Bool.

2 Likes

Thank you! I just read your new paper (very nice!) and am coming up to speed on what’s happening under the covers.

1 Like

Thanks! Drop a line here or on the issue tracker if the documentation is lacking somewhere!

1 Like

You can compare them (usefully) using a fuzzy comparison: given two continuous random variables A and B,

A > B \overset{\cdot}{=} \left\langle \frac{\rm{sign}(a-b)+1}{2} \right\rangle_{a\sim A,\,b\sim B} \in [0,1]_{\mathbb{R}}

in actual code

using Statistics: mean

A = 3.2 .+ randn(1000) .* 0.14
B = 3.3 .+ randn(1000) .* 0.1

function fuzzyge(A, B)
    # not exactly the formula above (samples are paired index-wise rather than
    # averaged over all pairs), but it behaves the same here
    mean(@. (sign(A - B) + 1) / 2)
end
@show fuzzyge(A, B)
@show fuzzyge(A,B)

fuzzyge(A, B) = 0.276


You can also return a binomial with p = ...

The easiest thing to do would be to return a particle of Bools, but the problem is that comparison is usually used within if statements, while loops or short-circuit logic operators, which will only accept a single Bool. The only way to make this work would be to do some source-to-source rewriting, like what Zygote does, but this would require a complete rewrite and make the API less convenient. You can still work around this with @bymap for functions that won’t work, but you’ll lose a lot of the performance advantages from better vectorization.
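The idea behind @bymap can be sketched without the package: apply a function containing control flow to one sample at a time, so each call sees an ordinary Float64 and the comparison yields a single Bool (the names here are illustrative, not the package's internals):

```julia
# A function with a branch: fine for scalars, ambiguous for whole sample clouds.
saturate(x) = x > 8.0 ? 8.0 : x

a = rand(1000) .* 10.0   # stand-in for Particles(1000, Uniform(0, 10))

# "bymap" strategy: push each realization through the scalar function,
# then treat the collected outputs as the result cloud.
result = map(saturate, a)

@show maximum(result)   # never exceeds 8.0
```

This is exactly the loss of vectorization mentioned above: the samples are processed one by one instead of as a whole.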

1 Like

As Simeon commented, comparing two distributions can be done in a lot of ways, but unless the comparison produces a Bool, it will fail for control flow. I opted for having them undefined by default, forcing the user to think about how they want the comparison to be done, rather than defaulting to some kind of comparison that turns out to be invalid for the user’s use case.

The source-rewriting strategy is indeed an interesting idea and I created a branch to experiment with this a couple of months ago. It turned out to be over my head at the time and I chose to wait until a more general solution appeared in the ecosystem. Hydra.jl promises to be such a general solution, but it is not really mature yet. I had a discussion with Mike about it here, but it is so far an open issue.

4 Likes

Your package is very nice and I plan on using it. Indeed, it would not help to define a behavior for comparisons that restricts the use cases of the package.

Very nice work

1 Like