How to improve runtime with Measurements.jl?

Dear all,

I am using the Measurements.jl package to calculate the errors in my dataset. I have an array of voltage values which each have their own uncertainty. But when I want to calculate the mean of the array, it takes 27 seconds and allocates 805.25 MiB. The array consists of around 47,000 elements, all of the same type. Does anyone know how to improve the performance of this operation? For an array of the same size with only Float64 elements the operation finishes almost instantly. Thanks for helping me out.


Welcome! :slight_smile: It’s hard to say what’s causing allocations or slowing down your code if you don’t share it with us, so please share a MWE (minimal working example) showing the problem.

Please read: make it easier to help you

Also, have you followed most of the tips mentioned in the Performance Tips section of the manual?

```julia
using Statistics, Measurements

VoltageArray = rand(50000) # Works really fast

VoltageArray = VoltageArray .± 0.1VoltageArray # Works really fast

mean(VoltageArray) # Takes a long time to run
```

This is a MWE of my problem. I hope I made it clear.


Divisions are expensive:

```julia
julia> @time (1 ± 0.1) / (2 ± .2)
  0.000007 seconds (4 allocations: 192 bytes)
0.5 ± 0.071
```

According to this issue on their repo, this is known; you may be looking for `weightedmean` instead:

```julia
julia> v = rand(50_000);

julia> v = v .± 0.1 .* v;

julia> @time weightedmean(v)
  0.001619 seconds (6 allocations: 781.438 KiB)
0.0002565 ± 2.2e-6
```

Though you should note:

NOTA BENE: correlation is not taken into account.

Accumulation of error bars is non-trivial; otherwise this would already be much faster, I assume.
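
For concreteness, the correlation-free weighted mean is just inverse-variance weighting, a single O(n) pass. A minimal sketch of that textbook formula (my own illustration, not necessarily the package’s exact implementation):

```julia
using Measurements

# Inverse-variance weighted mean, ignoring any correlations between elements:
# w_i = 1/σ_i², mean = Σ w_i x_i / Σ w_i, uncertainty = 1/√(Σ w_i).
function naive_weightedmean(v::AbstractVector{<:Measurement})
    w = 1 ./ Measurements.uncertainty.(v) .^ 2
    m = sum(w .* Measurements.value.(v)) / sum(w)
    return m ± 1 / sqrt(sum(w))
end
```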


Sure, divisions, but look at this:

```julia
jl> v = rand(100);

jl> vm = v .± 0.1 .* v;

jl> @btime sum($v)
  9.409 ns (0 allocations: 0 bytes)
54.45849425120919

jl> @btime sum($vm)
  14.700 μs (102 allocations: 6.44 KiB)
54.46 ± 0.62
```

That’s a 1500x slowdown(!) Is that really expected?


Not my words :man_shrugging:

Maybe something has changed since then (@giordano could chime in), but I doubt anything major has changed here. Keeping track of correlations is hard.


Yes, unfortunately that’s expected; I made the example of the mean in the issue linked above. As @Sukera pointed out, tracking correlation is hard. It’s pretty easy to write a package that propagates uncertainties super quickly by ignoring correlations (this is what Measurements.jl did until v0.02), but that’s also incredibly dumb and useless: almost no identities would hold; for example, `x + x` and `2 * x` would give you different results.
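
(A quick illustrative check you can run yourself, not from the original discussion: because Measurements.jl does track correlations, those identities hold and self-cancellation works as expected.)

```julia
julia> using Measurements

julia> x = 1.0 ± 0.1;

julia> x + x   # same as 2x, because the two terms are fully correlated
2.0 ± 0.2

julia> x - x   # exactly zero, not ±0.14 as naive, correlation-free propagation would give
0.0 ± 0.0
```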

Regarding the mean in particular, note that most of the time users want to compute the weighted mean, for which Measurements.jl provides a specific function, which should have much more reasonable performance. Note that it ignores correlation, as warned in the docstring, because you’d usually apply it to a sample of independent measurements anyway.

To be clear, Measurements.jl is slow because of an algorithmic limitation: it uses an O(n^2) algorithm to propagate uncertainties, so you can see why mean/sum are particularly bad and get worse and worse as the size of the vector increases. There may be clever ways to reduce the complexity of the algorithm, but I never had the time to look into it. I believe the Python package uncertainties now has an algorithm which is O(n), or anyway better than O(n^2), but until a few years ago it was using basically the same algorithm as Measurements.jl (well, historically it’s the other way around). If anyone is willing to help, I’d be glad to hear from them.
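
For contrast, a correlation-free sum is a single O(n) pass that just adds variances in quadrature. A minimal sketch of that naive approach (my own illustration, assuming every element is independent, which is exactly the assumption Measurements.jl refuses to make):

```julia
using Measurements

# Correlation-free propagation for a sum: assumes all elements are independent,
# so the variances simply add. One pass over the data, hence O(n).
function naive_sum(v::AbstractVector{<:Measurement})
    μ = sum(Measurements.value.(v))
    σ = sqrt(sum(abs2, Measurements.uncertainty.(v)))
    return μ ± σ
end
```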


Okay, I had no idea; I thought it was the same as interval arithmetic.

No, I believe interval arithmetic doesn’t track correlations at all, which is probably fine for what it’s applied to.
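
A tiny hand-rolled illustration (not any particular package): plain interval subtraction cannot know that both operands are the same quantity, so `x - x` comes out as a wide interval rather than zero.

```julia
# Interval represented as a (lo, hi) tuple, just for illustration.
# Interval subtraction: [a₁, a₂] - [b₁, b₂] = [a₁ - b₂, a₂ - b₁].
intervalsub(a, b) = (a[1] - b[2], a[2] - b[1])

x = (1.0, 2.0)
intervalsub(x, x)   # (-1.0, 1.0): the dependency between the two operands is lost
```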


Bummer, that project has a weird custom license :sweat: It may be a BSD license according to git history, but I’m not a copyright lawyer…

I think it is a 3-clause BSD license (The 3-Clause BSD License | Open Source Initiative). What that means for a port I don’t know, but it’s that one.

Then we may be in luck: that license only places restrictions on redistributions in binary or source form (a port is, as far as I know, neither), as well as on using their name to endorse our port (i.e. we should be fine if we write something like “port inspired by”…).