Measurements.jl: how to zero values without eliminating the uncertainty / strange behavior with deepcopy

I have the following scenario: I want to propagate the uncertainty of the positions of particles in a simulation. At each time step, I compute the forces, a computation that starts with setting the forces to zero. The code reads like:

forces .= zero(T)

The problem I’m having is that zero(T) zeroes the uncertainty if T is a measurement:

julia> x = measurement(1.0,0.1)
1.0 ± 0.1

julia> x = zero(x)
0.0 ± 0.0

And thus the uncertainty is not propagated through the force calculation. Then I thought: ok, let's zero the forces by subtracting the same value. But subtracting a value from itself zeroes the uncertainty as well:

julia> x = measurement(1.0,0.1)
1.0 ± 0.1

julia> x - x
0.0 ± 0.0

I would need actually something like this:

julia> x = measurement(1.0,0.1)
1.0 ± 0.1

julia> y = measurement(x.val,0.)
1.0 ± 0.0

julia> x - y
0.0 ± 0.1

But if my function is expected to receive more arbitrary types, like static vectors of measurements:

julia> x = SVector{2,Measurement{Float64}}(measurement(1.,0.1),measurement(1.,0.1))
2-element SVector{2, Measurement{Float64}} with indices SOneTo(2):
 1.0 ± 0.1
 1.0 ± 0.1

julia> zero(x)
2-element SVector{2, Measurement{Float64}} with indices SOneTo(2):
 0.0 ± 0.0
 0.0 ± 0.0

supporting this becomes cumbersome and way too specific.

Is there a proper way to avoid the zeroing of the uncertainty of a measurement that is general enough to replace zero(T) in such a function body?


As a side note, is this expected?

julia> x = measurement(1.0,0.1)
1.0 ± 0.1

julia> y = deepcopy(x)
1.0 ± 0.1

julia> x - y 
0.0 ± 0.0

# I was expecting this as a result:
julia> x = measurement(1.0,0.1)
1.0 ± 0.1

julia> y = measurement(1.0,0.1)
1.0 ± 0.1

julia> x - y
0.0 ± 0.14



Something like this?

julia> tare(x) = x - Measurements.value(x)
tare (generic function with 1 method)

julia> tare.(x)
2-element SVector{2, Measurement{Float64}} with indices SOneTo(2):
 0.0 ± 0.1
 0.0 ± 0.1

Regarding this side note: yes, that is expected. The way that Measurements.jl is able to distinguish between

julia> measurement(1.0, 0.1) - measurement(1.0, 0.1)
0.0 ± 0.14

and

julia> let x = measurement(1.0, 0.1)
           x - x
       end
0.0 ± 0.0

is by associating every Measurement with a tag that tells it about the lineage of that object so that it knows how to deal with correlations. When you deepcopy(x), you also copy the tag.
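You can see the tag mechanism at work directly (a sketch that peeks at the internal `tag` field, so it relies on implementation details):

```julia
using Measurements

x = measurement(1.0, 0.1)
y = deepcopy(x)

# deepcopy preserves the internal tag, so x and y are treated as the
# same (fully correlated) quantity and their errors cancel exactly:
x.tag == y.tag   # true
(x - y).err      # 0.0

# A freshly constructed measurement gets a new tag and is independent,
# so the uncertainties add in quadrature:
z = measurement(1.0, 0.1)
x.tag == z.tag   # false
(x - z).err      # ≈ 0.141
```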


Good idea and nice name. As a small tweak, I’d suggest something like

tare(x::Measurement{T}) where {T} = zero(T) ± x.err
tare(x::Number) = zero(x)
tare(x::AbstractArray) = tare.(x)

which should be more efficient and automatically recurse into arrays.
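For instance, with those definitions in scope (a quick check using a plain Vector, so StaticArrays is not required):

```julia
using Measurements

tare(x::Measurement{T}) where {T} = zero(T) ± x.err
tare(x::Number) = zero(x)
tare(x::AbstractArray) = tare.(x)

# Values are zeroed while the uncertainties are kept:
tare([1.0 ± 0.1, 2.0 ± 0.2])

# The Number fallback also handles ordinary numbers:
tare([1, 2, 3])   # [0, 0, 0]
```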


Uhm, nice idea. Then if I define

tare(x::Measurement) = x - Measurements.value(x)
#and
tare(x) = zero(x)

I guess the code will still work for things that are not measurements.

Will try that :)

edit: one problem is that I call zero(...) in more than one place. Will Measurements still work (produce correct results) if I commit a small type piracy and define:

Base.zero(x::Measurement{T}) where T = zero(T) ± x.err

?

I just tried this, and it caused the package's tests to fail.


I think having zero ignore the uncertainty part is just semantically incorrect. If you look at the docs for zero it says

help?> zero
search: zero zeros ZeroPivotException iszero set_zero_subnormals get_zero_subnormals count_zeros

  zero(x)
  zero(::Type)

  Get the additive identity element for the type of x (x can also specify the type itself).

  See also iszero, one, oneunit, oftype.

but what you want here is not the additive identity:

julia> x = 1 ± 0.1
1.0 ± 0.1

julia> x + tare(x)
1.0 ± 0.14

so it’s not respecting the interface that zero is supposed to respect.
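To make the contract violation concrete, here is a quick check (a sketch, using the independent-copy version of `tare` suggested earlier in the thread):

```julia
using Measurements

tare(x::Measurement{T}) where {T} = zero(T) ± x.err

x = 1.0 ± 0.1

# zero really is an additive identity: adding it leaves the error unchanged.
(x + zero(x)).err   # 0.1

# tare is not: adding tare(x) inflates the uncertainty (quadrature sum),
# so it cannot replace zero without breaking zero's contract.
(x + tare(x)).err   # ≈ 0.141
```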


I want to propagate the uncertainty of the positions of particles in a simulation. At each time step, I compute the forces, a computation that starts with setting the forces to zero.

Is there a proper way to avoid the zeroing of the uncertainty of a measurement that is general enough to replace zero(T) in such a function body?

Not sure if this is the proper way for this particular function body, but how about (extended) Kalman filtering (the measurement update)? Maybe that has a use case in the described example.


Maybe I can ask something more elementary. I am just writing educational material, so this is not a case of trying very sophisticated stuff.

This image:

(how to properly embed an animated gif here?)

shows the trajectory of planets around the sun, and the bars are the error propagated by Measurements assuming an initial uncertainty on the planet positions.

I was initially expecting that the uncertainty would only increase, but it oscillates, and is zero when the forces are perpendicular to the velocity. That is why I think that the uncertainty is not correctly propagated here.

If anyone understands this more clearly on the conceptual level, I will also be grateful.


Measurements.jl uses linear error propagation, which can be a poor approximation for nonlinear dynamics like this. You're better served by something like MonteCarloMeasurements.jl; see this pendulum example: Examples · MonteCarloMeasurements Documentation


That worked, in some sense. Unfortunately, the simulation became prohibitively slow (and it slows down further as the errors increase, apparently).

Any idea why this is allocating?

julia> using StaticArrays, Measurements

julia> struct Point2D{T} <: FieldVector{2,T}
           x::T
           y::T
       end

julia> x = Point2D(1.0±0.1,2.0±0.2)
2-element Point2D{Measurement{Float64}} with indices SOneTo(2):
 1.0 ± 0.1
 2.0 ± 0.2

julia> function tare(p::Point2D{Measurement{T}}) where T
          return Point2D{Measurement{T}}( zero(T) ± p.x.err, zero(T) ± p.y.err )
       end
tare (generic function with 1 method)

julia> tare(x)
2-element Point2D{Measurement{Float64}} with indices SOneTo(2):
 0.0 ± 0.1
 0.0 ± 0.2

julia> using BenchmarkTools

julia> @btime tare($x)
  20.113 ns (2 allocations: 96 bytes)
2-element Point2D{Measurement{Float64}} with indices SOneTo(2):
 0.0 ± 0.1
 0.0 ± 0.2


Actually, it seems that just creating the measurements allocates:

julia> @btime Point2D(1.0±0.1,2.0±0.2)
  18.890 ns (2 allocations: 96 bytes)
2-element Point2D{Measurement{Float64}} with indices SOneTo(2):
 1.0 ± 0.1
 2.0 ± 0.2

julia> @btime 1.0±0.1
  7.162 ns (1 allocation: 48 bytes)
1.0 ± 0.1

strange?

edit: I checked now that the Measurement type carries not only the data but also other fields, including derivatives. Any chance this becomes fast enough for at least a simple particle simulation with a dozen particles?

You may want to read about linear error propagation: Appendix · Measurements. In a nutshell, the error is proportional to the first partial derivative (hence the name “linear”: it’s a linear approximation) of the function with respect to the variable. If the derivative is zero (or very small, on the relevant scale), the propagated uncertainty is zero (or very small) as well. Which is very likely what you have here.
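In symbols, for y = f(x) linear propagation gives σ_y ≈ |f′(x)| · σ_x. A minimal check with sin (a sketch):

```julia
using Measurements

x = measurement(1.0, 0.1)
y = sin(x)

# The propagated uncertainty is |d sin/dx| * σ_x = |cos(1.0)| * 0.1:
y.err                 # ≈ 0.054
abs(cos(1.0)) * 0.1   # ≈ 0.054
```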

Correctly propagating uncertainties all the way down is expensive.


Thanks. I think I have more of a conceptual problem here. Consider this simple example of a particle subject to a harmonic potential F = -kx, with k=1 here.

julia> using Measurements, Plots

julia> function trajectory(x,v,dt)
         traj = [ x ]
         for i in 1:1000
            force = -x
            x = x + v*dt + force*dt^2/2
            v = v + force*dt
            push!(traj, x)
         end
         return traj
       end
trajectory (generic function with 2 methods)

julia> traj = trajectory( 1. ± 0.1, 0. ± 0., 0.01);

julia> plot(layout=(1,2))
       
julia> plot!(eachindex(traj),[ x.val for x in traj ],label="x",subplot=1)

julia> plot!(eachindex(traj),[ x.err for x in traj ],label="error",subplot=2)

I get an oscillatory behavior for the error:

Does that make sense? Shouldn’t the errors accumulate on x, always increasing?

Edit:

I think the problem is in the fact that the copies of the variables are not independent:

What happens:

julia> x = 1. ± 0.1
1.0 ± 0.1

julia> force = -x
-1.0 ± 0.1

julia> x = x + force
0.0 ± 0.0

What I expected:

julia> x = 1. ± 0.1
1.0 ± 0.1

julia> force = -1.0 ± 0.1
-1.0 ± 0.1

julia> x = x + force
0.0 ± 0.14

Is there a way to deal with that?

My original questions are, therefore, nonsense. This is the cause of the behavior I’m observing in my simulation.

edit2: Maybe this is simply correct? Since the force depends on x, the errors can cancel. Hmm... interesting :). Although that seems quite unnatural for the dynamics of any system starting from a position that has some uncertainty.
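The cancellation can be seen explicitly with Measurements.derivative (a quick check):

```julia
using Measurements

x = 1.0 ± 0.1
force = -x          # fully correlated with x
y = x + force       # equivalent to x - x

# The derivative of y with respect to x is 1 + (-1) = 0, so linear
# propagation assigns zero uncertainty to the result:
Measurements.derivative(y, x)   # 0.0
y.err                           # 0.0
```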

By the way, I do not see any difference with MonteCarloMeasurements in this case.

julia> x = 0:0.1:10;

julia> t = x .± (x ./ 100);

julia> y = sin.(t);

julia> plot(y)

Here I simulated a simple case where the errors of the time vector t are increasing: the propagated errors in sin.(t) are also increasing, but they are modulated by the derivative of the sine function. The idea is that where the function moves slowly, the derivative is small and the uncertainty about the position is small as well. This is simply how linear error propagation theory works.

The only thing I find strange in your plot is that the error looks shifted compared to what I’d expect: the derivative, and hence the error, should go to zero when x goes to ±1. But maybe I’m missing something.


To deal with what? Are you expecting x - x not to be exactly zero? If you’re ok with x - x not being zero, then you’d be ok with x + x not being equal to 2 * x, and all sorts of meaningless results?

@giordano, thanks for the insights. Clearly everything is correct. It was, however, unnatural for me.

What I expected: I expected that the error would only increase. But, putting it more precisely, I expected that the error reflected somehow the extreme behaviors that the particle may have if the trajectory was initiated from the extremes of the uncertainty interval.

Yet, that is just what it is:

As you see, the trajectories cross each other, and cross each other exactly where the propagated error is zero (the “blue error” is the difference of the two trajectories here).

So, everything is perfect and makes sense now for me. Living and learning :-).

(That said, just ignore the performance problems I mentioned: performance does suffer a bit, but it is not an issue here at all. It only became unacceptable when I tried to “fix” the error propagation through the computation of the force, and that was simply wrong.)


I agree linear error propagation can be counterintuitive, but that’s how it works.

In particular, “seeing” the partial derivatives may not be easy, especially if you get wrong what the partial derivative is computed with respect to, as I did above. Measurements.jl provides tools to inspect uncertainty components and derivatives (it basically implements its own simple autodiff engine). For example, with

julia> Measurements.uncertainty_components.(traj)
1001-element Vector{Dict{Tuple{Float64, Float64, UInt64}, Float64}}:
 Dict((1.0, 0.1, 0x0000000000000002) => 0.1)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.099995)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.09998000025)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.0999550017499875)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.09992000649987501)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.09987501749935002)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.09982003874762506)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.09975507524308774)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.09968013298285092)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.09959521896220278)
 ⋮
 Dict((1.0, 0.1, 0x0000000000000002) => 0.0905619112857766)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.09007938066876964)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.08958781798716495)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.08908727194562746)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.08857779214959333)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.08805942910035446)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.08753223419005307)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.08699625969658717)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.08645155877842693)
 Dict((1.0, 0.1, 0x0000000000000002) => 0.08589818546934296)

you can see that all elements of traj have a non-zero derivative only with respect to traj[1], which is the input x to the function trajectory and the only quantity with non-zero uncertainty (you can verify this by checking the tag of traj[1] with traj[1].tag). Then you can compute the derivative of traj with respect to traj[1] with

julia> Measurements.derivative.(traj, traj[1])
1001-element Vector{Float64}:
  1.0
  0.99995
  0.9998000025
  0.999550017499875
  0.99920006499875
  0.9987501749935
  0.9982003874762505
  0.9975507524308774
  0.9968013298285091
  0.9959521896220277
  ⋮
 -0.905619112857766
 -0.9007938066876964
 -0.8958781798716494
 -0.8908727194562746
 -0.8857779214959333
 -0.8805942910035445
 -0.8753223419005306
 -0.8699625969658716
 -0.8645155877842693
 -0.8589818546934295

and now this makes complete sense: the error is small when the derivative with respect to traj[1] is small, not the derivative with respect to time (the x-axis in your plot above), as I assumed above.


Nice.

The “extremes” case also provides an intuitive picture here. In the case of the planetary motion, what we are seeing is that the trajectories starting from slightly different initial points, since they are all elliptical, cross each other at periodic times. When they cross, the uncertainty in the initial point does not introduce any uncertainty in the position, since all predicted positions coincide.

It makes complete sense, and it is a result of the fact that these trajectories are periodic, not chaotic. For chaotic trajectories the uncertainties explode, because the possible trajectories diverge.


I know it was suggested earlier, but have you checked out MonteCarloMeasurements.jl, @lmiq? It’s really beautiful for this sort of thing.

julia> using MonteCarloMeasurements, Plots

julia> function trajectory(x,v,dt)
         traj = [ x ]
         for i in 1:1000
            force = -x
            x = x + v*dt + force*dt^2/2
            v = v + force*dt
            push!(traj, x)
         end
         return traj
       end
trajectory (generic function with 1 method)

julia> traj = trajectory( 1. ± 0.1, 0. ± 0., 0.01);

julia> plot(traj)

gives
(animated plot of the trajectory)

and

julia> plot(layout=(1,2))

julia> plot!(eachindex(traj),[ mean(x.particles) for x in traj ],label="x",subplot=1)

julia> plot!(eachindex(traj),[ std(x.particles) for x in traj ],label="error",subplot=2)

gives
(plot of the mean and the error)


I had not tried plot(traj) :)

Beautiful, indeed. The error in this case appears to be identical to that of Measurements, though.