I spent some time yesterday looking at the error bounds computed by Measurements.jl
in the notebook. I don’t think there is anything wrong here, at least not in the code of the package. It’s the linear error propagation method that sometimes gives unintuitive results: the propagated uncertainty is proportional to the first derivative with respect to the uncertain quantity, \sigma_f \approx |\partial f / \partial \alpha|\, \sigma_\alpha, so it shrinks to zero wherever that derivative vanishes.
As a simpler example, consider the function f(t, \alpha) = \sin(\alpha t), where \alpha is a quantity with a measurement error. The plot below shows the mean value of f(t, \alpha) with error bars, with the analytically computed upper and lower error curves superimposed (using the derivative f_{\alpha}(t, \alpha) = t \cos(\alpha t)).
This is the code to generate the plot:
using Measurements, Plots
f(t, α) = sin(α * t)
# Linear propagation by hand: |∂f/∂α| * σ_α = |t cos(α t)| * σ_α
# (named `err` rather than `error` to avoid shadowing Base.error)
err(t, α) = abs(t * cos(Measurements.value(α) * t)) * Measurements.uncertainty(α)
t = -pi:0.05:pi
α = 2.3 ± 0.45
plot(t, f.(t, α); linewidth = 3, label = "Value", alpha = 0.8, fillalpha = 0.1, size = (600, 400))
plot!(t, @.(Measurements.value(f(t, α)) + err(t, α)); linewidth = 3, label = "Upper error")
plot!(t, @.(Measurements.value(f(t, α)) - err(t, α)); linewidth = 3, label = "Lower error")
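As a quick sanity check (a standalone sketch that doesn’t need Measurements.jl, reusing the same numbers as above), the analytic derivative t \cos(\alpha t) can be compared against a central finite difference, and then evaluated at t = (\pi/2)/\alpha, where it should vanish:

```julia
# Standalone check of the propagated-error formula |∂f/∂α| σ_α,
# without Measurements.jl (α₀ and σα match the example above).
f(t, α) = sin(α * t)

α₀, σα = 2.3, 0.45

# Central finite difference in α vs. the analytic derivative t*cos(α t)
dfdα_fd(t; h = 1e-6) = (f(t, α₀ + h) - f(t, α₀ - h)) / (2h)
dfdα_an(t) = t * cos(α₀ * t)

@show dfdα_fd(1.0) ≈ dfdα_an(1.0)    # the two derivatives agree

# At the peak, α₀ * t = π/2, the derivative (and thus the linear error) vanishes
t_peak = (π / 2) / α₀
@show abs(dfdα_an(t_peak)) * σα      # numerically zero
```

So the narrow error band at the peak is exactly what the linear formula predicts.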
The “exposed” curve in the notebook shows a similar pattern, with the error becoming very small around the peak: most likely the derivatives with respect to the parameters \alpha, \beta and \gamma all become very small there.
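One way to see that the true spread is not actually zero there (a hypothetical sketch using only the standard library, with the same \alpha = 2.3 \pm 0.45 as in the toy example above, not the notebook’s actual model) is a small Monte Carlo check: sample \alpha from a normal distribution and measure the spread of f directly at the peak, where linear propagation reports essentially zero:

```julia
using Random, Statistics

# Monte Carlo estimate of the uncertainty of f(t, α) = sin(α t) at the peak,
# where the linear (first-derivative) estimate vanishes.
Random.seed!(42)

α₀, σα = 2.3, 0.45
t_peak = (π / 2) / α₀                          # α₀ * t_peak == π/2

linear = abs(t_peak * cos(α₀ * t_peak)) * σα   # linear propagation: ≈ 0

samples = [sin((α₀ + σα * randn()) * t_peak) for _ in 1:100_000]
mc = std(samples)                              # Monte Carlo spread: non-zero

@show linear mc
```

With these numbers the Monte Carlo spread comes out around 0.06 while the linear estimate is numerically zero; away from the extrema, where the derivative is non-zero, the two methods should agree much more closely. This is the usual limitation of first-order propagation near stationary points, not a bug in Measurements.jl.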