Mean of integers overflows - bug or expected behaviour?

My suggested solution does just that in a type-generic way: https://github.com/JuliaLang/Statistics.jl/issues/22#issuecomment-587119323

I don’t think the principle here is very difficult to understand: if the outputs are floating point, then you want a computation that is numerically stable and reasonably accurate in the floating-point sense, even if the inputs are integers. (Example: the norm of an integer array.) The current mean algorithm, which can give a large error for integer inputs regardless of the floating-point precision, is numerically unstable according to the formal definition.
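To illustrate the point (this is a minimal sketch of the idea, not the exact code from the linked comment; `naive_mean` and `stable_mean` are hypothetical names): the naive algorithm accumulates in the element type, so an `Int` array can wrap around, while the stable version promotes each element to floating point before summing.

```julia
# Naive mean: sums in the input element type, so Int inputs can overflow.
naive_mean(a) = sum(a) / length(a)

# Type-generic mean in the spirit of the linked proposal: dividing each
# element by 1 promotes it to the output (floating-point) type before
# accumulation, so the sum cannot overflow.
stable_mean(a) = sum(x -> x / 1, a) / length(a)

a = fill(typemax(Int), 4)
naive_mean(a)   # wraps around to a nonsensical negative result
stable_mean(a)  # ≈ 9.223372036854776e18, the correct mean
```

The same trick generalizes: `x / 1` gives `Float64` for `Int`, `Float32` for `Float32`, and so on, so the accumulation type always matches the type the result would have anyway.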
