Machine Precision ... and 0 in Julia


I have a question about whether there is a way around the following problem. If I take different linear combinations of the same vector that should each reduce to that vector, I should get identical results. However, applying the linear combination introduces a small, almost-zero error, which means the two results compare as unequal. Specifically, in the code below v1 and v2 should be exactly the same, but they are not.

x = rand(100);
v1 = 0.1*x + (1-0.1)*x;
v2 = 0.8*x + (1-0.8)*x;
println(maximum(abs.(v1 .- v2)))

This is expected; it's floating-point precision. You should compare with isapprox (the ≈ operator, typed \approx in the REPL) instead of ==.
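A minimal sketch of the approximate comparison, using the code from the question:

```julia
x = rand(100)
v1 = 0.1*x + (1 - 0.1)*x
v2 = 0.8*x + (1 - 0.8)*x

v1 == v2          # may be false: elementwise rounding differs
v1 ≈ v2           # true: isapprox allows a small relative tolerance
isapprox(v1, v2)  # same as v1 ≈ v2
```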


For a nice write-up of this, see PSA: floating-point arithmetic


0.1, 0.2, 0.8, and 0.9 cannot be represented exactly in binary floating point. If you change to

x = rand(100);
v1 = 0.25*x + (1-0.25)*x;
v2 = 0.5*x + (1-0.5)*x;
println(maximum(abs.(v1 .- v2)))

you’ll get what you expect because 0.25 and 0.5 can be expressed exactly.
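You can see the same effect with plain scalar arithmetic in the REPL; a quick sketch:

```julia
# 0.1 and 0.2 each round to a nearby binary fraction, so their
# sum need not equal the value that 0.3 rounds to:
0.1 + 0.2 == 0.3    # false

# 0.25, 0.5, and 0.75 (1/4, 1/2, 3/4) are exact binary
# fractions, so the arithmetic is exact:
0.25 + 0.5 == 0.75  # true
```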

This will be true in any language that uses your computer's floating-point hardware.

You can see the repeating form of the mantissa with bitstring(0.1), and compare it to the exact, terminating form shown by bitstring(0.25).
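For example (the comments describe the bit patterns, assuming Float64, i.e. IEEE binary64: 1 sign bit, 11 exponent bits, 52 fraction bits):

```julia
# 0.1 has a repeating binary fraction (…1001 1001…), which is
# rounded in the last bit to fit the 52-bit fraction field:
println(bitstring(0.1))

# 0.25 is exactly 2^-2, so its fraction bits are all zero:
println(bitstring(0.25))
```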