Imposing deterministic results on parallel programs is very hard. It is not sloppy to be aware of your computer's error model and to program to it. Of course it's nice to avoid variability when you can, but eliminating it everywhere is a losing battle, so you might as well embrace it. The way I think about it is that every floating-point operation produces a random relative error of size eps(). Similarly, I think of solvers as giving a random result whose residual is smaller than the tolerance I give them. That's not completely accurate, but it's a good mental model.