Why assuming floating-point rounding errors are random is bad
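A quick way to see the problem is to compare a long naive sum against what a "random errors" model would predict. This is an illustrative sketch, not code from the article: it assumes that if each rounding error in an n-term sum were an independent, zero-mean perturbation of at most half an ulp, the accumulated error would grow roughly like sqrt(n) half-ulps. Repeatedly adding 0.1 (whose binary representation error has a fixed sign, so the per-step errors are correlated) produces an error far larger than that model allows.

```python
import math

# Naively sum 0.1 one million times, accumulating rounding error at each step.
n = 1_000_000
total = 0.0
for _ in range(n):
    total += 0.1

# math.fsum computes the correctly rounded sum of the same values,
# so the difference isolates the accumulated rounding error.
exact = math.fsum([0.1] * n)
error = abs(total - exact)

# "Random walk" prediction: sqrt(n) half-ulps at the final magnitude.
# If rounding errors really were independent and zero-mean, the error
# should be on this order; in practice it is orders of magnitude larger.
random_model = math.sqrt(n) * math.ulp(exact) / 2

print(f"actual error: {error:.3e}")
print(f"random-model error: {random_model:.3e}")
```

On a typical IEEE 754 double setup, the actual error is on the order of 1e-6 while the random-walk model predicts roughly 1e-8, because the biased per-term representation error of 0.1 accumulates instead of cancelling.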