I'm guessing that what you mean by "safe" is: would it give the same answer as it would in exact (infinite-precision) arithmetic? The answer, of course, is "it depends", but the most general answer is "no".
That is, suppose you are comparing two numbers `x` and `y` that are computed by two different floating-point algorithms, and you want a comparison function `is_same(x, y)` that returns `true` if you would have `x == y` in infinite precision.
Suppose that your algorithms are accurate to 8 significant digits. Then you could do `isapprox(x, y, rtol=1e-8)`. Or you could do `round(x, sigdigits=8) == round(y, sigdigits=8)`, which is almost equivalent but much slower (about 100× slower on my computer!).
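For instance (a small sketch, using made-up values for `x` and `y` that agree to well within 8 significant digits):

```julia
x = 0.1 + 0.2   # 0.30000000000000004 in Float64
y = 0.3

isapprox(x, y, rtol=1e-8)                        # true: relative comparison to ~8 digits
round(x, sigdigits=8) == round(y, sigdigits=8)   # also true, but much slower
```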
Of course, to do this, you need to have a rough sense of the accuracy of your algorithms. If it is a single scalar operation like `0.1 + 0.2`, then it should be accurate to nearly machine precision, but for more complicated algorithms error analysis is much trickier. The default in `isapprox` (the `≈` operator) is to compare about half of the significant digits in the current precision, which is reasonable for many algorithms (losing more than half of the significant digits means you have a pretty inaccurate calculation), but is obviously not universally appropriate.
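For example, with the default relative tolerance (`sqrt(eps(Float64))`, about `1.5e-8`, for `Float64`):

```julia
0.1 + 0.2 == 0.3          # false: exact equality is foiled by rounding error
0.1 + 0.2 ≈ 0.3           # true: ≈ compares roughly half of the significant digits
isapprox(0.1 + 0.2, 0.3)  # equivalent to the ≈ line above
```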
Naturally, be aware that such approximate comparisons may give false positives (returning `true` for two values that are supposed to be distinct in infinite precision, but differ by a very small amount).
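For example (a hypothetical pair of values that are genuinely distinct but closer than the default tolerance):

```julia
a = 1.0
b = 1.0 + 1e-12   # a distinct value, but within the default rtol of ≈
a == b            # false
a ≈ b             # true: a false positive if the two were meant to differ
```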
Your suggestion, `round(x, digits=8) == round(y, digits=8)`, is roughly equivalent to (but vastly slower than) `isapprox(x, y, atol=1e-8)`: an absolute tolerance rather than a relative tolerance. Usually, a relative tolerance is more appropriate in floating-point calculations, because relative tolerances are scale invariant.
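To illustrate the difference (a sketch with arbitrary values; note that passing `atol` explicitly disables the default relative tolerance):

```julia
# Both pairs differ by the same *relative* error of about 1e-10:
isapprox(1.0, 1.0 + 1e-10, rtol=1e-8)   # true
isapprox(1e6, 1e6 + 1e-4, rtol=1e-8)    # true: rtol is scale invariant

isapprox(1.0, 1.0 + 1e-10, atol=1e-8)   # true
isapprox(1e6, 1e6 + 1e-4, atol=1e-8)    # false: atol breaks down at larger scales
```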
If you want a rigorous test of whether two values might be the same, you can use Interval Arithmetic and implement `might_be_same(x, y) = !isdisjoint(x, y)`. This might give you false positives, but will never give false negatives.
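A minimal sketch, assuming the IntervalArithmetic.jl package (the exact constructor and the disjointness function name, e.g. `isdisjoint` vs. `isdisjoint_interval`, vary between package versions):

```julia
using IntervalArithmetic

# Each value is carried as an interval that rigorously encloses its rounding error,
# so two quantities "might be the same" exactly when their enclosures overlap:
might_be_same(x, y) = !isdisjoint(x, y)

x = interval(0.1) + interval(0.2)   # enclosure of the sum, with outward rounding
y = interval(0.3)
might_be_same(x, y)                 # true: the enclosures overlap
```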