In particular, `isapprox(a, b, atol=1e-2)` checks whether `norm(a - b) ≤ 1e-2`, where `norm` is the default Euclidean norm. It sounds like what you want (or think you want) is the “infinity norm”. If you define `norminf(x) = maximum(abs, x)` (or use `norm(x, Inf)` from the `LinearAlgebra` standard library), then you can pass the `norm=norminf` keyword as @Tamas_Papp suggested.
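For concreteness, here is a sketch (with made-up vectors) of how the choice of norm changes the outcome, using the `norminf` helper defined above:

```julia
using LinearAlgebra

a = fill(1.0, 1000)
b = a .+ 1e-3                  # every component differs by 1e-3

norminf(x) = maximum(abs, x)   # infinity norm: the largest |component|

isapprox(a, b, atol=1e-2)                # false: Euclidean norm(a - b) ≈ 0.0316
isapprox(a, b, atol=1e-2, norm=norminf)  # true:  norminf(a - b) ≈ 1e-3
```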
However, more generally I would tend to recommend not using `atol` in floating-point approximate-equality tests; the better choice is usually `rtol` (relative tolerance). For example, `@test x ≈ y rtol=0.05` (equivalently, `isapprox(x, y, rtol=0.05)`) tests whether `x` and `y` are within 5% of one another, in the sense that `norm(x - y) ≤ rtol*max(norm(x), norm(y))`. The reason to use a relative tolerance is that it is scale-invariant (“dimensionless”): multiplying both `x` and `y` by any factor (changing the “units”) will not require you to change the test. (Note in particular that this test will always pass when `x = 1.03y` as in your example, for any finite `x` and `y`.)
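A quick illustration with made-up numbers, using your `x = 1.03y` case:

```julia
y = 1.0
x = 1.03y                         # 3% relative difference

isapprox(x, y, rtol=0.05)         # true:  3% is within the 5% tolerance
isapprox(1e9x, 1e9y, rtol=0.05)   # true:  rescaling both values changes nothing
isapprox(x, y, atol=0.05)         # true here, but only by accident of scale…
isapprox(1e9x, 1e9y, atol=0.05)   # false: an absolute tolerance is not scale-invariant
```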
Moreover, all of the rules of floating-point arithmetic are designed to preserve relative accuracy, not absolute accuracy. For example, `x + y` in floating point is guaranteed to give the exact result up to a relative tolerance of the machine precision ε (`eps(Float64) = 2.220446049250313e-16` in double precision), not including overflow. (If you perform multiple addition operations, however, these errors can accumulate.) So it is much easier to select a reasonable `rtol` than a reasonable `atol`.
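A quick REPL sketch of why this makes `atol` awkward (values shown are for double precision):

```julia
0.1 + 0.2 == 0.3   # false: each operation is exact only up to relative error ε
0.1 + 0.2 ≈ 0.3    # true:  a relative tolerance absorbs the roundoff

eps(1.0)           # 2.220446049250313e-16: spacing of Float64 values near 1
eps(1e10)          # 1.9073486328125e-6:    spacing of Float64 values near 1e10
# The achievable absolute accuracy scales with the magnitude of the values,
# so a sensible atol depends on scale, whereas rtol does not.
```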
The `isapprox` function has a default `rtol` of `√ε`, which means that it checks whether about half of the significant digits match. This is a good default for most floating-point unit tests: coarse enough that it won’t give false negatives due to accumulated roundoff errors, but fine enough to catch most bugs that give the wrong answer (as opposed to bugs that simply exacerbate roundoff). Because of that, most floating-point unit tests can simply do `@test x ≈ y`.
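To make that default concrete (again just a sketch, with made-up values near 1):

```julia
using Test

sqrt(eps(Float64))         # 1.4901161193847656e-8: the default rtol for Float64

isapprox(1.0, 1.0 + 1e-9)  # true:  the values agree to ~9 significant digits
isapprox(1.0, 1.0 + 1e-7)  # false: only ~7 digits agree, likely a real bug
@test 1.0 ≈ 1.0 + 1e-9     # the usual form inside a test suite
```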