# Approximate equality

I don’t know what problem this would actually solve. People doing x ≈ 0 in practice are invariably like the original poster: they are expecting a tolerance much much greater than realmin.

That is, they are expecting ≈ to either magically know an absolute tolerance that is appropriate to how they computed x (which is impossible) or they are expecting ≈ to use an absolute tolerance assuming all quantities are of order unity (which is wrong). They just need to learn to pass an absolute tolerance for comparison of numbers to zero, or to simply check abs(x) ≤ atol.
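For concreteness, a minimal sketch of both idioms (the `1e-8` scale here is purely illustrative; the right value depends on how `x` was computed):

```julia
x = 1e-10  # some small residual from a computation

# Option 1: pass an explicit absolute tolerance to isapprox
isapprox(x, 0; atol=1e-8)   # true

# Option 2: just compare the magnitude directly
abs(x) <= 1e-8              # true

# Without atol, the default (purely relative) comparison rejects it
x ≈ 0                       # false
```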

Using realmin here makes isapprox harder to understand (now you’re asking the reader to know what realmin and subnormal numbers mean) for no benefit.

7 Likes

I agree entirely. Better to warn or error that it is invalid, and suggest using norm(x) < tol, or, if they are doing a - b ≈ 0, that they should instead use a ≈ b.

This is especially important to warn them about, because a whole bunch of code could be easily reorganized… People are in the habit of writing code like

```julia
error = abs(a - b)
converged = error ≈ 0
```


I see that many smart people have posted opinions on this site about how isapprox ought to behave, and the opinions are mutually contradictory. This is strong evidence that this function should be dropped from Base. There is no such function in Matlab, and I have never heard anyone express a desire for it to be added to Matlab.

Let me ask the following: is there any author of a serious numerical package reading this discussion who uses isapprox in his/her code?

What, why? The relative tolerance is uncontroversial. The absolute tolerance choice is justified, though jarring at first. With @test and atol it’s a very nice convenience.
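A sketch of that convenience with the modern `Test` stdlib (tolerances are illustrative):

```julia
using Test

# Well-scaled results need no tolerance at all: the default rtol handles them
@test 1.0 + 1e-12 ≈ 1.0

# Near zero, supply an absolute tolerance appropriate to the computation
@test 1e-10 ≈ 0 atol=1e-8
```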

MATLAB packages typically don’t do a bunch of unit tests and CI, so they have less need for easy accuracy testing. That’s largely due to the fact that CI testing a language which needs a license is… (how do you even do it?). Everyone has to write tests in Julia in order to register a package, and if you do floating-point calculations, you will want isapprox.

Yes, in DiffEq’s tests, because you cannot test floating-point calculations for exact equality in anything serious.

8 Likes

I think it is used in test cases more than in the middle of algorithms? I really like having the function… I just want it to fail or warn clearly, or not even compile, if it isn’t doing what the user thinks it is.

I don’t know what counts as serious, but among the packages I have installed,

```
$ grep isapprox ~/.julia/v0.6/**/*.jl | wc -l
1442
```

Most of these are tests. So I would argue that it is useful.

3 Likes

That should be quite hard to do consistently, until LLVM implements an interface for mind reading.

4 Likes

Actually, it’s there in Matlab, and it’s called verifyEqual (tolerances are optional parameters). Also intended for CI unit testing. Edit: as Chris points out, CI is awkward for Matlab.

2 Likes

But isn’t the issue just the comparison when one of the things is 0? Everything else is mathematically sound.

Yes, I have used it. Try out the Matlab unit testing framework for some good head smashing on your computer.

Just curious, were you able to set it up with Travis or AppVeyor? I’ve never seen that.

I agree with @ChrisRackauckas in that the relative case is very sensible, both in light of floating-point precision and, as I argued before, because it is consistent with how physicists perceive what approximation is. We should be careful with adding something like the absolute case (comparison with zero). I would say that one should do isapprox(myvar + 1, 1) (or replace 1 with something that is sensible in that case). My current point of view is not to change the behaviour at all.

Never bothered… would need to have a custom docker image for it to be especially useful for CI on GitHub, due to licensing. One of the MANY reasons to use Julia.

But why is 1 special as a reference scale? Better to just throw an assert or an ugly warning.

Well, atol=0 is essentially the same as throwing an error if used in a test, since that will fail unless you have equality when near zero. You can then raise the tolerance so the test passes if you want.

Moreover: there is a discontinuity in the comparison as the minimum of the two numbers goes towards zero.

As I said above: this is not a good comparison for all cases, but nothing is. It is at least conceptually clear and has a simple mental model. Also, it is easy to customize: setting atol and rtol covers a lot of the usual cases. I’m curious what you would replace it with.

As was pointed out above: just set atol if you want to compare near-zeros. An update to the simple statistic above:

```
$ grep isapprox ~/.julia/v0.6/**/src/*.jl | wc -l
137
$ grep isapprox ~/.julia/v0.6/**/test/*.jl | wc -l
1284
```


so the majority are using it for tests, for which it is fine that isapprox(0, ϵ) is rejected. Then you just set atol.

I think the basic issue is that many people don’t have a good mental model for floating-point comparisons. (This seems to be one of the eternal verities of floating-point arithmetic, not specific to Julia.) Once the issue is explained, objections to the atol=0 default almost invariably melt away.
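A quick illustration of why the purely relative default is defensible (values are illustrative):

```julia
# With the default atol = 0, ≈ is purely relative, so it is scale-invariant:
1.0 + 1e-12 ≈ 1.0            # true
1e6 * (1.0 + 1e-12) ≈ 1e6    # still true at a very different scale

# Any nonzero default atol would have to pick a scale, and would silently be
# wrong for quantities much smaller than that scale:
1e-300 ≈ 0.0                 # false: no tolerance can be inferred here
```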

It’s important to have this in Base for testing, and is also important because it is so easy to get wrong (especially in the array case, where you really want to use some norm of the whole array as a reference magnitude in most applications, but people’s first attempts almost always involve elementwise comparisons).
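A small sketch of the array case (illustrative values; `LinearAlgebra` loaded for the norm-based comparison):

```julia
using LinearAlgebra

x = [1.0, 1e-10]
y = [1.0, 0.0]

# Array ≈ compares norm(x - y) against the norms of the whole arrays,
# so a tiny residual in one entry is judged relative to the array's scale:
x ≈ y         # true

# A naive elementwise comparison fails, because 1e-10 ≈ 0.0 is false:
all(x .≈ y)   # false
```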

16 Likes

I think the issue here even goes back to people (myself certainly included) who weren’t mentally thinking about what a ≈ b means properly. Undergraduates and RAs, who are often the ones writing test cases, are also unlikely to think through the nuances of relative comparisons and will probably just pull a tolerance out of thin air. One benefit of the current ≈ is that it does the relative tolerance calculation for people properly.

From the mental model of this, it seems that it is
a \approx b\quad \text{iff} \quad \frac{a}{b}\approx 1 \approx \frac{b}{a}

If this is indeed true, then the real issue is not even floating-point math, but rather that this is undefined if a=0 or b=0. We wouldn’t let a division by 0 be silently ignored, so why would we here?

As for redefining this as
a \approx b\quad \text{iff} \quad \frac{a+ \mathbf{1}(a = 0 || b = 0)}{b + \mathbf{1}(a = 0 || b = 0)} \approx 1
there is an ugly discontinuity in whether the operation is true as a \to 0 or b \to 0.

More importantly, I am betting that 90% of the uses of a ≈ 0 come down to two cases (this is a dogmatic prior, no evidence!):

1. People get in the habit of checking scalars with a - b ≈ 0 because they used to do abs(a - b) < tol. Here they could just as easily have done a ≈ b with a small code rearrangement, which might even be cleaner notation.
2. Testing whether two vectors are similar by checking ||A - B||_K \approx 0 for some norm K, calling it as norm(A-B,Inf)≈0 or something like that. But the fallback for abstract arrays with isapprox already does a norm-based comparison. So in that case I think the warning should also be to call it with a ≈ b for vectors if that norm is good enough. It will dispatch to the correct array method.
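Both rearrangements can be sketched as follows (values and tolerances are illustrative):

```julia
using LinearAlgebra

a, b = 1.0, 1.000000001

# Case 1: instead of the old habit...
abs(a - b) < 1e-8   # true, but needs a hand-picked tolerance
a - b ≈ 0           # false: a relative tolerance is meaningless at 0

# ...rearrange so the relative comparison has a scale to work with:
a ≈ b               # true

# Case 2: the same rearrangement for arrays, instead of norm(A - B) ≈ 0:
A = [1.0, 2.0]
B = [1.0 + 1e-12, 2.0]
A ≈ B               # true: uses the norm-based array comparison
```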

Here is a starting point for an isapprox that could be useful (the array version doesn’t need to change):

```julia
function isapprox(x::Number, y::Number; rtol::Real=rtoldefault(x, y), atol::Real=0, nans::Bool=false)
    if x != y && (x == zero(x) || y == zero(y))
        warn("isapprox(x, y) cannot be called with x == 0 or y == 0. If you are using isapprox(x - y, 0) or isapprox(norm(x - y), 0), rearrange the code to isapprox(x, y) for either scalars or vectors.")
        throw(DomainError())
    else
        return x == y ||
            (isfinite(x) && isfinite(y) && abs(x - y) <= atol + rtol*max(abs(x), abs(y))) ||
            (nans && isnan(x) && isnan(y))
    end
end
```


Try it with

```julia
isapprox(1.0, 1.000000001)                        # true: within the default rtol
isapprox(1.0 - 1.000000001, 0)                    # warns and throws DomainError
isapprox([1.0; 0.1], [1.00000001; 0.10000001])    # true: array version unchanged
```