# Isapprox in tests and a more precise result / reason for failure

I am currently working on some tests. For some algorithms I compare the result to a known solution (analytical, or obtained via another algorithm).

Now when optimising an algorithm for speed, for example, it might lose a little exactness, so such a test might fail.
Is there a good way to know by how much it fails?

## Example

Imagine that `a` and `b` are results of algorithms computing the same quantity, and that the tests are within a test set of maybe 50 tests.

```julia
julia> a = 0.0
0.0

julia> b = 1e-7
1.0e-7

julia> @test isapprox(a, b; atol=1e-7)
Test Passed

julia> @test isapprox(a, b; atol=1e-8)
Test Failed at REPL[14]:1
  Expression: isapprox(a, b; atol = 1.0e-8)
   Evaluated: isapprox(0.0, 1.0e-7; atol = 1.0e-8)
ERROR: There was an error during testing
```


In such a scenario it would be great to get something like a reason, that is, what the actual deviation was. A precise example is what we did in our package with `is_point` (see Basic functions · ManifoldsBase.jl), which can be set to throw a domain error that provides the reason why something is not a point.

Then for example

```julia
julia> using Manifolds, Test

julia> @test is_point(Sphere(2), [1.0001, 0.0, 0.0], true) # this point is not of norm 1
Error During Test at REPL[18]:1
  Test threw exception
  Expression: is_point(Sphere(2), [1.0001, 0.0, 0.0], true)
  DomainError with 1.0001:
  The point [1.0001, 0.0, 0.0] does not lie on the Sphere(2, ℝ) since its norm is not 1.
  Stacktrace:
  [...]

ERROR: There was an error during testing

julia> @test is_point(Sphere(2), [1.0, 0.0, 0.0], true) # this point is of norm 1 so the test is fine
Test Passed
```


where I do not necessarily need the (here omitted) stacktrace, but I find the information provided in the error message very helpful for debugging tests – that is, knowing why or by how much it failed.

Would something like that be possible here as well? Probably only by implementing one's own `isapprox`?

edit: of course, instead of an error, some printed information would also be fine with me, as long as one can get more information about why a test failed (e.g. by how much).
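For illustration, the kind of behaviour I have in mind, mirroring the error switch of `is_point`, could look like this hypothetical wrapper (the name `isapprox_verbose` and its `throw_error` keyword are made up here, just a sketch):

```julia
using Test

# Hypothetical sketch: like isapprox, but can throw a DomainError
# that reports the actual deviation when the comparison fails.
function isapprox_verbose(a, b; atol=0.0, rtol=atol > 0 ? 0.0 : sqrt(eps(Float64)), throw_error=false)
    isapprox(a, b; atol=atol, rtol=rtol) && return true
    dev = abs(a - b)
    throw_error && throw(DomainError(dev,
        "The values $a and $b differ by $dev, which exceeds atol = $atol."))
    return false
end

isapprox_verbose(0.0, 1e-7; atol=1e-7)  # true
isapprox_verbose(0.0, 1e-7; atol=1e-8)  # false
# isapprox_verbose(0.0, 1e-7; atol=1e-8, throw_error=true) would throw the DomainError
```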

Just asking the obvious… this message already contains the values of a and b, so what is missing are the values $\Vert a - b \Vert$ and $\frac{\Vert a - b \Vert}{\max( \Vert a \Vert, \Vert b \Vert)}$ (for rtol), or are you looking for more?

Oh, yes, in this simple example one can see that, but both might be complex-valued matrices of size 10×10; then it is not really easy to see.


Mh, a very simple idea with zero setup would be this, but probably one doesn't want to write that much:

```julia
using LinearAlgebra, Test

A = rand(10, 10)
B = A .+ 1e-8

julia> @test norm(A - B) < 1e-8
Test Failed at REPL[20]:1
  Expression: norm(A - B) < 1.0e-8
   Evaluated: 1.0000000022891005e-7 < 1.0e-8
ERROR: There was an error during testing
```


Maybe a simple macro would do the job

```julia
macro test_isapprox(A, B, atol, rtol = 0.0)
    return :( @test norm($A - $B) < max($atol, $rtol * max(norm($A), norm($B))) )
end

julia> @test_isapprox A B 1e-7
Test Failed at REPL[79]:2
  Expression: norm(A - B) < max(1.0e-7, 0.0 * max(norm(A), norm(B)))
   Evaluated: 1.0000000022891005e-7 < 1.0e-7
ERROR: There was an error during testing
```


(I’m sure with more commitment one can also make `@test_isapprox A ≈ B atol = 1e-8` work)

EDIT: I replaced the previous completely wrong example with a hopefully correct one…
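One could also pull the deviation into a helper function, so that the failing comparison directly displays the computed norms; the name `deviation` here is just illustrative:

```julia
using LinearAlgebra, Test

# Illustrative helper: absolute and relative deviation between two arrays.
function deviation(a, b)
    absdev = norm(a - b)
    return (abs = absdev, rel = absdev / max(norm(a), norm(b)))
end

A = rand(10, 10)
B = A .+ 1e-8
d = deviation(A, B)
# On failure, the test output shows the actual value of d.abs,
# since it is an ordinary comparison of two numbers.
@test d.abs < 1e-6
```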


Ah, that looks like a nice first idea to solve this, yes, though then the original `≈` expression does not show up in the test output. I will think about that.

`@test a ≈ b` has special support for `≈`:

```julia
julia> @test 0.0 ≈ 1e-7 atol=1e-7
Test Passed

julia> @test 0.0 ≈ 1e-7 atol=1e-8
Test Failed at REPL[13]:1
  Expression: ≈(0.0, 1.0e-7, atol = 1.0e-8)
   Evaluated: 0.0 ≈ 1.0e-7 (atol=1.0e-8)
ERROR: There was an error during testing
```


It might be nice to update the @test macro to print out more information on an ≈ test failure, e.g. giving the absolute and/or relative error (for atol and/or rtol tests, respectively).
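Until `Test` itself reports that, one workaround (just a sketch) is to log the absolute and relative error right before the `≈` test:

```julia
using Test

a, b = 0.0, 1e-7
abs_err = abs(a - b)
rel_err = abs_err / max(abs(a), abs(b))
@info "approx check" abs_err rel_err  # prints both deviations, whether or not the test fails
@test a ≈ b atol = 1e-6
```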
