I am currently working on some tests, and for some algorithms I compare the result to a known solution (analytical, or obtained via another algorithm).
Now when trying to optimise, for example, the speed of an algorithm, it might lose a little exactness, so such a test might fail.
Is there a good way to know by how much it fails?
Assume `a` and `b` are results of algorithms computing the same quantity, and that the tests are part of a test set of maybe 50 tests.
```julia
julia> a = 0.0
0.0

julia> b = 1e-7
1.0e-7

julia> @test isapprox(a, b; atol=1e-7)
Test Passed

julia> @test isapprox(a, b; atol=1e-8)
Test Failed at REPL:1
  Expression: isapprox(a, b; atol = 1.0e-8)
   Evaluated: isapprox(0.0, 1.0e-7; atol = 1.0e-8)
ERROR: There was an error during testing
```
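The only workaround I currently see is to compute the deviation manually next to the test. A minimal sketch (the helper name `report_approx` is made up, not part of Test.jl):

```julia
# Hypothetical helper: behaves like isapprox, but prints the actual
# deviation whenever the tolerance is not met, so a failing @test
# comes with the "by how much" information.
function report_approx(a, b; atol=1e-8)
    ok = isapprox(a, b; atol=atol)
    if !ok
        println("isapprox failed: |a - b| = ", abs(a - b), " > atol = ", atol)
    end
    return ok
end

report_approx(0.0, 1e-7; atol=1e-8)  # prints the deviation and returns false
```

One could then write `@test report_approx(a, b; atol=1e-8)`, but that feels like working around `@test` rather than with it.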
In such a scenario it would be great to have something like a reason, i.e. what the actual value or difference was. A precise example is what we did in our package with
is_point (Basic functions · ManifoldsBase.jl), which can be set to throw a `DomainError` that provides the reason why something is not a point.
Then for example
```julia
julia> using Manifolds, Test

julia> @test is_point(Sphere(2), [1.0001, 0.0, 0.0], true) # this point is not of norm 1
Error During Test at REPL:1
  Test threw exception
  Expression: is_point(Sphere(2), [1.0001, 0.0, 0.0], true)
  DomainError with 1.0001:
  The point [1.0001, 0.0, 0.0] does not lie on the Sphere(2, ℝ) since its norm is not 1.
  Stacktrace:
  [...]
ERROR: There was an error during testing

julia> @test is_point(Sphere(2), [1.0, 0.0, 0.0], true) # this point is of norm 1, so the test is fine
Test Passed
```
where I do not necessarily need the (here omitted) stacktrace, but I find the information provided in the error message very helpful for debugging tests – that is, knowing why or by how much it failed.
Would something like that be possible here as well? Probably only by implementing one's own test macro?
Edit: Of course, instead of an error, some printed information would also be fine with me, as long as one can get more details about why a test failed (e.g. by how much).
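To illustrate what I mean, here is a hand-rolled checker mimicking the `is_point` behaviour (purely illustrative; `check_approx` and its `throw_error` keyword are names I made up):

```julia
# Illustrative only: mimic is_point's error mode for numerical comparisons.
# With throw_error=true, a DomainError states by how much atol was exceeded;
# otherwise the function just returns a Bool usable inside @test.
function check_approx(a, b; atol=1e-8, throw_error::Bool=false)
    d = abs(a - b)
    if d > atol
        throw_error && throw(DomainError(d,
            "The values differ by $d, which exceeds atol = $atol."))
        return false
    end
    return true
end

check_approx(0.0, 1e-7; atol=1e-8)                    # returns false, silently
# check_approx(0.0, 1e-7; atol=1e-8, throw_error=true) # throws a DomainError with the deviation
```

Having the deviation reported like this, either printed or in the error, is exactly the kind of information I would like to see for failing `isapprox` tests.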