How to add a description to failed unit tests?

In our numerics package we rely on Julia’s Test for unit testing. Sometimes we have multiple values returned from a function which we would like to check separately (e.g., L2 errors for different variables). Conceptually they all belong to the same test set, but we would still like to be able to see which of the values failed. Is there a way to add a descriptive text to tests that gets displayed next to the failed test result that can achieve this?

Example
This is the current output for a failed test:

julia> using Test

julia> l2 = [0.1]; l2_measured = [0.11];

julia> @test isapprox(l2[1], l2_measured[1])
Test Failed at REPL[4]:1
  Expression: isapprox(l2[1], l2_measured[1])
   Evaluated: isapprox(0.1, 0.11)
ERROR: There was an error during testing

What we would like to see instead is something like this:

julia> @test isapprox(l2[1], l2_measured[1]) "L2 error: density"
Test Failed at REPL[4]:1
  Expression: isapprox(l2[1], l2_measured[1])
   Evaluated: isapprox(0.1, 0.11)
 Description: L2 error: density
ERROR: There was an error during testing

It seems that something like this is not officially supported yet, but maybe someone knows a good workaround? I already tried hijacking the @test macro by chaining expressions (e.g., !isempty("L2 error: density") && isapprox(...)), but that results in the expression/evaluated info no longer being shown.

Edit: Using a @testset for each sub-test is not a suitable approach, as described below.

Perhaps put it into a @testset

@testset "L2 density" begin
   @test isapprox(l2[1], l2_measured[1])
end

This would result in something like
"L2 density" Test Failed at ...

I thought about this too (should’ve put it in the question above), but it does not fit the bill:

  • it adds a lot of visual noise even in case of success
  • it defeats the purpose of separating tests into tests and test sets (by adding hundreds of @testsets with just a single test each), especially since the test is really "check the L2 error" and the information about which variable failed is only meant to help users track down errors faster

Thus I am still looking for alternatives (and/or community input whether this is something to write up as a feature request).

I was initially a little bit annoyed by this, but eventually I came around: in other languages I stopped using failure strings and started using more descriptive names instead.

Anyway, FWIW, here is a low-effort test macro with a failure string. You can probably whip up something better with a little more effort, maybe by wrapping @assert instead.

julia> macro testmsg(ex, str)
       quote
          try
             @test $ex
          catch e
             println($str)
          end
       end
       end
@testmsg (macro with 1 method)

julia> @testmsg 1==1 "Fail"
Test Passed

julia> @testmsg 1==2 "Fail"
Test Failed at none:4
  Expression: 1 == 2
   Evaluated: 1 == 2
Fail

What do you mean exactly by this? I don’t know how I would be able to use a descriptive name if most tests consist of floating point comparisons, as @test isapprox(...) is not very descriptive in itself, at least if you iterate over arrays.

One issue I see with your proposed snippet is that it probably disables the rest of the testing infrastructure, since it adds additional output and stops throwing an exception on test failure.
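To illustrate that concern (a minimal sketch; the "demo" testset name is made up): inside a @testset, a failing @test records a Fail instead of throwing, so a try/catch around it never fires, and only the outermost testset throws at the very end:

```julia
using Test

# Inside a @testset, a failing @test records a Fail rather than throwing,
# so the catch branch below is never reached and the message is lost.
caught = try
    @testset "demo" begin
        try
            @test 1 == 2
        catch
            println("never printed: the failure does not throw here")
        end
    end
    nothing
catch e
    e  # the outermost testset throws TestSetException at the end
end
```

So a try/catch-based @testmsg would only ever print its message when used outside of any @testset.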

I didn’t mean to imply that I’m certain you’ll find it satisfactory and I’m sorry if it came off that way. There is obviously no one size fits all for how to apply “started using more descriptive names”.

Just to make it clear there is no magic, here is one example with your MWE:

julia> l2_error_expected = [0.1];

julia> l2_error_measured = [0.11];

julia> @test l2_error_expected[1] ≈ l2_error_measured[1]
Test Failed at none:1
  Expression: l2_error_expected[1] ≈ l2_error_measured[1]
   Evaluated: 0.1 ≈ 0.11

To me this is pretty readable, and it has the advantage that it will look familiar to anyone who is used to the Julia testing framework.

One nifty thing that I at least discovered quite late, and which in some cases can mitigate the "too many testsets" issue, is that it is possible to loop inside testsets, generating one set per iteration.

julia> @testset "Dummy" begin
           @testset "test for $i" for i in 1:5
               @test min(3,i) == i
           end
       end

Test Summary: | Pass  Fail  Total
Dummy         |    3     2      5
  test for 1  |    1            1
  test for 2  |    1            1
  test for 3  |    1            1
  test for 4  |          1      1
  test for 5  |          1      1

# The outer testset suppresses visual noise if successful
julia> @testset "Dummy" begin
           @testset "test for $i" for i in 1:3
               @test min(3,i) == i
           end
       end
Test Summary: | Pass  Total
Dummy         |    3      3

Matter of taste ofc, but I find this visually appealing. May or may not apply in your case though.
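For completeness, here is how that loop pattern might look when mapped onto the L2-error example from the question (the variable names and data below are made up for illustration):

```julia
using Test

# Hypothetical data: expected vs. measured L2 errors for several variables.
vars        = ["density", "momentum", "energy"]
l2_expected = [0.1, 0.2, 0.3]
l2_measured = [0.1, 0.2, 0.3]

@testset "L2 errors" begin
    # One sub-testset per variable; its name shows up only on failure
    # (or in the per-set summary), which keeps successful runs quiet.
    @testset "L2 error: $name" for (name, e, m) in zip(vars, l2_expected, l2_measured)
        @test e ≈ m
    end
end
```

On failure, the summary line "L2 error: density" then points directly at the offending variable.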

W.r.t. the macro: yeah, you'd want to rethrow the caught error. I'm not sure whether that is enough to not muck up the test framework, as I haven't tested it.


Thank you for sharing your personal approach to mitigating the “too many testsets” issue. It is very usable in cases where you can re-arrange your test setup as a for loop. While I still do not think that this is worthy of being the “canonical” approach (I think Julia’s unit testing framework could do better), I’ll use this approach for now.
