Consider the following example:
```julia
using Test

function fail(fail::Bool)
    @test !fail
end

@testset "task_good" begin
    task = @async fail(false)
    fetch(task)
end

@testset "task_bad" begin
    task = @async fail(true)
    fetch(task)
end
```
While “task_bad” will correctly fail the test (the `@test` runs in the child task outside any testset, so its failure throws, and `fetch` rethrows that into the enclosing testset), “task_good” will succeed with the test summary reporting “No tests” — presumably because the testset stack is task-local, so the passing `@test` inside the child task is never recorded in the parent’s testset. I suppose this is expected behaviour. Now this example:
```julia
using Test

function fail(fail::Bool)
    @testset "task" begin
        @test !fail
    end
end

task1 = @async fail(false)
task2 = @async fail(true)
fetch(task1)
fetch(task2)
```
would print the results correctly, except that it will (understandably) intermix the two independent reports wildly. In this simple example one could synchronize the two tasks and be fine. But what if I have two tasks that are not independent of each other? Consider something like this:
```julia
using Test

c = Channel{Nothing}(1)

function fail1(fail::Bool)
    @testset "task" begin
        @test true
        put!(c, nothing)
        @test !fail
    end
end

function fail2(fail::Bool)
    @testset "task" begin
        @test true
        take!(c)
        @test !fail
    end
end

task1 = @async fail1(false)
task2 = @async fail2(true)
fetch(task2)
fetch(task1)
```
Now I obviously cannot run the two testsets independently. Of course, I can go back to something like the first version:
```julia
using Test

c = Channel{Nothing}(1)

function fail1(fail::Bool)
    @test true
    put!(c, nothing)
    @test !fail
end

function fail2(fail::Bool)
    @test true
    take!(c)
    @test !fail
end

@testset "task_good" begin
    task1 = @async fail1(false)
    task2 = @async fail2(false)
    fetch(task2)
    fetch(task1)
end

@testset "task_bad" begin
    task1 = @async fail1(true)
    task2 = @async fail2(true)
    fetch(task2)
    fetch(task1)
end
```
but then “task_good” will again succeed with “No tests”. So my question is: is this something I have to live with, or is there some other way to test scenarios where the tasks are not independent?
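For reference, the only workaround I have found so far is to keep every `@test` in the parent task and have the child tasks merely return the values to be checked. A minimal sketch of that idea, where `work1`/`work2` are hypothetical stand-ins for my real task bodies:

```julia
using Test

c = Channel{Nothing}(1)

# Hypothetical rework of fail1/fail2: the tasks still coordinate through
# the channel, but only *return* their results. Every @test stays in the
# parent task, so the results are recorded in the enclosing testset.
function work1()
    put!(c, nothing)
    return true          # stand-in for the condition fail1 would have tested
end

function work2()
    take!(c)
    return true          # stand-in for the condition fail2 would have tested
end

@testset "coordinated" begin
    task1 = @async work1()
    task2 = @async work2()
    @test fetch(task2)   # fetch returns each task's result to the parent,
    @test fetch(task1)   # which does the actual testing
end
```

This records two passing tests in “coordinated”, but it only works when the checks can be phrased as return values, which is exactly what I would like to avoid.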