I’ve run into trouble with testing code that uses `Channel`s. In particular, when using channels together with a producer/consumer function, I’ve found that it is difficult to pinpoint the part of the code that is incorrect.
Here is an example:
```julia
# example.jl
using Test

function correct_producer(c::Channel)
    put!(c, "correct value")
end;

function incorrect_producer(c::Channel)
    put!(c, "incorrect value")
end;

function error_producer(c::Channel)
    error("error message")
end;

const producer_list = [correct_producer, incorrect_producer, error_producer]

@testset "Testing $(prodfn)" for prodfn in producer_list
    chn = Channel(prodfn)
    @test take!(chn) == "correct value"
end
```
When running this piece of code from the command line with `julia example.jl`, or from the REPL with `include("example.jl")`, we get the following errors:
```
julia> include("example.jl")
Test Summary:            | Pass  Total
Testing correct_producer |    1      1
Testing incorrect_producer: Test Failed at /Users/lappy486/tmp/example.jl:20
  Expression: take!(chn) == "correct value"
   Evaluated: "incorrect value" == "correct value"
Stacktrace:
 [1] top-level scope at /Users/lappy486/tmp/example.jl:20
 [2] top-level scope at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.1/Test/src/Test.jl:1156
 [3] include at ./boot.jl:326 [inlined]
 [4] include_relative(::Module, ::String) at ./loading.jl:1038
 [5] include(::Module, ::String) at ./sysimg.jl:29
 [6] include(::String) at ./client.jl:403
 [7] top-level scope at none:0
 [8] eval(::Module, ::Any) at ./boot.jl:328
 [9] eval_user_input(::Any, ::REPL.REPLBackend) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.1/REPL/src/REPL.jl:85
 [10] macro expansion at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.1/REPL/src/REPL.jl:117 [inlined]
 [11] (::getfield(REPL, Symbol("##26#27")){REPL.REPLBackend})() at ./task.jl:259
Test Summary:              | Fail  Total
Testing incorrect_producer |    1      1
Test Summary:              | Fail  Total
Testing incorrect_producer |    1      1
ERROR: LoadError: Some tests did not pass: 0 passed, 1 failed, 0 errored, 0 broken.
in expression starting at /Users/lappy486/tmp/example.jl:18
```
In particular, note that the error messages do not give a specific indication of where the incorrect values or errors were produced; no line number in the stacktrace points to the `incorrect_producer` or `error_producer` functions.
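One way I can localize such failures by hand is to call the suspect producer directly, outside any task, so its exception surfaces with its own stack trace. A minimal sketch, with a hypothetical `bad_producer` standing in for `error_producer`:

```julia
using Test

# Hypothetical stand-in for the error_producer above.
bad_producer(c::Channel) = error("error message")

# Calling the producer synchronously, with a buffered channel so that put!
# would not block, makes the error propagate in the current task, where the
# backtrace points at the producer itself rather than at take!.
@test_throws ErrorException bad_producer(Channel(1))
```

But this only helps once you already suspect a particular producer; it doesn't make the original test failure any more informative.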
Of course, `Channel`s are agnostic about where their values come from; it would be too demanding to ask that Julia keep track of where each value that is `put!` into a channel originated, so that during the testing phase we could figure out which producer function is to blame for the incorrect values in the channel. But this agnosticism means that it is difficult to test coroutines that make use of channels!
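One partial workaround I have tried is to test each producer synchronously against a buffered channel that the test controls, instead of going through `Channel(prodfn)` and a task. A sketch (the helper names here are my own, not part of any library):

```julia
using Test

# Run a producer in the current task against a buffered channel, then drain
# whatever it produced. Because prodfn is an ordinary function call here, any
# exception it throws carries a stack trace pointing into the producer.
function collect_producer(prodfn; bufsize = 32)
    c = Channel(bufsize)      # buffered, so put! does not block
    try
        prodfn(c)             # synchronous call, not a coroutine
    finally
        close(c)
    end
    return collect(c)         # gather the remaining buffered values
end

produce_ok(c::Channel) = put!(c, "correct value")

@test collect_producer(produce_ok) == ["correct value"]
```

This only works for producers that emit a bounded number of values, and it tests the producer's logic rather than its behavior under real task scheduling, so it is a complement to, not a replacement for, the channel-based tests.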
I wanted to solicit advice on test-driven development of code that makes use of `Channel`s. Or, if this problem is too fundamental, what are some workarounds or alternative design patterns that give more conveniently testable asynchronous behavior?