Best practices for test-driven development with channels

I’ve run into trouble with testing code that uses Channels. In particular, when using channels together with a producer / consumer function, I’ve found that it is difficult to pinpoint the part of the code that is incorrect.
Here is an example:

# example.jl
using Test

function correct_producer(c::Channel)
    put!(c, "correct value")
end

function incorrect_producer(c::Channel)
    put!(c, "incorrect value")
end

function error_producer(c::Channel)
    error("error message")
end

const producer_list = [correct_producer, incorrect_producer, error_producer]

@testset "Testing $(prodfn)" for prodfn in producer_list
    chn = Channel(prodfn)
    @test take!(chn) == "correct value"
end

When running this piece of code from the command line with julia example.jl, or from the REPL with include("example.jl"), we get the following output:

julia> include("example.jl")
Test Summary:            | Pass  Total
Testing correct_producer |    1      1
Testing incorrect_producer: Test Failed at /Users/lappy486/tmp/example.jl:20
  Expression: take!(chn) == "correct value"
   Evaluated: "incorrect value" == "correct value"
 [1] top-level scope at /Users/lappy486/tmp/example.jl:20
 [2] top-level scope at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.1/Test/src/Test.jl:1156
 [3] include at ./boot.jl:326 [inlined]
 [4] include_relative(::Module, ::String) at ./loading.jl:1038
 [5] include(::Module, ::String) at ./sysimg.jl:29
 [6] include(::String) at ./client.jl:403
 [7] top-level scope at none:0
 [8] eval(::Module, ::Any) at ./boot.jl:328
 [9] eval_user_input(::Any, ::REPL.REPLBackend) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.1/REPL/src/REPL.jl:85
 [10] macro expansion at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.1/REPL/src/REPL.jl:117 [inlined]
 [11] (::getfield(REPL, Symbol("##26#27")){REPL.REPLBackend})() at ./task.jl:259
Test Summary:              | Fail  Total
Testing incorrect_producer |    1      1
ERROR: LoadError: Some tests did not pass: 0 passed, 1 failed, 0 errored, 0 broken.
in expression starting at /Users/lappy486/tmp/example.jl:18

In particular, note that the error messages supplied do not give a specific indication of where the incorrect values or errors were produced; there are no line numbers in the stacktrace that point to the incorrect_producer or the error_producer functions.

Of course, Channels are agnostic about where their values come from; it would be too demanding to ask that Julia keep track of where each value that is put! into a channel originated, so that during the testing phase we could figure out which producer function is to blame for the incorrect values in the channel. But this agnosticism makes it difficult to test coroutines that use channels!

I wanted to solicit advice for test-driven development of code that makes use of Channels. Or if this problem is too fundamental, what are some workarounds or alternative design patterns that can be used to get more conveniently testable asynchronous behavior?

I’m not sure I see how using channels is different from “normal” testing. If I’m testing a “large” function that calls 5 other functions that each call 10 other functions, I don’t know “how” the large function produced the wrong value; I just know that it produced the wrong value.

Of course those 55 other functions are also tested individually, but if those tests didn’t find any errors and it’s only the combination of all of them that produced the error I still need to go hunting through the data flow to figure out “where” it went wrong. Maybe the individual tests are bad and missed something, maybe it’s only the combination of the functions…

One thing you could do: if your producer functions just take a “c” but don’t declare what a “c” is, then you could create a “TestChannel” object and implement put!() for it. The put!() implementation could then verify that the value being inserted into the “channel” is correct. If the value is bad, an error flag is set and stacktrace() is called to store the stack trace. Then your @test wouldn’t read the value from the channel; it would just check the error flag on the TestChannel and, if it’s set, display the stored stack trace.
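A minimal sketch of that duck-typed idea (TestChannel, its fields, and loose_producer are hypothetical names invented here, not anything in Base):

```julia
# Hypothetical TestChannel: not a real Channel, just an object that
# implements put! so an unannotated producer can write into it.
mutable struct TestChannel
    expected::Any
    failed::Bool
    trace::Union{Nothing,Vector}   # stack trace of the bad put!, if any
end
TestChannel(expected) = TestChannel(expected, false, nothing)

function Base.put!(c::TestChannel, v)
    if v != c.expected
        c.failed = true
        c.trace = stacktrace()     # records where the bad value came from
    end
    return v
end

# A producer that takes an untyped `c`, so the fake channel is accepted.
loose_producer(c) = put!(c, "incorrect value")

tc = TestChannel("correct value")
loose_producer(tc)
tc.failed   # true; tc.trace now points into loose_producer
```

The key constraint is the one mentioned above: the producer must not annotate its argument as ::Channel, or the fake won't dispatch.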

Or you might be able to do something like:

mutable struct TestChannel{T} <: AbstractChannel{T}

(Channel itself is concrete and can’t be subtyped, but AbstractChannel{T} can.) Then implement the put!, take!, fetch, etc. methods for it, both verifying the values and pushing the data into/out of the channel. That would give you the channel behavior plus the ability to verify the values being inserted into it.
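A sketch along those lines, assuming the methods simply delegate to a wrapped real Channel (TestChannel and its fields are hypothetical names, and only the handful of methods used here are implemented):

```julia
mutable struct TestChannel{T} <: AbstractChannel{T}
    inner::Channel{T}       # a real buffered channel does the actual work
    expected::T
    failures::Vector{Any}   # stack traces collected from bad put! calls
end
TestChannel(expected::T) where {T} =
    TestChannel{T}(Channel{T}(64), expected, Any[])

function Base.put!(c::TestChannel, v)
    v == c.expected || push!(c.failures, stacktrace())
    put!(c.inner, v)        # still forward the value like a real channel
end
Base.take!(c::TestChannel) = take!(c.inner)
Base.fetch(c::TestChannel) = fetch(c.inner)
Base.close(c::TestChannel) = close(c.inner)

# Usage: values flow through as usual, but bad ones are recorded.
tc = TestChannel("correct value")
put!(tc, "incorrect value")
take!(tc)               # still yields "incorrect value"
isempty(tc.failures)    # false: the offending put! left a stack trace
```

The buffer size of 64 is arbitrary; it just keeps put! from blocking in a single-task test.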


I think that your point about testing a “large” function during “normal” testing might be related to the distinction between unit testing and integration testing, where unit testing focuses on simple atomic parts of the code and integration testing is more focused on macroscopic behavior.

The TestChannel idea reminds me of Python’s MagicMock class, which allows for the creation of a generic mock object that keeps a record of how its methods were called (i.e. what arguments were passed to its methods).
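For comparison, a rough Julia sketch of that MagicMock-style recording (RecordingChannel is a hypothetical name; unlike the TestChannel above, it doesn’t validate values, it just logs every call made against it):

```julia
# Records each call as a (method, arguments) tuple for later inspection.
struct RecordingChannel
    calls::Vector{Tuple{Symbol,Tuple}}
end
RecordingChannel() = RecordingChannel(Tuple{Symbol,Tuple}[])

Base.put!(c::RecordingChannel, args...) = (push!(c.calls, (:put!, args)); c)
Base.close(c::RecordingChannel) = (push!(c.calls, (:close, ())); nothing)

# Usage: run a producer against the mock, then assert on the call log.
producer(c) = put!(c, "some value")
rc = RecordingChannel()
producer(rc)
rc.calls == [(:put!, ("some value",))]  # true
```

A test can then assert on the full history of calls, much as MagicMock’s assert_called_with does in Python.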