I am a big fan of Py.Test in Python, and Base.Test does not really satisfy me in several respects. In particular, there is no test filtering, one cannot conveniently report non-boolean results from tests (e.g. benchmark values), and there is only basic support for fixtures. I am aware of PyTest.jl, but it has chosen the way of extending Base.Test, which means it shares its limitations: no automatic test collection, limited lifetime control for fixtures, and tests are executed as the files are included, so there is no advanced control over that (more on this later).
So I put together a little proof-of-concept package, and my question is: is this interesting to anybody? If so, any advice, feature requests, or bug reports are welcome. It is my first Julia project, so it certainly has a lot of things to fix.
Brief info
Basically, it operates as follows. All the files matching a specific pattern in the test directory are picked up and included. All Testcase objects in the global scope are collected. Grouping is done by putting these objects in submodules (an example is shown below).
A testcase is defined like:
tc = testcase() do
    @test 1 == 1
end
(Jute uses Base.Test assertions)
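For instance, grouping by submodules, as described above, might look like the following sketch. The exact imports are my assumption here; see the README for the actual setup:

module ArithmeticTests

using Jute       # assumed to bring testcase() into scope
using Base.Test  # Jute uses Base.Test assertions

tc_commutativity = testcase() do
    @test 1 + 2 == 2 + 1
end

end

All the testcases inside ArithmeticTests would then be collected as one group.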
Testcases can be parametrized:
parameterized_testcase = testcase([1, 2], [3, 4]) do x, y
    @test x + y == y + x
end
Parametrization can be done by iterables or by special fixture objects. There are two types currently available: local ones (set up and destroyed right before and after the testcase) and global ones (set up and destroyed only once). For example:
db_connection = fixture(; delayed_teardown=true) do produce
    c = db_connect()
    # this call blocks until all the testcases
    # that use the fixture are executed
    produce([c])
    close(c)
end
db_testcase = testcase(db_connection) do c
    do_something_with(c)
end
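For comparison, a local fixture might be declared without the delayed teardown, something like the sketch below. This is only my guess extrapolated from the example above; the exact call signature is an assumption, and the README is authoritative:

temp_dir = fixture() do produce
    dir = mktempdir()          # set up right before each testcase
    produce([dir])             # hand the value to the testcase
    rm(dir, recursive=true)    # torn down right after it finishes
end

dir_testcase = testcase(temp_dir) do dir
    @test isdir(dir)
end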
More detailed info can be found in the project’s README (proper docs will appear soon).
Future possibilities
This concept, in principle, allows the following features to be implemented:
- Automatic multi-process test run, with the fixtures initialized only in the processes where they are used
- “Exclusive” fixtures, which guarantee that two values of that fixture never exist in one process simultaneously, while keeping the number of setups and teardowns minimal (useful e.g. for a C library that requires setting a global state; see the sketch after this list)
- Watching test files and rerunning the changed testcases
- Test tagging and filtering by tags
- Allowing users to add their own command-line arguments
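To illustrate the second point, here is a purely speculative sketch of what an exclusive fixture could look like; the exclusive keyword and the C library calls are hypothetical, and nothing like this is implemented yet:

legacy_lib = fixture(; exclusive=true, delayed_teardown=true) do produce
    handle = init_global_state()   # hypothetical C library call
    produce([handle])              # only one handle alive per process
    clear_global_state(handle)
end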
Current problems
- Testcase pickup and execution is a bit slow at the moment, because I load test files dynamically (by eval-ing includes) and, consequently, have to use invokelatest(). This can be avoided by starting a child process that includes all the required files statically.
- Code is messy and there are no docs. Also, test coverage is low. This, of course, will be fixed.
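For context, here is a generic illustration (not Jute’s actual code) of why invokelatest() is required; run_my_tests is a hypothetical function name:

function collect_and_run(test_file)
    include(test_file)   # suppose this defines run_my_tests()
    # run_my_tests()     # MethodError: the method is "too new", since
                         # it was defined after this function started
                         # running (the world-age problem)
    Base.invokelatest(run_my_tests)  # dispatches in the latest world
end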