I think one of the core ideas behind testing in Julia is that test/runtests.jl
should be a simple Julia script that runs all tests when it is executed (either via include()
in an existing session, by running julia test/runtests.jl
from the command line, or via Pkg.test(),
which sets up a “hardened” context to catch more errors).
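To make this concrete, here is a minimal sketch of such a runtests.jl (MyPackage and its double function are made-up placeholders):

```julia
# test/runtests.jl -- a plain script: works via include(), `julia test/runtests.jl`, or Pkg.test()
using Test
using MyPackage   # hypothetical package under test

@testset "MyPackage" begin
    @testset "double" begin
        @test MyPackage.double(2) == 4              # hypothetical function
        @test_throws MethodError MyPackage.double("a")
    end
end
```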
My understanding is that this idea is reflected in the design of the Test
standard library, where it is assumed that tests are plain scripts, and evaluating @testset
stanzas should actually run the tests inside. This contrasts with the approach taken in pytest
(as I understand it), where tests are defined as functions following a certain naming convention. Evaluating them therefore only defines the functions without executing any test code; to actually run the tests, you need a separate entry point that does a bit of introspection to find all test functions and call them. This preliminary stage is presumably what allows pytest
to nicely organize its output.
However, since Julia’s standard Test
approach is plain script execution, it can’t really perform that kind of global preliminary analysis… except insofar as the @testset
macro gives it access to a whole set of test definitions at once. Maybe the macros could have been replaced by higher-order functions, but I doubt this would have produced a system with more legible stack traces.
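Just to illustrate what I mean, a higher-order-function version might have looked something like this (purely hypothetical, nothing like this exists in Test):

```julia
using Test

# Hypothetical alternative: a test set as a higher-order function instead of a macro.
# Every test now runs inside an extra closure, which adds frames to any stack trace.
function testset(f, name::AbstractString)
    println("Running test set: ", name)
    f()
end

testset("arithmetic") do
    @test 1 + 1 == 2
    @test 2 * 3 == 6
end
```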
Are you aware of TestItemRunner
and the new unit-testing UI in VS Code? It might be a better fit for your workflow (don’t mind the @testitem
high-level macro: AFAIU it merely acts as an annotation identifying test definitions, but is not really macro-expanded).
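In case it helps, here is roughly what that style looks like (adapted from my memory of the TestItemRunner README, so double-check the details there; double is again a placeholder):

```julia
# In any file of your package, e.g. test/arithmetic_tests.jl:
using TestItemRunner

@testitem "double works" begin
    # `using Test` and the package under test are available inside a test item
    @test double(2) == 4    # hypothetical function from the package under test
end

# In test/runtests.jl, so that Pkg.test() still runs everything:
using TestItemRunner
@run_package_tests
```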