I just pushed out a new release of the Julia VS Code extension (1.7.6) to everyone. It includes a preview of our new test UI and a general new testing framework for Julia.
For a short demo, take a look at our Juliacon talk https://youtu.be/Okn_HKihWn8?t=1268.
This new set of features consists of two new packages (TestItems.jl and TestItemRunner.jl) and the new UI in the Julia VS Code extension. A good way to refer to this new framework is as the “test item” framework.
How to write test items
The core feature in this new framework is that you can structure tests into @testitem blocks and then individually run those, rather than having to run all your tests at once. A typical @testitem might look like this:
@testitem "First tests" begin
x = foo("bar")
@test length(x)==3
@test x == "bar"
end
A @testitem always has a name (here “First tests”) and then some code in a begin ... end block. The code inside a @testitem must be executable by itself, i.e. it cannot depend on code that appears outside of the @testitem, unless that code is explicitly imported or included from within the @testitem. There is one exception to this: the code inside the @testitem will run inside a temporary module where using Test and using MYPACKAGENAME have already been executed, so anything exported from either the Test module or the package you are developing can be used directly. In the example above this applies to the foo function (presumably defined in the package that is being tested) and the @test macro.
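To make the temporary-module idea concrete, here is a hedged sketch (not the extension's actual implementation) of evaluating a test item's body in a fresh throwaway module, so its bindings never leak into the surrounding code:

```julia
# Hedged sketch, not the extension's real code: each test item's body is
# evaluated inside a fresh temporary module. In the real framework,
# `using Test` and `using MYPACKAGENAME` are run in that module first.
body = quote
    x = uppercase("bar")   # stand-in for calling a function from the package
    length(x) == 3
end

sandbox = Module(:TestItemSandbox)   # throwaway module for this one test item
result = Core.eval(sandbox, body)    # value of the last expression: true

println(result)
println(isdefined(Main, :x))         # false: x stayed inside the sandbox
```

Because every test item gets its own module, two test items can define the same variable names without interfering with each other.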
@testitems can appear anywhere in a package. They do not have to be in the test folder, nor do they have to be in a file that is included by test/runtests.jl. In fact, @testitems can even be located inside your regular package code, for example next to the code they are testing. In that case you just need to take a dependency on the TestItems.jl package so that you have access to the @testitem macro. If you have a package MyPackage, then the file src/MyPackage.jl could look like this:
module MyPackage
using TestItems
export foo
foo(x) = x
@testitem "First tests" begin
x = foo("bar")
@test length(x)==3
@test x == "bar"
end
end
If you don’t like this inline @testitem style, you can also just put @testitem blocks into Julia files in your test folder.
Running test items inside VS Code
When you open a Julia package inside VS Code with the Julia extension installed, it will constantly (after every keypress!) look for any and all @testitems in your Julia files. If any are found, they will appear in various places in the UI.
You can find all detected @testitems in the Testing activity bar in VS Code:
The testing activity area then provides you with options to run individual @testitems, look at results, etc.
VS Code will also place a small run button next to each detected @testitem in the text editor itself:
In addition to all these UI elements that allow you to run tests, there is also UI to display test results. For example, when you run tests and some of them fail, the extension will collect all these test failures and then display them in a structured way, directly at the place in the code where a specific test failed:
Especially when you run a lot of tests in large test files, this makes it much easier to find the specific test that failed: no more hunting in the REPL for file and line information!
Running tests from the command line
This part is a little less fleshed out, but you can use the TestItemRunner.jl package to run @testitems as part of a traditional Pkg.test workflow. This makes it easy to integrate these new types of tests with, for example, a continuous integration setup.
To enable integration with Pkg.test for a package that uses @testitem, you just have to do two things:
- Add TestItemRunner.jl as a test dependency to your package
- Put the following code into the package’s test/runtests.jl file:
using TestItemRunner
@run_package_tests
I hope that in the future we can make the TestItemRunner.jl package much more feature complete, for example add the ability to only run a subset of @testitems (as you can already do in VS Code), add support for parallel execution, etc. Help is most welcome!
Under the hood
We already have some great testing packages in Julialand, and some of them (like the excellent ReTest.jl) seem to provide a lot of similar functionality already, so why create yet another testing framework?
The core reason is that we have a very different requirement for the VS Code extension: we need to detect test items at every keystroke. To do this, I used a completely different test detection strategy than any of the existing testing frameworks. In the test item design, no user code is run at all for test item detection. Test item detection is purely done by syntactic analysis of Julia source files. This kind of approach integrates really well with the existing analysis we have in the LanguageServer.jl that powers the rest of the Julia extension.
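To illustrate the idea of purely syntactic detection (this is not the LanguageServer.jl implementation, which uses CSTParser; the sketch below uses Meta.parseall from Base for simplicity), you can parse a file's source and walk the resulting expression tree looking for @testitem calls, without ever executing any user code:

```julia
# Hypothetical sketch of syntactic test item detection: parse the source
# and recursively walk the expression tree, collecting the name of every
# @testitem macro call. No user code is evaluated at any point.
source = """
foo(x) = x

@testitem "First tests" begin
    @test foo(1) == 1
end
"""

function find_testitems(ex, found = String[])
    if ex isa Expr
        if ex.head == :macrocall && ex.args[1] == Symbol("@testitem")
            # args are: macro name, a LineNumberNode, then the test item name
            push!(found, ex.args[3])
        end
        for arg in ex.args
            find_testitems(arg, found)
        end
    end
    return found
end

items = find_testitems(Meta.parseall(source))
println(items)   # ["First tests"]
```

Because this only needs the parser, it is cheap enough to rerun on every keystroke, which is exactly the property the extension relies on.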
One nice side benefit is that the design of this test item framework is quite simple. For example, the definition of the @testitem macro is almost hilariously simple; you can see it here. Having it so simple and not doing anything is ideal, because it means that adding inline tests to the actual package code should not add any runtime overhead at all to a package (it probably does add a tiny bit of precompile time).
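As a hedged sketch of the idea (the actual definition lives in TestItems.jl; @my_testitem below is a made-up name), a macro that contributes zero runtime cost can simply discard its arguments and expand to nothing:

```julia
# Sketch of a no-op test-item macro: it swallows its arguments and expands
# to nothing, so it adds no runtime code to the package that contains it.
# The tooling finds test items syntactically, so the macro itself does not
# need to do anything.
macro my_testitem(name, body)
    return nothing
end

@my_testitem "some tests" begin
    @test 1 + 1 == 2    # never executed or even expanded by the macro
end

println("no runtime effect")   # reaching here shows the body was discarded
```

Since the body is thrown away at macro expansion time, it never has to compile, which is why inline test items are essentially free at runtime.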
I also want to point out some more details about the test runner in VS Code. The design there is that we first detect in which environment a given @testitem should run, and then we spin up a test process per individual environment. These test processes are long-running, i.e. once they have started they are reused whenever you execute another @testitem. By reusing these processes, we cut out a huge amount of delay between running tests; rerunning an individual test item is often completely instantaneous. The Julia extension now also ships Revise.jl as part of the extension, and uses it to detect any code changes you make to the package code you are testing. The integration with Revise.jl is completely under the hood and automatic. For example, if you make a change to your package that Revise cannot track (like redefining a struct), the extension will recognize that and automatically restart the test process instead of relying on Revise to handle the update. The net effect is that you can freely edit the code you are testing and the test code itself, rerun small parts of it all the time, and there should be minimal delays throughout.
Roadmap
While this feature ships in the regular Julia VS Code extension, we are declaring it a preview at the moment. We want to collect feedback on the design and usability for a while, potentially address design issues that might crop up, and only then will we declare it stable and released.
So, please try this out and let us know what you think! And if some folks want to help improve TestItemRunner.jl, that would be especially fantastic.
10/3/2022 Update
I just pushed out a new build of the extension to the insider channel that brings a lot of new features, and there is also a corresponding new TestItemRunner.jl version tagged. Please give these new versions a try and report back here or (especially for specific bugs) over in the issues on GitHub.
Here is a rundown of the new features:
Tags
You can now add tags to @testitems. Tags can be used both in the VS Code UI and via the TestItemRunner to filter which test items you want to run.
The syntax for adding tags is this:
@testitem "My testitem" tags=[:skipci, :important] begin
x = foo("bar")
@test length(x)==3
@test x == "bar"
end
You can then filter the test list in the VS Code UI with these same tags:
And you can also use tags in test/runtests.jl to filter down the list of tests that will run via the traditional Pkg.test entry point:
using TestItemRunner
@run_package_tests filter=ti->!(:skipci in ti.tags)
Scroll down for a more complete description of the new filter keyword for the @run_package_tests macro.
Parallel test execution in VS Code
The VS Code extension has a new setting that controls how many Julia processes you want to use for parallel test execution:
The default value (for now) is 1, so you have to change that to use the parallel test execution feature. A value of 0 will use as many test processes as you have processors.
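The meaning of the setting value can be sketched like this (num_test_processes is a hypothetical helper for illustration, not the extension's actual code):

```julia
# Hypothetical helper showing how the setting value maps to a process
# count: 0 means "one process per processor", n > 0 means exactly n.
num_test_processes(setting::Int) =
    setting == 0 ? Sys.CPU_THREADS : setting

println(num_test_processes(1))   # the current default: a single process
println(num_test_processes(4))   # four parallel test processes
```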
Once you have configured more than one test process, individual @testitems will run in parallel; for example, here there are four @testitems running at the same time:
There is a trade-off here: more test processes mean more memory is needed, and there is also potentially additional overhead to get all processes to spin up and be ready to actually run @testitems. I’m still playing around with this, so any reports on how this feature is working for you would be great!
Cancel support
The test UI in VS Code always had a button to cancel a test run:
The new feature is that it now works.
Test duration
The extension now keeps track of how long it took to run each individual @testitem and displays that in the UI. You can even sort the @testitems by duration, which is very convenient for identifying specific tests that are slow! This is how it looks:
Workspace integration
Test processes that are launched via this new test UI in VS Code are not automatically terminated, i.e. they hang around and take up memory and other resources. That of course has benefits, namely that @testitems can be executed very quickly once the test process is up and running, but in some situations one might still want to simply terminate all currently running test processes.

To enable this, all test processes now show up in the Julia Workspace, alongside any REPL or notebook processes that might also be running. You can now terminate Julia test processes via this UI by clicking the Stop Test Process button. In this screenshot I have four test processes running:
Filtering support in TestItemRunner.jl
I had already mentioned this feature briefly in the section on tags: you can now pass a generic filter function to the @run_package_tests macro to select which @testitems you want to execute. The example above used tags to select which tests to run, but you can also filter based on the filename where a @testitem is defined or on the name of the @testitem.
The way this works is that you pass a filter function to the @run_package_tests macro. This filter function is called once for each @testitem detected in your project, and it must return true if the test item should be run or false if it should not. @run_package_tests passes your filter function a named tuple with three fields containing meta information about the specific test item: filename (the full path of the file where the @testitem is defined), name (the name of the @testitem that you defined), and tags (a vector of Symbols). With this information you can write arbitrarily complex filter conditions. For example, here I’m filtering out any @testitem that has the :skipci tag, and I’m also only running tests that are defined in one specific file:
@run_package_tests filter=ti->( !(:skipci in ti.tags) && endswith(ti.filename, "test_foo.jl") )
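To see how such a filter behaves, you can try it on plain named tuples with the same three fields described above (the file paths and names here are made up for illustration):

```julia
# The filter receives a named tuple with fields filename, name and tags.
filter_fn = ti -> !(:skipci in ti.tags) && endswith(ti.filename, "test_foo.jl")

ti1 = (filename = "/pkg/test/test_foo.jl", name = "fast test", tags = Symbol[])
ti2 = (filename = "/pkg/test/test_foo.jl", name = "ci-only",   tags = [:skipci])
ti3 = (filename = "/pkg/test/test_bar.jl", name = "other",     tags = Symbol[])

println(filter_fn(ti1))  # true:  right file, no :skipci tag
println(filter_fn(ti2))  # false: tagged :skipci
println(filter_fn(ti3))  # false: defined in a different file
```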
Option for default imports
When you write a @testitem, by default the package being tested and the Test package are imported via an invisible using statement. In some cases this might not be desirable, so you can control this behavior per @testitem via the default_imports option, which accepts a Bool value. To disable these default imports, you would write:
@testitem "Another test for foo" default_imports=false begin
using MyPackage, Test
x = foo("bar")
@test x != "bar"
end
Note how we now need to add the line using MyPackage, Test manually to our @testitem so that we have access to the foo function and the @test macro.
TestItemRunner.jl dependencies are vendored
The TestItemRunner package uses packages like CSTParser and Tokenize internally. This can become a problem if you want to test a package that itself uses one of these packages. In particular, if your package required a different version of, say, CSTParser than TestItemRunner, then that did not work in the previous release. But no more! The latest version of TestItemRunner vendors all of its dependencies, so this problem is just gone. Julia’s package manager might not give us private packages, but the VS Code extension long ago found a workaround for that, and I’m now using the same technique here.
Smaller improvements
There were a number of smaller improvements as well:
- The message that the test module is being replaced is no longer shown.
- Test output is labeled a bit better.
- A lot of bug fixes for issues that you all reported, thanks so much for that!
Roadmap
All of this is still considered prerelease. I’m also slightly worried that I might have moved a bit too fast with this last batch of new features, i.e. I wouldn’t be too surprised if some new bugs have slipped in. Please test and report back over on GitHub or here!
I do think that in terms of features this is probably roughly what it will be when I declare all of this released. So at the moment I’m thinking we will just do testing from here on out, fix bugs, and then declare a first version done. But of course, I might change my mind based on feedback or the weather or whatever.