New stable features for the test item framework

I released a couple of new stable features for the test item framework over the last few weeks. This post describes them.

But before I dive into the new features, a quick recap of what the test item framework is! The main benefit of the test item framework is that you can very easily split your tests into small, self-contained test items, and then run them individually on demand. You can read my previous post about the framework here, watch a short (outdated!) YouTube demo here or take a look at the documentation here.
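To make that recap concrete, here is a minimal illustrative sketch of a test item (the name and test are made up for illustration):

```julia
using TestItems  # provides the @testitem macro

# A self-contained test item: it can be discovered and run on its own,
# without any surrounding runtests.jl scaffolding.
@testitem "basic arithmetic" begin
    @test 1 + 1 == 2
end
```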

I should also note that the framework is no longer in preview mode. It has been used heavily by a number of packages and has been production-ready for many years now. You can and should use it for your “real” projects!

Ok, and with that, here are the new features:

Sharing code across @testitems

By default, @testitems do not share any code and have no dependencies on each other. These properties make it feasible to run @testitems by themselves, but sometimes one wants to share common code between multiple @testitems. The test item framework provides two macros for this purpose: @testsnippet and @testmodule. These two macros can appear in any .jl file in a package.

Test snippets

A @testsnippet is a block of code that individual @testitems can run before their own code runs. If a @testitem takes a dependency on a particular @testsnippet, that snippet will run every time the @testitem runs.

The definition of a @testsnippet might look like this:

@testsnippet MySnippet begin
    foo = "Hello world"
end

A @testitem can utilize this snippet by using the setup keyword like this:

@testitem "My test item" setup=[MySnippet] begin
    @test foo == "Hello world"
end

Test modules

A @testmodule defines a Julia module that can be accessed from @testitems. Such a module will only be run once per Julia test process. If for example two @testitems depend on a @testmodule, it will only be evaluated once, and then the entire module will be made available to both @testitems.

The definition of a @testmodule might look like this:

@testmodule MyModule begin
    foo = "Hello world"
end

A @testitem can utilize this module, again via the setup keyword. Unlike with a @testsnippet, the content of a @testmodule runs inside a regular Julia module, so any name defined in the test module must be prefixed with the module name to be accessed. A @testitem that utilizes the @testmodule just defined might look like this:

@testitem "My test item" setup=[MyModule] begin
    @test MyModule.foo == "Hello world"
end

Note how we access foo with the expression MyModule.foo here.
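Since setup takes a vector, it should also be possible for one @testitem to depend on a snippet and a module at the same time. A small sketch combining the two examples from above:

```julia
# Sketch: combining the MySnippet and MyModule setups defined earlier.
# The setup keyword takes a vector, so both can be listed together.
@testitem "Uses both setups" setup=[MySnippet, MyModule] begin
    @test foo == "Hello world"           # foo comes from the snippet
    @test MyModule.foo == "Hello world"  # qualified access into the module
end
```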

Debugging of @testitems

@testitems can be run in the debugger by launching them via the Debug Test command. This command can be accessed in various places in the VS Code UI. In the main testing view it is available here:

One can also right click on the run test icon in the text editor to select the debug option:

When a test item is run in the debugger, one can set breakpoints both in the code that is being tested or in the @testitem itself and then utilize all the regular features of the Julia VS Code debugger.

Code coverage

On Julia 1.11 and newer one can run test items in a code coverage mode and display code coverage results directly in VS Code.

To run test items in code coverage mode, one launches them with the Run Tests with Coverage command. This command is available both in the main testing view

as well as in the context menu in the text editor:

The coverage results are then displayed in various ways in the VS Code UI. For example a summary view shows coverage per file:

One can see detailed line coverage information inside the text editor:

Coverage results are also displayed inline in the regular explorer part of the VS Code UI.

Documentation

I wrote documentation :slight_smile: It covers the whole thing, mostly copy-pasted from this and past Discourse announcements; you can find it here.


Super exciting. Thank you for working on this :clap:t3:

Awesome stuff!

Oh, and I forgot to mention one other new thing: I fixed a bug where the test runner system would not pick up changes to your project files; the only workaround was to restart VS Code entirely. That should be fixed now, i.e. any edits to project files should be picked up automatically and shouldn’t require any kind of restart.


@davidanthoff is there a solution for when a test suite needs a setup that is a function of a variable?

Suppose we have setup(T) where T can be either Float32 or Float64. We want to run tests in both setups:

T = Float32
for testfile in testfiles
  include(testfile)
end

T = Float64
for testfile in testfiles
  include(testfile)
end

where each testfile is full of

@testitem "basic" setup=[setup(T)] begin
  # tests go here
end

Woah! I think I really have to try this on one of my packages and “move over” to this from my dull plain tests; especially the code sharing and the coverage look awesome!

So you can’t parameterize the code that gets included, but you can of course define functions in both @testsnippets and @testmodules that take arguments and then call them. Would that do the trick?
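A sketch of that pattern might look like this (TypedSetup and make_fixtures are made-up names for illustration):

```julia
# A @testmodule exposing a function parameterized by the element type T.
@testmodule TypedSetup begin
    make_fixtures(T) = (lo = zero(T), hi = one(T))
end

# Each @testitem calls the shared function with the type it needs.
@testitem "basic Float32" setup=[TypedSetup] begin
    f = TypedSetup.make_fixtures(Float32)
    @test f.hi - f.lo === 1.0f0
end

@testitem "basic Float64" setup=[TypedSetup] begin
    f = TypedSetup.make_fixtures(Float64)
    @test f.hi - f.lo === 1.0
end
```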

We managed to update to TestItemRunner.jl in Meshes.jl using GitHub actions with different env variables to vary the settings. It is working nicely! :heart:


These are great new features, I especially appreciate the test item debugging.

I have been using a @testitem macro for a while, not from TestItems.jl but ReTestItems.jl. These macros seem mostly compatible, so I can still use the VS Code test UI. My main motivation for using ReTestItems.jl was running testitems in parallel on multiple worker processes. Does TestItemRunner.jl support parallel execution? I don’t see it in the README, though the julia-vscode docs do mention it here.

I could only find this issue about unification (Figure out how to unify with VS Code testitems · Issue #18 · JuliaTesting/ReTestItems.jl), but it’s labeled speculative.


Haven’t tried it yet because testitem coverage will only be supported for the next Julia version, but wonder:

  • How coverage works when running one testitem at a time? And when rerunning a single testitem after changing it?
  • How does it compare to the Run Test task -> Run tests with coverage feature that has been present for years already? Is it the same but for individual testitems, or something different?

It’s a somewhat complicated answer :wink:

TestItemRunner.jl does not support parallel execution, and most likely never will.

The VS Code extension has supported parallel execution for a long time. It does speed things up relative to sequential, but I should also note that the algorithm that is shipping right now in the extension is not the most efficient one. I’m in the process of implementing something better there right now, it makes a huge difference. No promise when that will ship :slight_smile:

There is a very experimental TestItemRunner2.jl that supports parallel execution. But be warned, that package will go away over time and be replaced with something different. It is not part of the stuff that I consider “production ready”. But there will be a non-UI way to run things in parallel hopefully soon in that spirit.

You can also take a sneak peek at the julia-vscode/testitem-workflow repository on GitHub. It is also not ready for prime time, but it already provides parallel test execution on GitHub workflows (and many more benefits!). It will get its own post soon.

I don’t really know what the plans are for ReTestItems.jl, but my sense is that it has diverged from the original test item framework; it is probably best seen as a fork going in a different direction. It probably would have been better to give it a different name to avoid confusion :slight_smile:


Julia 1.11 adds an API to clear collected coverage data during runtime. So we clear the entire coverage data before each test run, and then only collect new coverage data.

I’m not 100% sure how reliable that really is. There is probably some stuff like constant propagation where this won’t work 100% of the time, so we might have to tweak things a bit as we test this in the wild.

Completely different, the two implementations share nothing.

Hi @davidanthoff, is this the actual cause of this issue?

And is there then a way to see total coverage for a package testsuite without having to put everything in a single @testitem?

Interesting! I like the Run tests with coverage – works with any Julia version, with testitems or with regular testsets, shows coverage for the full testsuite (re @disberd’s question).
That’ll remain in the extension, right?

So, coverage is always per test run. Every time you click the “Run” button, it starts a new test run. If you run test items individually, then you start a new test run every time, and coverage will only include the results for that one test item. But when you run multiple test items in one test run, then coverage for all of them should be aggregated already. You can run multiple test items in one test run via the UI in the Testing pane, either by clicking the “Run tests” button at the top, or by clicking the run tests button on any folder in the test tree. If coverage isn’t aggregated in those cases, then that would be a bug (but I think it is aggregated).

Yes, no plans to remove that. A few other random points on that:

  • the test item framework also works with any Julia version starting at 1.0
  • we should actually probably add a hook that if a package is using the test item framework, that these old commands trigger runs in the new framework…
  • I think we could actually also integrate the results of these old-style coverage runs with the new coverage UI in VS Code. If someone wants to tackle that, join the vscode-dev channel on Slack and ask for help!

Thanks for the reply! It must be a bug then, as I found no way to aggregate across test items within a single run, but we can continue the discussion on the linked issue.


But not coverage, right? When trying to run it on latest released versions of everything, I get the message that only 1.11 is supported.

Oh, there’s another coverage UI in VSCode? The Run tests with coverage function plays nicely with the “Coverage Gutters” extension that supports basically any language (through lcov files).

Ah, yes, coverage in VS Code for test items only works on Julia 1.11 and newer. Coverage on GitHub for test items works for any Julia version.

Yes, VS Code added a native coverage UI/API a couple of releases ago. I’m using that already for showing the test item coverage, and it should be pretty simple to actually also utilize that API for the Run tests with coverage task that has been around forever. Main benefit would just be that one could see coverage without having to install another extension.