Automatically run unit tests

Is there a way to automatically re-run the tests of a package when one of the dependencies gets updated?
I have a large package that I have not touched for a few months, and now I see that three dependencies have been updated in ways that break my code.
If my tests ran more often, it would be easier to see which update causes a problem, and fixes could also be in place much faster.

Any suggestions?

Because nobody has a suggestion, I want to rephrase my question:

If automatically testing a package when one of its dependencies changes is not possible, should we hard-code upper limits for the dependencies to avoid breakage?

In the last four months my main package KiteModels broke four times due to updates of direct and indirect dependencies. I had to add:

[compat]
StaticArrays = "=1.2.5, ~1.3.1, ~1.4, <1.5.13"
Sundials = "~4.11"
DiffEqBase = "<6.116.0"

to my compat section, and in addition I had to add a work-around for Plots no longer showing diagrams when using a package image.

DiffEqBase is not even a direct dependency of my project, but a dependency of Sundials, and older versions of Sundials no longer work with the newest version of DiffEqBase.

How do other package authors solve these issues? Just hard-code upper limits for the packages you are using? But even that might not solve the problem, because indirect dependencies might get updated in ways that break your code.

So I think we need a better testing infrastructure.

Compat entries in the Project.toml file, adherence to semantic versioning by everyone, and only using the exported API of your dependencies (because that’s what semver is supposed to cover) should mean that things don’t break like this.

Since you have compat entries, it’s probably either one or both of the latter points that are at fault here :slight_smile:

Anyhow, yes, you have (at least) two options in principle:

  1. Tighten your compat bounds (upwards).
  2. Check in a Manifest file to record which combination of packages worked.

Edit: another suggestion: you can also run your tests weekly (or whatever suits) at a specific time (rather than triggered by a dependency update). That’s then at least a temporal checkpoint…

That’s a bad joke. You cannot properly define an API in Julia because we cannot define proper interfaces. One of the reasons that packages I use are breaking is that I work a lot with MVectors, and most packages do not have unit tests for MVectors. They accept abstract vectors, and most of the time MVectors work until they break… So are MVectors part of the API or not, and how would you define that?

How can I run my tests weekly on GitHub?

Well, changes in the public API (exported functions) should be non-breaking in patch versions; at least that is the intent.
That the interoperability does not always work is a pity, sure. The packages you depend on could do integration tests against MVector to avoid that, maybe?
So at least in „interface packages“ I have seen every now and then that they run not only their normal tests but also integration tests, which check (in a minimised form, of course) that packages affected by / using the interface will not break.

To run every week you can extend the on: entry to run not only on pushes to master but also on a schedule, see Run your GitHub Actions workflow on a schedule | Jason Etcovitch
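
For example, a minimal workflow sketch could look like this (the cron time and the julia-actions steps are common defaults, not taken from this thread):

on:
  push:
    branches: [master]
  schedule:
    - cron: "0 6 * * 1"  # additionally run every Monday at 06:00 UTC

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
      - uses: julia-actions/julia-buildpkg@v1
      - uses: julia-actions/julia-runtest@v1

Note that scheduled runs are executed against the default branch of the repository.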

By integration tests I mean something like what we do in Manifolds.jl:

Manopt.jl depends on Manifolds.jl (weakly at least), so we also run the Manopt tests in the CI of Manifolds.

I would prefer to have a more minimalistic (maybe overview-type) test one can trigger „forward“, but for the scenario you describe I would recommend adding tests to the packages that break your code that also cover (a bit of) MVector.
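
As a sketch of what such a downstream test could look like (f here is a hypothetical placeholder for any function that accepts an AbstractVector, not code from one of the packages mentioned):

using StaticArrays, Test

# Hypothetical function under test; it stands in for any API
# that is documented to accept an AbstractVector.
f(v::AbstractVector) = sum(abs2, v)

@testset "works for Vector and MVector" begin
    @test f([1.0, 2.0, 3.0]) ≈ 14.0
    # Running the same check with an MVector catches accidental
    # assumptions about the concrete vector type.
    @test f(MVector(1.0, 2.0, 3.0)) ≈ 14.0
end

Running such a test in the dependency’s own CI would surface an MVector regression before a release, instead of in downstream packages.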