ANN: Nanosoldier package evaluation -- with badges!

Hi all,

We’ve been working on improving the new package evaluator (aptly named NewPkgEval.jl) as a tool for testing all registered Julia packages against changes to Julia itself. This functionality has been integrated into Nanosoldier.jl, which was originally created to do the same for performance regressions. A simple @nanosoldier runtests(...) invocation (see the README for more details) can now be used to test a change to Julia against all packages; see here for an example.
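For illustration, posting a comment like the following on a Julia pull request kicks off a run against all registered packages (this mirrors the invocation syntax documented in the Nanosoldier README; check there for the exact form):

@nanosoldier runtests(ALL, vs = ":master")

Here ALL selects every registered package and vs names the baseline build to compare against.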

Once per day, we also run an evaluation of all packages against the current master branch of Julia and compare against the previous daily evaluation. The results of this evaluation are put in the NanosoldierReports repository, and are also used to generate badges that you can use in your package’s README. For example:

[PkgEval badge]

# change E/Example to the initial and name of your package
[pkgeval-img]: https://juliaci.github.io/NanosoldierReports/pkgeval_badges/E/Example.svg
[pkgeval-url]: https://juliaci.github.io/NanosoldierReports/pkgeval_badges/report.html

[![PkgEval][pkgeval-img]][pkgeval-url]

However, the latest daily evaluation only successfully tested 1845 out of 2922 packages. Over 800 packages failed their tests, either because of legitimate test failures or because of issues with the package evaluator. Packages that fail tests cannot be taken into account when evaluating changes to Julia.

It would greatly improve the effectiveness of PkgEval if more packages would pass tests on it!

To make sure your package works with NewPkgEval.jl:

  • Open the latest report and locate your package. There’s a section for every status (success, fail, skip), and results are further grouped according to more specific reasons.

  • If your package fails its tests, please fix them :slight_smile: The NewPkgEval README explains how to replicate the test environment; you only need to install Docker and have permission to launch containers (try docker run hello-world).

  • If your package is missing a system dependency, you can just install it using apt. NewPkgEval uses a plain Ubuntu-based image; refer to the README for more details.

  • If your package takes too long to test (> 1 hour), you can alter its behavior by checking the environment: NewPkgEval sets the CI, PKGEVAL, and JULIA_PKGEVAL environment variables to true. See the sketch after this list for an example.

  • If you can’t or don’t want to support NewPkgEval, e.g. because of unsatisfiable binary dependencies, you can always blacklist the package.
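For example, a minimal sketch of such an environment check in your package’s test/runtests.jl (long_tests.jl is a hypothetical stand-in for whatever expensive tests you want to skip):

# Detect whether the test suite is running under PkgEval.
is_pkgeval = get(ENV, "JULIA_PKGEVAL", "false") == "true"

if is_pkgeval
    @info "Running under PkgEval; skipping long-running tests"
else
    include("long_tests.jl")  # hypothetical file containing the expensive tests
end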

33 Likes

It would be great if packages were listed as username/PackageName.jl and org-name/PackageName.jl instead of only the package name, so that users or orgs with many packages can find theirs more easily.

5 Likes

Awesome stuff, kudos especially for categorizing the reasons for failure.

It’s a good thing you are doing this now; at the current rate, pretty soon “looking through the list and finding your packages” is not going to be easy without search tools :slightly_smiling_face:

Hopefully you can cross 4 off the list: PaddedViews, ImagineFormat, NRRD, and MetaImageFormat. UnalignedVectors is no longer necessary (thanks to ReinterpretArray), and nothing in the General registry depends on it. If we move it to JuliaAttic, is it automatically blacklisted, or do we need to do that explicitly?

1 Like

It’d be nice if there were a method of NewPkgEval.run that takes a vector of PackageSpecs rather than names. That way, package authors could make sure their test suite is NewPkgEval-friendly without releasing it first.
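Purely as a hypothetical sketch (no such method exists today; the signature is imagined), the idea would be something like:

using Pkg: PackageSpec

# Test an unreleased branch of a package under NewPkgEval.
spec = PackageSpec(url = "https://github.com/USER/MyPackage.jl", rev = "my-branch")
NewPkgEval.run([spec])  # imagined method; run currently takes package names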

1 Like

Regarding listing packages by owner: that information is not generally known by the registry.

Regarding moving UnalignedVectors to JuliaAttic: no, we currently try to install all packages from General as long as there is any release compatible with the Julia version being tested. What about adding an upper bound on julia to those packages (e.g. a [compat] entry like julia = "1 - 1.3")? We could always maintain a list of “deprecated” packages manually, of course.

Regarding run taking PackageSpecs: that’s a good suggestion, I’ll make an issue for it.

2 Likes

Isn’t that information encoded in the repo URL?
repo = "https://github.com/username/PackageName.jl.git"
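For instance, a minimal sketch of recovering the owner/name slug from such a URL:

url = "https://github.com/username/PackageName.jl.git"
slug = replace(split(url, "github.com/")[2], r"\.git$" => "")
# slug == "username/PackageName.jl"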

4 Likes

I created a package that generates a dashboard with all your badges, to give a quick overview.

3 Likes