Testing multiple packages in a batch

I have multiple packages that I would like to test more or less automatically.
For an arbitrary one of these I would clone it, start Julia, and do:

cd("Package path")
using Pkg
Pkg.activate(".")
Pkg.test()

I have a suspicion that if I try to do this in a continuously running Julia session, processing an array of packages, it would not be the same as doing the above. In particular, I think that I would have to do dev to “clone” the package, and therefore the running Julia session would have some memory of the packages tested previously. Isn’t that right?

So I suppose a solution would be to process the list of packages in a loop, and fire up a fresh instance of a Julia process to run the test. Has anyone written a tool that would do this yet?
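A minimal sketch of such a driver, assuming the packages are already cloned locally; the `pkgdirs` list and its paths are placeholders. Each iteration spawns a fresh Julia process, so nothing carries over between tests:

```julia
# Hypothetical list of directories containing the cloned packages.
pkgdirs = ["/path/to/PackageA", "/path/to/PackageB"]

for dir in pkgdirs
    # Fire up a fresh Julia process per package; --project activates
    # the package's own environment in that process.
    cmd = `julia --project=$dir -e "using Pkg; Pkg.test()"`
    try
        run(cmd)  # throws if the process exits with a nonzero status
        @info "Tests passed" dir
    catch
        @warn "Tests failed" dir
    end
end
```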

I could be wrong but can’t you do the following,

cd("Package path")
using Pkg
Pkg.activate(".")
Pkg.test()
Pkg.activate() # back to the default environment

cd("Package path #2")
Pkg.activate(".")
Pkg.test()
Pkg.activate() # back to the default environment
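The same pattern could be folded into a loop in a single session; the package paths here are placeholders, and they should be absolute so that successive `cd` calls don't compound:

```julia
using Pkg

# Placeholder absolute paths to the cloned packages.
for dir in ["/abs/path/to/PackageA", "/abs/path/to/PackageB"]
    cd(dir)
    Pkg.activate(".")
    Pkg.test()
    Pkg.activate()  # back to the default environment
end
```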

I think you might be onto something. Perhaps the activation will wipe the slate clean.

I think that's how it works in the REPL; I'm not sure how it'd go in a Julia script or non-interactive session. Give it a try though - that'd be a quick fix if it does what you want.

This should work because test creates a temporary environment. From the Pkg docs:

The tests are run by generating a temporary environment with only pkg and its (recursive) dependencies in it.


I am not sure what you mean here; the REPL history would contain previous commands, but otherwise each Pkg.test creates its own temporary environment and runs in a fresh process. See ?Pkg.test.

The loop could save the startup cost of the outer process, but otherwise test runs are pretty well isolated.

I think the key question is what you want to do with the results: just report a success/failure flag for each package, keep the logs, etc.
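For example, one could collect a success flag and a per-package log file in one session. This is a sketch, not a definitive implementation: `pkgdirs` and the log file location are assumptions, and it relies on `Pkg.test` throwing on failure:

```julia
using Pkg

# Assumed list of absolute paths to the cloned packages.
pkgdirs = ["/abs/path/to/PackageA", "/abs/path/to/PackageB"]

results = Dict{String,Bool}()
for dir in pkgdirs
    Pkg.activate(dir)
    ok = try
        # Redirect test output to a per-package log file.
        open(joinpath(dir, "test.log"), "w") do io
            redirect_stdout(io) do
                Pkg.test()
            end
        end
        true
    catch
        false  # Pkg.test throws when the test suite fails
    end
    results[dir] = ok
end
Pkg.activate()  # back to the default environment
results
```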

This seems to work: https://github.com/PetrKryslUCSD/FinEtoolsTestAll.jl/blob/master/testall.jl
