Built Julia by platform/arch

Hi,
Thank you for the dev effort that goes into making Julia what it is.

Context:
Spinning up ephemeral containers/VMs for running test suites, primarily on Linux.

That developer effort means Julia is evolving rapidly,
which means distro-packaged builds aren’t always available.
Even when they do become available, there is value in allowing power users to tweak their builds.

This raises an issue in test/spec/behavior-driven dev environments.

You’d like to be confident your test environment reflects “your” Julia build, but the build time makes it prohibitive to build Julia each time you spin up your test stack.
There is also more than one version of Julia ‘live’ at any given time.
Furthermore, some projects are fine running on ‘old’ versions, but package updates still need to be tested, and so on.
There can easily be a lot of Julia builds in play in the wild.

I doubt I’m the first to encounter this issue.

What seems ideal is a built ‘julia’ volume that is shared between containers/VMs.
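
As a rough sketch of what I mean, assuming something has already populated a named volume with a built Julia (the volume name, mount path, and image here are hypothetical):

# docker-compose.yml sketch: one prebuilt Julia shared across test containers
version: "3.7"
services:
  tests:
    image: ubuntu:18.04
    volumes:
      - julia-build:/opt/julia:ro              # prebuilt Julia, mounted read-only
    command: /opt/julia/bin/julia -e 'using Pkg; Pkg.test()'
volumes:
  julia-build:
    external: true                             # created and populated out-of-band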

Has anyone beaten this path already?
How are people handling this?

I’m not quite sure what you’d like here. Do you want continuous integration on many different platforms and Julia versions? If so, you should check out PkgTemplates.jl (GitHub - invenia/PkgTemplates.jl), which can generate CI configuration files for Travis (Linux/macOS), AppVeyor (Windows), CirrusCI, and GitLab.

These platforms have standard Julia containers available and maintained, so you don’t need to manage the VM images/containers yourself. For example, having your package tested with Julia 1.0 and 1.2 on Travis (Linux + macOS) is as simple as adding

julia:
  - 1.0
  - 1.2

to the .travis.yml file in your package repository (and you can generate that file using PkgTemplates).
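
For reference, a complete minimal .travis.yml along those lines might look like the following sketch (the os and notifications entries are optional, shown only for illustration):

# minimal .travis.yml sketch; `language: julia` selects the community-maintained
# build script, which installs each requested Julia version
language: julia
os:
  - linux
  - osx
julia:
  - 1.0
  - 1.2
notifications:
  email: false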

Thank you.
I was not aware of the templates in that project.

Apologies for the ambiguity.

To clarify what I had in mind:
There are use cases where you run a platform-specific container on those CI services.
Inside that container is a ‘snowflake’ Julia build (a custom, one-off build).
Inside that container you run a test/audit suite that relies on that snowflake Julia build on that platform (RHEL, Ubuntu, etc.).

I imagine those build services have some Julia container or VM they launch for their infrastructure platform(s).
Whereas the use case here is open-source projects where you’d like to be able to say platforms x1, x2, … xN are green, where N = 14, for example.

Which is why the volume-build-image seemed to me the cleanest approach.
Otherwise the container that built the Julia snowflake would have to carry all the build artifacts, either baked in or supplied via some launch script, and I’d like to avoid that if possible.
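
To make the idea concrete, here is a compose-style sketch under those assumptions; the julia-snowflake image name and the paths are all hypothetical stand-ins:

# sketch: a builder service exports the built Julia into a shared volume,
# then the platform-under-test container consumes it read-only
version: "3.7"
services:
  export-julia:
    image: julia-snowflake:rhel7               # hypothetical image holding the build
    volumes:
      - julia-build:/mnt/julia
    command: cp -a /opt/julia/. /mnt/julia/    # the "launch script" step
  tests:
    image: centos:7                            # the platform under test
    depends_on:
      - export-julia
    volumes:
      - julia-build:/opt/julia:ro
    command: /opt/julia/bin/julia -e 'using Pkg; Pkg.test()'
volumes:
  julia-build:

(In practice the export step would need to finish before the tests run; depends_on only orders container startup.)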

This is pretty much what you get from Travis et al.: it’s free for open-source projects, the community maintains it (see the maintainers mentioned at “Building a Julia Project” in the Travis CI docs, and the build script), and it has a good range of Julia versions available. For example, see the CI build results on Travis CI.

If you want VMs that are set up to build and package Julia itself (e.g., to build Julia on various Linux distros), I’m not sure whether there’s a range of those publicly available; you may need to create them yourself.

Did you have a look at what’s available on Docker Hub (also accessible from the standard Julia download page)?

Thanks @c42f.
I hadn’t read the Travis build script, but it implements the alternative to setting up a Docker volume: just build and host the Julia snowflake tarball, and write a script like @ararslan has.
Where that approach loses the advantage of its simplicity is in composing containers from multiple layers.
Even there it could be made to fit in; it’s just that you’d be rewriting much of Docker’s functionality in shell scripts.
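
Roughly, that tarball-plus-script pattern comes out like this as a CI config fragment; the download URL is a hypothetical host for the snowflake tarball, and the steps just mirror what the Travis build script does:

# sketch: fetch and unpack a hosted snowflake tarball, then run the tests
before_install:
  - curl -fsSL https://example.com/julia-snowflake-linux-x86_64.tar.gz -o julia.tar.gz
  - mkdir -p "$HOME/julia"
  - tar -xzf julia.tar.gz -C "$HOME/julia" --strip-components=1
  - export PATH="$HOME/julia/bin:$PATH"
script:
  - julia -e 'using Pkg; Pkg.test()'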

The docker-library/julia images would likewise require building and hosting a snowflake tarball.
I think I prefer @ararslan’s approach of integrating via shell rather than leaning on the package managers, but I’ll have to see how it works out.

Thanks again.