Julia for HEP now available via CVMFS

I’m happy to announce that we now have a HEP (particle physics) Julia distribution available via CVMFS (thanks for all the friendly help here!).

Two container images

The containers are currently built for x86_64 only, but we’ll definitely add an ARM version as well.

juliahep-base specifically should make for a nice entry point for users - it doesn’t tell users what packages to use, but packs a large bag of important packages in an internal Julia package depot, ready and precompiled. So when users install packages, a lot of stuff is “already there” and will be re-used. This is important because on many HEP computing systems, users have limited home directory quotas and are on networked file systems that don’t like to deal with very large numbers of files.
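
As a rough sketch of what that re-use means in practice (assuming the package in question ships in the container depot, as CairoMakie does here; the throwaway environment is just for illustration):

julia> import Pkg
julia> Pkg.activate(; temp = true)       # throwaway environment, just for illustration
julia> Pkg.add("CairoMakie")             # nothing gets downloaded: sources and artifacts are
                                         # found in the read-only depots provided by the image
julia> Base.find_package("CairoMakie")   # should resolve to a path inside the container depot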

The folks that run the unpacked.cern.ch CVMFS repository have kindly agreed to host the container images (unpacked) on CVMFS (automatically synchronized via the unpacked / sync GitLab project). So now we can run apptainer shell /cvmfs/unpacked.cern.ch/registry.cern.ch/juliahep/juliahep-base\:latest on any system with Apptainer and CVMFS, and Julia is there.

But we don’t even need Apptainer (or another container runtime): One cool thing about Julia is that external dependencies are provided down to basically libc-level. So it’s possible to run the Julia from the unpacked image on CVMFS directly, bare-metal (via a little included wrapper script), on basically any modern Linux with CVMFS:

$ /cvmfs/unpacked.cern.ch/registry.cern.ch/juliahep/juliahep-base\:latest/unpacked/bin/julia

julia> DEPOT_PATH
3-element Vector{String}:
 "/home/username/.julia-juliahep-base"
 "/cvmfs/unpacked.cern.ch/.flat/b" ⋯ 68 bytes ⋯ "pt/julia-1.11/local/share/julia"
 "/cvmfs/unpacked.cern.ch/.flat/b" ⋯ 62 bytes ⋯ "833c/opt/julia-1.11/share/julia"

An advantage of running the unpacked image directly, without Apptainer (besides less complexity), is that one doesn’t need tricks to run the VSCode server component inside the Apptainer instance just so VSCode can read and show files from the depot included in the container image.

Note: This “bare-metal” way of running Julia from CVMFS is not quite “perfect”, as source-file lookup for packages provided via the container images will point to /opt/... instead of /cvmfs/.../opt/.... It’s more of an annoyance (when users want to dig through where stuff comes from, etc.), but maybe something can be done about it.
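
For example, a quick way to observe this (a sketch, assuming Zygote or another prepacked package is loaded; the exact path reported obviously depends on how the image was built):

julia> using Zygote
julia> first(methods(Zygote.gradient)).file   # method source locations are recorded at image build
                                              # time, so they presumably show up under /opt/...
                                              # rather than the corresponding /cvmfs/.../opt/... path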


Hi @oschulz, thanks for the news and the effort.
The more and easier ways there are to run Julia, the better.

What is the difference between

/cvmfs/unpacked.cern.ch/registry.cern.ch/juliahep/juliahep-base:latest/unpacked/bin/julia

and distributions in

ls /cvmfs/sft.cern.ch/lcg/releases/julia/

?

We need a HEP-related blog post, “How I set up my Julia environment…”.

/cvmfs/sft.cern.ch/lcg/releases/julia/ contains only plain Julia releases for various operating systems. I’m not sure who maintains them, actually; they seem to lag behind, release-wise (still at 1.11.3). They also seem to be custom builds, at least some of them: there is a julia.exe in bin even for the Linux releases, though that binary seems to be identical across x86_64-el9-gcc11-opt, x86_64-ubuntu2404-gcc13-opt, etc.

The new juliahep builds use the official Julia binaries and come as full container images. The main point is to have a distribution that prepacks a lot of dependencies, so people can get up and running without a lot of downloading and precompiling on network file systems that are “slow” when dealing with many small files.

This is all brand-new, and we still have to figure out how to automate builds, etc. I put it together for the Julia part of the Advanced Programming Concepts 2025 school at DESY last week, in case participants want to work at DESY NAF (20 GB home dir quota), CERN lxplus, etc.

For example, any user on a Linux system with CVMFS can now just do

$ /cvmfs/unpacked.cern.ch/registry.cern.ch/juliahep/juliahep-base\:latest/unpacked/bin/julia

(@v1.11) pkg> activate @hep-base
(@hep-base) pkg>
julia> using CairoMakie

It needs some time for CVMFS to cache things, but there are zero package downloads and zero precompilation. And when using your own environment instead of the prepacked hep-base env:

$ /cvmfs/unpacked.cern.ch/registry.cern.ch/juliahep/juliahep-base\:latest/unpacked/bin/julia

julia> import Pkg
julia> Pkg.activate(joinpath(first(DEPOT_PATH), "environments", "v$(VERSION.major).$(VERSION.minor)"))
  Activating new project at `~/.julia-juliahep-base/environments/v1.11`

(@v1.11) pkg> add CairoMakie
julia> using CairoMakie

there is also zero package installation. Though now it recompiles CairoMakie, FFMPEG_jll, LERC_jll, Libtiff_jll, libwebp_jll, Makie and WebP for some reason, even though $JULIA_CPU_TARGET is the same as when building the container. I haven’t figured out why yet - for most packages it happily uses the precompilation output stored in the container image. For example, for

$ /cvmfs/unpacked.cern.ch/registry.cern.ch/juliahep/juliahep-base\:latest/unpacked/bin/julia

julia> import Pkg
julia> Pkg.activate(joinpath(first(DEPOT_PATH), "environments", "v$(VERSION.major).$(VERSION.minor)"))
  Activating new project at `~/.julia-juliahep-base/environments/v1.11`

(@v1.11) pkg> add Zygote
julia> using Zygote

there is zero package installation and zero precompilation; it takes everything from the container image.

No idea what’s different for CairoMakie (compiler experts, can you help?).
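
In case someone wants to poke at this, a rough diagnostic sketch (just an idea, not verified to pinpoint the cause): check the CPU target of the session and ask Julia whether it considers the shipped cache file usable before loading the package:

julia> get(ENV, "JULIA_CPU_TARGET", nothing)      # should match the value used when building the image

julia> id = Base.identify_package("CairoMakie")   # PkgId of CairoMakie (requires it in the active env)
julia> Base.isprecompiled(id)                     # false would mean Julia rejects the cache file shipped
                                                  # in the image and precompiles again (Julia ≥ 1.10)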

It’s not custom. I maintain them, more or less (I simply pin them when there’s a new release; I don’t have admin access to the LCG storage itself), and we download the official Julia binaries. The .exe is because we have a script that unsets LD_LIBRARY_PATH, so we need a different name for the Julia binary file.

Thanks Jerry, good to know! Yes, unsetting $LD_LIBRARY_PATH is also what I do in the /unpacked/bin/julia wrapper script.