Julia packages with Singularity

How do we create Singularity containers which contain a reproducible set of packages installed at container build time?
Singularity https://www.sylabs.io/

This discussion follows on from a thread on the Singularity mailing list, which raised some good questions but, I feel, was drifting away from strictly Singularity-specific matters.


Any chance you can link to the mailing list archives, and possibly provide snippets of the relevant parts of the discussion for those of us who aren’t subscribed?

@jpsamaroo But of course, I should have done that. See here, it is quite a long thread.

(link to the Google Groups thread)

Thanks! So, let me see if I understand what’s going on. My Singularity knowledge is rusty and very basic, so please correct me as necessary:

  • You have a container that has julia installed in it (maybe from docker, maybe custom built in Singularity)
  • You then start it up as a non-root user, and when that user attempts to install packages, julia attempts to precompile into files that are under /root, so you get permission denied (aside: does this also occur when sync’ing the registry?)
  • So now instead, you need to work around this by either running everything as root (100% not a good idea, and not what one does in a Singularity container), or by bind mounting a directory from the outside that was built by the appropriate user and/or is mounted with the invoking user set as owner of all files (see the sketch below)
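
For the bind-mount workaround, a minimal sketch (the paths and the use of JULIA_DEPOT_PATH here are my assumptions, not from the original thread; older Singularity versions pass environment via the SINGULARITYENV_ prefix instead of --env):

$ mkdir -p $HOME/container-depot
$ singularity exec --env JULIA_DEPOT_PATH=/depot \
      --bind $HOME/container-depot:/depot \
      image.sif julia -e 'import Pkg; Pkg.add("JSON")'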

Note that, as you’ve said:

create a container with all the build tools which are needed

this is definitely correct if you need to add packages that build binaries, like HDF5. Anything using BinaryProvider alone will probably not need these, but it would make the most sense to have them installed anyway (see the sketch below).
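
As a rough sketch of that idea (the exact package list is an assumption; adjust it to whatever your packages' deps/build.jl scripts actually need):

%post
    apt-get update -y
    # common toolchain for packages that compile native code at build time
    apt-get install -y build-essential cmake pkg-config git wget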

Does this all sound accurate to you?

You have it right!

  • You then start it up as a non-root user, and when that user attempts to install packages, julia attempts to precompile into files that are under /root, so you get permission denied (aside: does this also occur when sync’ing the registry?)
    This is not correct - the original poster there said that he installs the packages as root, and has permissions problems / crashes.

I guess the discussion I would like to broaden is:
As you say, the best method would be to install the required build tools in the container.

My idea of using a bind mount of your home directory negates one advantage of containers - that the installed software is reproducible - since over time you will update your personal packages.
However I ask: if the container is immutable, does Julia try to update packages anyway when it starts?
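
As far as I know, Julia never updates packages on startup; it only precompiles on first use. Reproducibility can instead come from committing both Project.toml and Manifest.toml and instantiating at build time (the project path is a placeholder):

$ julia --project=/path/to/project -e 'import Pkg; Pkg.instantiate()'

Pkg.instantiate() installs exactly the versions recorded in the Manifest, so rebuilding the image reproduces the same package set.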

There was a question asked yesterday on the Singularity list regarding building Tensorflow.
The reply was that multi-stage builds will probably come out with the 3.2.0 release.
Maybe that is what we need. (link to the Google Groups thread)

I’m also interested in this topic. I help to maintain a bioinformatics tool called ‘MentaLiST’ that is written in Julia:

https://github.com/WGS-TB/MentaLiST

I’ve also struggled to build Docker and/or Singularity images for similar reasons.

This is where I’m at so far but I’m also getting some permissions errors when attempting to run the image.

https://gist.github.com/dfornika/53d593a535421115f01d90b8ef69ad2b

I’m also working on bioinformatics tools, and I would like to be able to containerize them with singularity.

For the moment, I have only written one tool with Julia; it works on my local workstation, but it is still very unclear to me how such things should be organized and distributed.

For my container, I start with an official docker image that provides Julia. My current issue is how to get the dependencies of my project. Here is what I tried:

Bootstrap:docker
From:julia:1.2-buster

%post
    apt-get update -y
    apt-get install -y git
    mkdir -p /usr/local/src
    cd /usr/local/src
    git clone https://qaf_demux:KU13FfM3kLyeCrWpD2ZG@gitlab.pasteur.fr/bli/qaf_demux.git
    cd qaf_demux/Julia/QafDemux/
    /usr/local/julia/bin/julia --project=. --eval 'import Pkg; Pkg.instantiate()'
    apt-get remove -y git
    apt-get autoremove -y
    apt-get clean -y

%environment
    export LC_ALL=C
    export PATH=/usr/local/src/qaf_demux/Julia/QafDemux/bin:"${PATH}"

%runscript
    exec /usr/local/src/qaf_demux/Julia/QafDemux/bin/qaf_demux.sh "$@"

The build crashes as follows:

+ cd qaf_demux/Julia/QafDemux/
+ /usr/local/julia/bin/julia --project=. --eval import Pkg; Pkg.instantiate()
   Cloning default registries into `~/.julia`
   Cloning registry from "https://github.com/JuliaRegistries/General.git"
     Added registry `General` to `~/.julia/registries/General`
  Updating registry at `~/.julia/registries/General`
  Updating git-repo `https://github.com/JuliaRegistries/General.git`
ERROR: Package FASTX [c2308a5c-f048-11e8-3e8a-31650f418d12] not found in a registry.
Stacktrace:
 [1] pkgerror(::String) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.2/Pkg/src/Types.jl:112
 [2] check_registered(::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.2/Pkg/src/Operations.jl:924
 [3] up(::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}, ::Pkg.Types.UpgradeLevel) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.2/Pkg/src/Operations.jl:1043
 [4] #up#43(::Pkg.Types.UpgradeLevel, ::Pkg.Types.PackageMode, ::Bool, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(Pkg.API.up), ::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.2/Pkg/src/API.jl:167
 [5] #up#38 at ./none:0 [inlined]
 [6] #up at ./none:0 [inlined]
 [7] #instantiate#81(::Nothing, ::Bool, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(Pkg.API.instantiate), ::Pkg.Types.Context) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.2/Pkg/src/API.jl:463
 [8] instantiate at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.2/Pkg/src/API.jl:461 [inlined]
 [9] #instantiate#80 at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.2/Pkg/src/API.jl:458 [inlined]
 [10] instantiate() at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.2/Pkg/src/API.jl:458
 [11] top-level scope at none:1
FATAL:   failed to execute %post proc: exit status 1
FATAL:   While performing build: while running engine: exit status 255

If I simply run the original image, as non-root user, it seems that there is no problem finding the required packages:

$ singularity shell docker://julia:1.2-buster
Singularity julia_1.2-buster.sif:~/src/qaf_demux/Julia/QafDemux> cd tmp/qaf_demux/Julia/QafDemux/
Singularity julia_1.2-buster.sif:~/src/qaf_demux/Julia/QafDemux/tmp/qaf_demux/Julia/QafDemux> julia --project=. --eval 'import Pkg; Pkg.instantiate()'
  Updating registry at `~/.julia/registries/BioJuliaRegistry`
  Updating git-repo `https://github.com/BioJulia/BioJuliaRegistry.git`
  Updating registry at `~/.julia/registries/General`
  Updating git-repo `https://github.com/JuliaRegistries/General.git`
 Resolving package versions...
  Updating `~/src/qaf_demux/Julia/QafDemux/tmp/qaf_demux/Julia/QafDemux/Project.toml`
  [c7e460c6] + ArgParse v0.6.2
  [944b1d66] + CodecZlib v0.6.0
  [e30172f5] + Documenter v0.23.3
  [c2308a5c] + FASTX v1.1.0
  [c8e1da08] + IterTools v1.2.0
  [3bb67fe8] + TranscodingStreams v0.9.5
  [44cfe95a] + Pkg 
  Updating `~/src/qaf_demux/Julia/QafDemux/tmp/qaf_demux/Julia/QafDemux/Manifest.toml`
  [c7e460c6] + ArgParse v0.6.2
  [67c07d97] + Automa v0.7.0
  [b99e7846] + BinaryProvider v0.5.6
  [47718e42] + BioGenerics v0.1.0
  [7e6ae17a] + BioSequences v2.0.0
  [3c28c6f8] + BioSymbols v4.0.1
  [944b1d66] + CodecZlib v0.6.0
  [861a8166] + Combinatorics v0.7.0
  [34da2185] + Compat v2.1.0
  [864edb3b] + DataStructures v0.17.0
  [ffbed154] + DocStringExtensions v0.8.0
  [e30172f5] + Documenter v0.23.3
  [c2308a5c] + FASTX v1.1.0
  [1cb3b9ac] + IndexableBitVectors v1.0.0
  [c8e1da08] + IterTools v1.2.0
  [682c06a0] + JSON v0.21.0
  [bac558e1] + OrderedCollections v1.1.0
  [69de0a69] + Parsers v0.3.7
  [f27b6e38] + Polynomials v0.5.2
  [b718987f] + TextWrap v0.3.0
  [3bb67fe8] + TranscodingStreams v0.9.5
  [7200193e] + Twiddle v1.1.1
  [2a0f44e3] + Base64 
  [ade2ca70] + Dates 
  [8bb1440f] + DelimitedFiles 
  [8ba89e20] + Distributed 
  [b77e0a4c] + InteractiveUtils 
  [76f85450] + LibGit2 
  [8f399da3] + Libdl 
  [37e2e46d] + LinearAlgebra 
  [56ddb016] + Logging 
  [d6f4376e] + Markdown 
  [a63ad114] + Mmap 
  [44cfe95a] + Pkg 
  [de0858da] + Printf 
  [3fa0cd96] + REPL 
  [9a3f8284] + Random 
  [ea8e919c] + SHA 
  [9e88b42a] + Serialization 
  [1a1011a3] + SharedArrays 
  [6462fe0b] + Sockets 
  [2f01184e] + SparseArrays 
  [10745b16] + Statistics 
  [8dfed614] + Test 
  [cf7118a7] + UUIDs 
  [4ec0a83e] + Unicode

If I try a similar thing as root, then the same failure to find package FASTX happens as in the %post phase of the custom image build. Note that in the non-root run above, the registry list includes BioJuliaRegistry from my existing ~/.julia, while root's fresh depot only clones General - which is presumably why FASTX (a BioJulia package) cannot be found.
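
The minimal fix, then, is to add that registry in %post before instantiating, which is exactly what the definition file in the next post does:

julia -e 'import Pkg; Pkg.Registry.add(Pkg.RegistrySpec(url="https://github.com/BioJulia/BioJuliaRegistry.git"))'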

Thanks to this stackoverflow answer, I now have a working definition file:

Bootstrap:docker
From:julia:1.3-buster

%post
    apt-get update -y
    apt-get install -y git
    apt-get install -y wget
    mkdir -p /usr/local/src/git-lfs
    cd /usr/local/src/git-lfs
    wget https://github.com/git-lfs/git-lfs/releases/download/v2.8.0/git-lfs-linux-amd64-v2.8.0.tar.gz
    tar -xvzf git-lfs-linux-amd64-v2.8.0.tar.gz
    ./install.sh
    cd ..
    git clone https://gitlab.pasteur.fr/bli/qaf_demux.git
    cd qaf_demux/Julia/QafDemux/
    /usr/local/julia/bin/julia --project=. --eval 'import Pkg; Pkg.Registry.add(Pkg.RegistrySpec(; url="https://github.com/JuliaRegistries/General.git")); Pkg.Registry.add(Pkg.RegistrySpec(; url="https://github.com/BioJulia/BioJuliaRegistry.git")); Pkg.activate("."); Pkg.instantiate()'
    export PATH="/usr/local/julia/bin:${PATH}"
    export PATH=/usr/local/src/qaf_demux/Julia/QafDemux/bin:"${PATH}"
    # Test and trigger pre-compiling?
    qaf_demux.sh --help > /dev/null
    apt-get remove -y git
    apt-get autoremove -y
    apt-get clean -y

%environment
    export LC_ALL=C
    export PATH=/usr/local/src/qaf_demux/Julia/QafDemux/bin:"${PATH}"

%runscript
    exec /usr/local/src/qaf_demux/Julia/QafDemux/bin/qaf_demux.sh "$@"
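
For reference, building and running it would look something like this (file names are placeholders):

$ sudo singularity build qaf_demux.sif qaf_demux.def
$ singularity run qaf_demux.sif --help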

However, it seems that the pre-compiling step by root is useless, because when I try to run the container, the same compilation steps happen again:

[ Info: Compiling bit-parallel GC counter for LongSequence{<:NucleicAcidAlphabet}
[ Info: Compiling bit-parallel mismatch counter for LongSequence{<:NucleicAcidAlphabet}
[ Info: Compiling bit-parallel match counter for LongSequence{<:NucleicAcidAlphabet}
[ Info: Compiling bit-parallel ambiguity counter for LongSequence{<:NucleicAcidAlphabet}
[ Info: Compiling bit-parallel certainty counter for LongSequence{<:NucleicAcidAlphabet}
[ Info: Compiling bit-parallel gap counter for LongSequence{<:NucleicAcidAlphabet}

I’ve seen mentions of a possibility to precompile using Pkg, but found no mention of such a thing in API Reference · Pkg.jl. And indeed, Pkg.compile() is not a valid command. It looks like precompile is only a REPL command.
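
One way to trigger precompilation without the Pkg REPL is simply loading the code, since using compiles a package and its dependencies on first load (assuming the project's top-level module is called QafDemux):

$ julia --project=. -e 'using QafDemux'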

Moreover, I’m not even sure this would make the precompiled stuff visible to the final user.
Is it possible?


I changed my approach: I decided to try PackageCompiler, with the following definition file:

Bootstrap:docker
From:julia:1.2-buster

%post
    apt-get update -y
    apt-get install -y git
    apt-get install -y wget
    apt-get install -y build-essential
    mkdir -p /usr/local/src/git-lfs
    cd /usr/local/src/git-lfs
    wget https://github.com/git-lfs/git-lfs/releases/download/v2.8.0/git-lfs-linux-amd64-v2.8.0.tar.gz
    tar -xvzf git-lfs-linux-amd64-v2.8.0.tar.gz
    ./install.sh
    cd ..
    git clone https://gitlab.pasteur.fr/bli/qaf_demux.git
    cd qaf_demux/Julia/QafDemux/
    /usr/local/julia/bin/julia --project=. --eval 'import Pkg; Pkg.Registry.add(Pkg.RegistrySpec(; url="https://github.com/JuliaRegistries/General.git")); Pkg.Registry.add(Pkg.RegistrySpec(; url="https://github.com/BioJulia/BioJuliaRegistry.git")); Pkg.activate("."); Pkg.instantiate(); Pkg.build()'
    strip deps/builddir/qaf_demux
    # TODO: install just the bin and needed libs in a standard location
    export PATH="/usr/local/julia/bin:${PATH}"
    export PATH=/usr/local/src/qaf_demux/Julia/QafDemux/bin:"${PATH}"
    export PATH=/usr/local/src/qaf_demux/Julia/QafDemux/deps/builddir:"${PATH}"
    # Test and trigger pre-compiling?
    # qaf_demux.sh --help > /dev/null
    which qaf_demux
    qaf_demux --help
    # TODO: include tests with test data
    apt-get remove -y git
    apt-get autoremove -y
    apt-get clean -y
    rm -rf /usr/local/src/git-lfs

%environment
    export LC_ALL=C
    export PATH=/usr/local/src/qaf_demux/Julia/QafDemux/deps/builddir:"${PATH}"

%runscript
    exec /usr/local/src/qaf_demux/Julia/QafDemux/deps/builddir/qaf_demux "$@"

It doesn’t work with the julia:1.3-buster docker image, apparently due to some compilation errors involving LLVM.

PackageCompiler is used in the Pkg.build() step, via the following build script:

$ cat deps/build.jl
import Pkg
println("Building qaf_demux")
Pkg.add("ArgParse")
Pkg.add("IterTools")
Pkg.add("FASTX")
Pkg.add("CodecZlib")
Pkg.add("REPL")
Pkg.add("PackageCompiler")
using PackageCompiler
build_executable(joinpath(@__DIR__, "../bin/qaf_demux.jl"), "qaf_demux")
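
Side note: build_executable is the old PackageCompiler API. In PackageCompiler 1.0 and later, the rough equivalent - if the script is restructured as a package exposing a julia_main function - would be create_app, sketched here with placeholder paths:

using PackageCompiler
# compile the package at the first path into a standalone app at the second
create_app("path/to/QafDemux", "path/to/QafDemuxCompiled")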

I now have something that can run on our CentOS-based computing cluster, where Debian buster based containers do not work. The following Docker container can be used, however: https://github.com/docker-library/julia/blob/master/1.2/stretch/Dockerfile

Since it was not yet available on Docker Hub, I had to build it locally, push it to a local registry, and use that local registry in my Singularity definition file. The following instructions were useful in this regard: https://github.com/sylabs/singularity/issues/1537#issuecomment-388642244

The Singularity definition file is there: Files · master · Blaise LI / qaf_demux · GitLab (as of commit 10557266)

And I also had to use cpu_target="x86_64" in the compilation options in the deps/build.jl file:

build_executable(joinpath(@__DIR__, "../bin/qaf_demux.jl"), "qaf_demux", cpu_target="x86_64")

Otherwise, the container would not run on the computing cluster.

Now I face permission issues at run time.

Just running the container with --help worked, but when I want to actually use the program, I encounter a new issue:

fatal: error thrown and no exception handler available.
ErrorException("error compiling Type: error compiling #GzipCompressorStream#2: error compiling #TranscodingStream#5: error compiling deflate_init!: could not load library "/root/.julia/packages/CodecZlib/5t9zO/deps/usr/lib/libz.so"
/root/.julia/packages/CodecZlib/5t9zO/deps/usr/lib/libz.so: cannot open shared object file: Permission denied")

I tried to change the permissions of the /root folder during the %post phase of the container build:

chmod -R a+rX /root

This seems to have solved the issue, but other failures happen next.
To be continued…

I included a test on real data at the end of the %post phase, and I get the same error as when running the container:

fatal: error thrown and no exception handler available.
#<null>
rec_backtrace at /buildworker/worker/package_linux64/build/src/stackwalk.c:94
record_backtrace at /buildworker/worker/package_linux64/build/src/task.c:219 [inlined]
jl_throw at /buildworker/worker/package_linux64/build/src/task.c:429
check_channel_state at ./channels.jl:117 [inlined]
take_unbuffered at ./channels.jl:366
take! at ./channels.jl:344 [inlined]
iterate at ./channels.jl:410
iterate at ./channels.jl:409 [inlined]
#3 at /usr/local/src/qaf_demux/Julia/QafDemux/bin/qaf_demux.jl:104
make_record_writers at /usr/local/src/qaf_demux/Julia/QafDemux/src/QafDemux.jl:258
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2197
julia_main at /usr/local/src/qaf_demux/Julia/QafDemux/bin/qaf_demux.jl:101
julia_main at /usr/local/qaf_demux/qaf_demux.so (unknown line)
main at qaf_demux (unknown line)
__libc_start_main at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
_start at qaf_demux (unknown line)

A test using the non-compiled version does not fail, so it is a PackageCompiler-related issue.


Here are the Docker and Singularity images that we use on some of our clusters: PredictMD-docker - Docker and Singularity images for PredictMD

Hm, I am also struggling. My use case is creating a simple Singularity container with a preinstalled custom Julia module - shouldn’t be too difficult, should it? x)

You can install the package during the %post section into a location that will be read-only at run time, with something like this:

Bootstrap: docker
From: julia:1.3

%environment
    export JULIA_DEPOT_PATH=:/opt/julia

%post
    # Julia packages
    export JULIA_DEPOT_PATH=/opt/julia
    export PATH=/usr/local/julia/bin:$PATH

    julia -e 'using Pkg;pkg"add JSON"'

    # Permissions: make the depot readable by non-root users
    chmod -R 645 /opt/julia

Note the :/opt/julia part in the %environment section. This will add the custom location as a second depot path after the default one (Environment Variables · The Julia Language).
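
You can check how the leading empty entry expands by printing DEPOT_PATH; the exact list varies by Julia version, but on a typical Linux setup it looks roughly like:

$ JULIA_DEPOT_PATH=:/opt/julia julia -e 'foreach(println, DEPOT_PATH)'
/home/user/.julia
(... any system depots bundled with Julia ...)
/opt/julia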


I just encountered the same problems, and I believe I have found a workable solution.

First of all, the command to do the precompilation during build is Base.compilecache, which is a bit particular in that it requires a package identifier as argument, so I found that e.g.

Base.compilecache(Base.identify_package("FileIO"))

works to precompile.

That being said, for optimal performance the precompilation should probably happen on the system where the container is running, not on the system the image was built on (think AMD vs Intel, register length, …).
So I found that setting JULIA_DEPOT_PATH as follows made it work:

%environment
    export JULIA_DEPOT_PATH=$PWD/precompile:/opt/.julia
    export PATH=/opt/julia/bin:$PATH

%post
    export JULIA_DEPOT_PATH=/opt/.julia
    export PATH=/opt/julia/bin:$PATH

This causes all packages added during build to go to /opt/.julia, but the runtime environment variable is prepended with the folder precompile in the current working directory. That directory is writable, since the current working directory is a default bind path, and it will be used first because it was prepended. In addition, the packages from the build can still be found by julia.
Obviously, any other default or custom bind path would work in the same fashion.
This way, packages can be precompiled, but not added, because the core manifest is still read-only.
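
With that setup, precompiling at run time looks something like this (image name and package are placeholders; the .ji cache files land under the bind-mounted precompile directory):

$ singularity exec image.sif julia -e 'Base.compilecache(Base.identify_package("FileIO"))'
$ ls precompile/compiled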

I suppose the alternative would be to set up a persistent overlay, but given that this overlay seems to live in some separate bit of storage on the host, I don’t see much of a difference. It might be superior if the code being run needs to change files all over.
However, given that it needs sudo access (for using a directory as overlay), or an extra dependency on dd for the ext3 storage, I think it’s too complicated for me.

The possibly bad side effect is that files now clutter the host, but I think this is actually good, as it means that the necessary precompile files live with the image on the host for repeated runs, and I am probably going to .gitignore them if need be.

Edit:
I just found out that mixing these things is not a great idea. If you precompile one package, this also precompiles all its dependencies (obviously). However, if some other package uses some of the same dependencies, but has not been precompiled during build, it will fail during the container run. The problem appears to be one of permissions: building happens as root, and as such non-root users can’t access the precompilation cache.


We are now using SimpleContainerGenerator to generate our Docker images:

It automatically uses PackageCompilerX.jl to create a custom system image, which solves a lot of these precompilation-related problems.

It generates Docker images, but you can easily convert Docker images to Singularity images: Support for Docker and OCI — Singularity container 3.5 documentation

I think this solution has also solved all the permissions-related issues. I.e. there are no problems using the Julia packages inside the image as a non-root user.


This is very interesting, and potentially very useful. Using this, I am assuming the workflow for porting a computing task to a server would be something like

  1. Run SimpleContainerGenerator, ideally parsing the Project.toml
  2. Run Docker to build the image
  3. Run Singularity to wrap the Docker image
  4. Push to Singularity Library/ Hub
  5. Pull on server, run compute job

It would also mean that the personal machine needs to have both Docker and Singularity installed, and the Docker daemon running, right?

I think the same concept would be applicable to generate pure Singularity files, and would really only require the effort to write the necessary script generation?

My final question is, at what point (if at all) do architecture optimizations happen? I do vaguely remember hearing somewhere that precompiling only lowers the code, and that there is another step after it.
How does that compare to PackageCompiler?

I also found that the following makes everything work.

julia -e 'using Pkg; for pkg in collect(keys(Pkg.installed()))
        Base.compilecache(Base.identify_package(pkg))
    end'
chmod -R a+rX $JULIA_DEPOT_PATH

Just an update: this produces a warning now:

julia -e 'using Pkg; for pkg in collect(keys(Pkg.installed()))
        Base.compilecache(Base.identify_package(pkg))
    end'

┌ Warning: Pkg.installed() is deprecated
└ @ Pkg /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/Pkg/src/Pkg.jl:554
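
A possible non-deprecated variant uses Pkg.dependencies(), available in newer Julia versions; this is only a sketch, restricted to direct dependencies:

julia -e 'using Pkg
    for (uuid, dep) in Pkg.dependencies()
        dep.is_direct_dep || continue
        # build the package identifier from its UUID and name;
        # you may want to skip stdlibs, which are already in the system image
        Base.compilecache(Base.PkgId(uuid, dep.name))
    end'

On recent Julia versions, julia -e 'using Pkg; Pkg.precompile()' achieves the same more simply.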

I was stuck with the same problem, and SebastianM-C’s solution was most helpful. I have summarised my solution steps briefly here for others.

For my case, the environment allows access to the internet, and I can run Pkg.instantiate() in the Singularity container.

The key to running code in the Singularity container is to avoid creating depot files in the location baked into the image (/opt/julia). Within the Singularity image, everything other than the mounted directories is read-only. So, when creating the image, let the installed files go to /opt/julia, and when running code in the container, let newly created files go to a mounted directory, while reusing the existing resources in the image.

Therefore, first create the Docker image and then the Singularity image; I simply created the Singularity image with docker2singularity. Then, when running the code in Singularity, set JULIA_DEPOT_PATH="/workdir/.julia::/opt/julia". Here /workdir is my mount point, and we need the two colons (an empty entry) so that the default depot paths are kept between the new entry and the one in the image. See the Julia Environment Variables documentation, and Environment and Metadata in the SingularityCE documentation. The command becomes singularity exec --env JULIA_DEPOT_PATH="/workdir/.julia::/opt/julia" image.sif /bin/sh main.sh, where main.sh contains the Julia execution code.
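
Spelled out, with the host path for the bind mount as a placeholder:

$ singularity exec --bind /path/on/host:/workdir \
      --env JULIA_DEPOT_PATH="/workdir/.julia::/opt/julia" \
      image.sif /bin/sh main.sh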
