To elaborate, this is the source code of the C function used to stat files: it's basically using libuv's uv_fs_stat.
haha indeed
Looking into the virtualisation, I could figure it out. To give users access to different software on different machines, the admins set up modulefiles, through which the environment can be changed on-the-fly, e.g. by adding an entry to PATH. Thus users don't have to set up software on their own and can switch between versions.
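As a rough sketch of that mechanism (all paths below are invented for illustration), loading a module essentially just edits environment variables such as PATH for the current shell:

```shell
# Hedged sketch: this mimics what a modulefile typically does.
# The directory is made up for this example.
DEMO_ROOT=/tmp/demo_modules/julia/1.10.0
mkdir -p "$DEMO_ROOT/bin"

# "module load julia" would effectively prepend the tool's bin dir:
export PATH="$DEMO_ROOT/bin:$PATH"

# The first PATH entry now points at the loaded version:
echo "$PATH" | cut -d: -f1
```

Unloading the module would simply strip that entry again, which is why switching versions is instant.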
In my case module load julia added a symlink named julia to a directory on PATH. That symlink points to a Singularity wrapper that runs a Singularity container in which Julia is installed. Of course, within the container the "new" driver library cannot be found unless it is bound/mounted. So adding
export SINGULARITY_BIND="${SINGULARITY_BIND},/lib/x86_64-linux-gnu/libcuda.so.535.161.08:/.singularity.d/libs/libcuda.so"
to my .bashrc mounts the driver and enables Julia to find libcuda.so within the container. Consequently, using Base.Libc.Libdl; dlopen("libcuda.so") works, and CUDA.functional() returns true.
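To illustrate the symlink-to-wrapper indirection described above (all paths here are made up, and the stand-in wrapper just echoes; the real one would invoke something like singularity exec on the Julia image):

```shell
# Hedged sketch of the symlink -> wrapper -> container chain.
mkdir -p /tmp/demo_bin

# A stand-in for the site's Singularity wrapper script; the real one
# would do something like: singularity exec julia.sif julia "$@"
cat > /tmp/demo_bin/julia_wrapper.sh <<'EOF'
#!/bin/sh
echo "pretending to run julia inside the container"
EOF
chmod +x /tmp/demo_bin/julia_wrapper.sh

# "module load julia" made a symlink like this visible on PATH:
ln -sf /tmp/demo_bin/julia_wrapper.sh /tmp/demo_bin/julia

# Resolving the symlink reveals the wrapper, and running the "julia"
# command actually runs the wrapper:
readlink /tmp/demo_bin/julia
/tmp/demo_bin/julia
```

This is why plain C test programs behaved differently: they never went through the wrapper, so they ran on the host rather than inside the container.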
In hindsight, I should have looked into that sooner.
Thank both of you again for your time and advice!
When you were running the C programs to try to reproduce the issue, you were outside of the Singularity container, which explains why they were working fine?
Exactly! Only when using julia is the container run, with julia executed from inside it.