Cannot add packages on a supercomputer

Hi all. I am new to HPC, so I need your help.
I encountered the following errors on a supercomputer.
Unfortunately, the supercomputer does not provide Julia, but it does permit
manual installation, so I installed Julia using juliaup.
The installation was successful, but I cannot precompile any packages.

Does anyone have a similar problem?
Or should I report this to llvm-project?
Thank you in advance.

  Installing known registries into `~/.julia`
    Updating registry at `~/.julia/registries/General.toml`
   Resolving package versions...
   Installed Parsers ───────── v2.8.1
   Installed JSON ──────────── v0.21.4
   Installed BenchmarkTools ── v1.5.0
   Installed Preferences ───── v1.4.3
   Installed PrecompileTools ─ v1.2.1
  No Changes to `~/dev/test/Project.toml`
  No Changes to `~/dev/test/Manifest.toml`
Precompiling project...
terminate called after throwing an instance of 'std::system_error'
  what():  Resource temporarily unavailable
PLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace.
lld: error: failed to write to the output file: No such file or directory
 #0 0x00007fa7f4eaa44f PrintStackTraceSignalHandler(void*) Signals.cpp:0:0
 #1 0x00007fa7f4ea7dfc SignalHandler(int) Signals.cpp:0:0
 #2 0x00007fa7f3e54db0 __restore_rt (/lib64/libc.so.6+0x54db0)
 #3 0x00007fa7f3ea154c __pthread_kill_implementation (/lib64/libc.so.6+0xa154c)
 #4 0x00007fa7f3e54d06 gsignal (/lib64/libc.so.6+0x54d06)
 #5 0x00007fa7f3e287f3 abort (/lib64/libc.so.6+0x287f3)
 #6 0x00007fa7f42619bb __gnu_cxx::__verbose_terminate_handler() (.cold) /workspace/srcdir/gcc-13.2.0/libstdc++-v3/libsupc++/vterminate.cc:75:10
 #7 0x00007fa7f427136a __cxxabiv1::__terminate(void (*)()) /workspace/srcdir/gcc-13.2.0/libstdc++-v3/libsupc++/eh_terminate.cc:48:15
 #8 0x00007fa7f42713d5 (/home/6/username/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/lib/julia/libstdc++.so.6+0xb83d5)
 #9 0x00007fa7f4271627 (/home/6/username/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/lib/julia/libstdc++.so.6+0xb8627)
#10 0x00007fa7f4264627 std::__throw_system_error(int) /workspace/srcdir/gcc-13.2.0/libstdc++-v3/src/c++11/system_error.cc:533:5
#11 0x00007fa7f429deb5 (/home/6/username/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/lib/julia/libstdc++.so.6+0xe4eb5)
#12 0x00007fa7f4df3af3 std::thread::_State_impl<std::thread::_Invoker<std::tuple<llvm::parallel::detail::(anonymous namespace)::ThreadPoolExecutor::ThreadPoolExecutor(llvm::ThreadPoolStrategy)::'lambda'()>>>::_M_run() Parallel.cpp:0:0
#13 0x00007fa7f429db23 std::default_delete<std::thread::_State>::operator()(std::thread::_State*) const /workspace/srcdir/gcc_build/x86_64-linux-gnu/libstdc++-v3/include/bits/unique_ptr.h:99:2
#14 0x00007fa7f429db23 std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State>>::~unique_ptr() /workspace/srcdir/gcc_build/x86_64-linux-gnu/libstdc++-v3/include/bits/unique_ptr.h:404:17
#15 0x00007fa7f429db23 execute_native_thread_routine /workspace/srcdir/gcc-13.2.0/libstdc++-v3/src/c++11/thread.cc:106:5
#16 0x00007fa7f3e9f802 start_thread (/lib64/libc.so.6+0x9f802)
#17 0x00007fa7f3e3f450 __GI___clone3 (/lib64/libc.so.6+0x3f450)
terminate called after throwing an instance of 'std::system_error'
  what():  Resource temporarily unavailable
PLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace.
terminate called recursively
  ◒ CompilerSupportLibraries_jll
  ◐ Preferences

1 Like

Did you simply try to precompile the project again (using ]precompile)? To me the error looks like it could be a filesystem issue, and I know that these HPC distributed file systems can be a bit fragile at times.

The login nodes on some clusters have very tight resource limits, and I need to request an interactive session from the queue system to even precompile a package.
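For reference, a minimal sketch of requesting such an interactive session, assuming the cluster uses Slurm (the scheduler, time limit, and resources are placeholders and will differ per site):

    # Request a short interactive shell on a compute node (Slurm; adjust options to your cluster)
    srun --time=00:30:00 --cpus-per-task=4 --mem=8G --pty bash
    # Then, inside the session, precompile as usual:
    julia --project -e 'using Pkg; Pkg.precompile()'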

Thank you for replying.

  • I tried ]precompile, but it failed. Perhaps the filesystem is Lustre?
  • I can precompile when I request a compute node.

I may continue to ask rudimentary questions in the future.
Thanks for your kind response.

1 Like

You don’t necessarily have to go to a compute node; you can also set the environment variable JULIA_NUM_PRECOMPILE_TASKS to a low value (like 1, 2, or 3) so that it doesn’t trigger whatever restriction is in place. BTW, this was also mentioned in Julia precompiling error on cluster · Issue #68192 · llvm/llvm-project · GitHub, Crash When Attempting to Install Packages on HPC Environment · Issue #52220 · JuliaLang/julia · GitHub, and loads of duplicate issues.
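For example, a minimal sketch of what that could look like on the login node (the value 1 is just an illustration; pick whatever stays under your cluster's limits):

    # Limit Pkg to a single parallel precompilation job so the login node's
    # process/thread limits are not exceeded
    export JULIA_NUM_PRECOMPILE_TASKS=1
    julia --project -e 'using Pkg; Pkg.precompile()'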

4 Likes

I will add this information to the FAQ section of https://juliahpc.github.io.

2 Likes

Thanks, improving documentation is probably the easiest thing to do in the short term. I wish we could more proactively check which restrictions are in place and adapt the number of parallel precompilation jobs accordingly, but the problem is that it may be complicated to detect those restrictions programmatically, since different systems may be used. Also, lld crashes with a very confusing error message.

1 Like

I wish juliaup could accept default environment variable settings, stored in some config file, to apply when launching Julia; that would be useful in this kind of situation.

Sounds like you may want something like direnv.
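As an illustration, assuming direnv is installed and hooked into your shell, a per-directory .envrc could look like this (the value 2 is just an example):

    # .envrc in the project directory; run `direnv allow` once to approve it
    export JULIA_NUM_PRECOMPILE_TASKS=2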

1 Like

I don’t think in this specific case you want to do this per directory, but rather per node: reducing the number of precompilation tasks is needed only on login nodes, not on compute nodes.

1 Like

Just set the environment variable in your bashrc with an if to check if you’re on a login node.
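Something along these lines, assuming login nodes can be recognized by their hostname (the "login" prefix is just a placeholder; adjust it for your cluster):

    # In ~/.bashrc
    if [[ "$(hostname)" == login* ]]; then
        export JULIA_NUM_PRECOMPILE_TASKS=1
    fi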

3 Likes

Yeah, a config file for juliaup wouldn’t actually help here, since it wouldn’t be able to check whether you’re on a login node or not. So the next feature request would be running code in the config file… and voilà, you have accidentally created a (bad) programming language in your config files. You can use shell scripting for anything programmatic, and tools like direnv if you want to change behavior on a per-project basis.

@giordano Thank you for the more accurate information. That method worked. Sorry for the delay in replying, and for the lack of prior research.