I’ve set up a Dockerfile that builds a Julia (and Python) environment for a tutorial that should run on https://mybinder.org. The reason I’m using a Dockerfile instead of relying on Binder to set everything up from the Project.toml/Manifest.toml is that the setup and precompilation are more than the Binder nodes can handle. So I let GitHub CI build the Docker image, and Binder then uses that pre-built image.
In the Docker image, I make sure to activate the project (in the home folder) and instantiate it, so that everything should be precompiled. However, when I load the repo on Binder, it appears to forget about the precompilation: I open a new terminal inside JupyterLab, type `julia --project=.`, then `] instantiate`, and watch it start precompiling 432 packages for the next half hour. What gives? There are precompilation files in ~/.julia/compiled, and there doesn’t seem to be a permissions problem. How or why does Julia decide that it needs to compile again? Any tips for getting around this?
My educated guess is a different CPU target; you can set JULIA_CPU_TARGET to avoid this.
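For example, something like this in the Dockerfile (a sketch — the multi-target string shown here is the one the official Julia x86-64 binaries are built with; the key point is that the same JULIA_CPU_TARGET must be in effect both when the image precompiles and when the Binder session later runs Julia):

```dockerfile
# Set the CPU target *before* precompiling, and keep it set at runtime,
# so the cache files built here are considered valid on the Binder node.
# "generic" plus a couple of cloned targets covers a range of CPUs.
ENV JULIA_CPU_TARGET="generic;sandybridge,-xsaveopt,clone_all;haswell,-rdrnd,base(1)"

# Precompile everything under that target during the image build.
RUN julia --project=. -e 'using Pkg; Pkg.instantiate(); Pkg.precompile()'
```

Since ENV persists into the running container, the Julia process that Binder launches sees the same target and should accept the existing files in ~/.julia/compiled instead of precompiling from scratch.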
If you know what CPU Binder runs on, you can use a slightly higher baseline than generic (but if you guess wrong, you’re back where you started).
Since I can access both a shell and a Julia REPL on Binder: is there any way I can find out what CPU it uses? (That is, what would be the most appropriate JULIA_CPU_TARGET?)
Sys.CPU_NAME, although that’s not perfect: the name comes from LLVM, but the target parsing is done separately inside Julia, so there may be divergences. Also consider the possibility that Binder uses a pool of different CPUs, so the ISA of one CPU doesn’t guarantee something that will work everywhere (this is the case on GitHub Actions, for example). If you don’t know for sure what CPUs Binder runs on, generic may still be the safer, although slower, option.
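Concretely, from a Julia REPL on the Binder node you could run something like this (the printed names below are only illustrative, not what Binder will necessarily report):

```julia
# Inspect the CPU that this Binder session landed on.
using InteractiveUtils  # provides versioninfo

@show Sys.CPU_NAME           # LLVM's microarchitecture name, e.g. "haswell" or "znver2"
versioninfo(verbose = true)  # also prints the raw CPU model string and feature info
```

Because of the pool issue, it’s worth launching a few separate Binder sessions and comparing the reported names; if they differ, that tells you a single specific target won’t be safe and you need generic or a multi-target string.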