When does code get optimized for the architecture?

Thanks for the responses. I am indeed coming from the perspective of Singularity containers, and specifically this discussion. By design these images are read-only (at least the variant that does not require sudo to run, which is what the clusters I have access to, and surely many others, require).

This creates some problems with precompilation: the default depot becomes non-writable at runtime, so precompilation fails.

As far as I can tell, there are three ways around this, with increasing front-loaded effort:

  1. Don’t precompile anything, and prepend DEPOT_PATH with some host-bound writable directory (see the sketch after this list). This adds the compile time to the first run of a given container on a new system, but the files then remain on that system.
  2. Precompile everything during the image build.
  3. Run PackageCompiler during the image build.
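
For option 1, a minimal sketch of what I have in mind (the `/scratch` bind path is a placeholder, and the prepend has to happen before any package is loaded, e.g. in a `startup.jl`):

```julia
# Prepend a writable, host-bound depot so that new precompile caches are
# written there instead of into the read-only container depot.
# Launching with JULIA_DEPOT_PATH="/scratch/julia-depot:" should be
# equivalent (as far as I understand, the trailing empty entry is expanded
# to the default depots, keeping them as read-only fallbacks).
pushfirst!(DEPOT_PATH, "/scratch/julia-depot")

using Example  # placeholder package; first use triggers precompilation into DEPOT_PATH[1]
```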

Given that the aforementioned thread mentions bcbi/SimpleContainerGenerator.jl, which uses PackageCompiler for Docker, this seems possible in general; but I suppose it could be that all target machines run reasonably recent Intel processors, so the hardware is essentially the same.
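
For reference, my understanding of the PackageCompiler route during the image build is roughly the following (package name and sysimage path are placeholders; I believe `cpu_target` is the knob relevant to the hardware question below):

```julia
using PackageCompiler

# Bake the project's packages into a custom system image at build time.
# cpu_target="generic" should trade some runtime performance for
# portability across CPU families (e.g. AMD desktop -> Intel Xeon cluster).
create_sysimage([:Example];
                sysimage_path = "/opt/julia-sysimage.so",
                cpu_target = "generic")
```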

Assuming packages play nice, does normal Julia do anything between precompilation and (Package)compilation that might cause problems at the hardware level (e.g. (pre)compiling on a personal AMD desktop and shipping the result to an Intel Xeon cluster)?

If packages don’t play nice, e.g. a package uses CpuId.jl, or downloads specific binaries during build, I suppose option 1 is the way to go, likely combined with a forced rebuild.
(I should test whether, when rebuilding inside a container, the new build files end up in the prepended DEPOT_PATH.)
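
Something along these lines is what I’d check (untested; the depot path is again a placeholder):

```julia
pushfirst!(DEPOT_PATH, "/scratch/julia-depot")  # writable host-bound depot

using Pkg
Pkg.build("Example")  # "Example" stands in for a package with a build step

# If things work as hoped, the fresh build artifacts and precompile caches
# end up under the prepended writable depot rather than the read-only one.
foreach(println, readdir(first(DEPOT_PATH)))
```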
I think @tkf raises a good point that this might be something that could be mentioned explicitly somewhere in the documentation.

Beyond that, recompiling due to package upgrades is less of a concern: the goal (at least for me) of using Singularity is a reproducible environment, so making package updates at least somewhat difficult would be intentional.