When does code get optimized for the architecture?

My question might be a simple one: at what point (if at all) does Julia code get optimized for the specific architecture it is running on?

I dimly remember from some JuliaCon talk that precompilation only lowers the code, but that to actually run it another step is taken, the result of which is then cached.

More specifically, can I theoretically move my ~/.julia/compiled cache to another machine? Also, how does this differ from PackageCompiler(X)?

In short, it’s lowering → type inference → optimization → LLVM IR → native code. The lion’s share of architecture specialization is done by LLVM in the generation of native code; since the *.ji files store the type-inferred code, that suggests those files should be fairly portable. However, one important bit of architecture optimization happens right away, in that Int means Int64 on a 64-bit machine and Int32 on a 32-bit machine. So you’ll definitely need to match the word size.
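
If you want to see those stages for yourself, the standard reflection macros map onto them pretty directly; a tiny illustration:

```julia
f(x) = 2x + 1

@code_lowered f(1)    # lowered code
@code_typed   f(1)    # after type inference and Julia-level optimization
@code_llvm    f(1)    # LLVM IR
@code_native  f(1)    # architecture-specific machine code

# and the word-size point:
Int === Int64         # true on a 64-bit machine (Int === Int32 on 32-bit)
Sys.WORD_SIZE         # 64 or 32
```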

(EDIT: in contrast, PackageCompiler is probably not portable.)

Bottom line, I’ve never tried moving them. Tell us what happens!


My reply is not directly related to machine optimization, but I can confirm that a sysimage compiled with PackageCompilerX can be used on another machine on Windows (64-bit), with one caveat: I set the depot path (the .julia folder) to a directory inside the Julia installation folder. This avoids path issues due to different usernames (in my case I used the default environment).
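
In sketch form, this is roughly what I did (the paths and package name below are placeholders, and the exact create_sysimage signature may differ between PackageCompilerX and current PackageCompiler, so check the docs):

```julia
# JULIA_DEPOT_PATH must be set *before* launching Julia, e.g. in cmd.exe:
#   set JULIA_DEPOT_PATH=C:\Julia\depot

using PackageCompiler
create_sysimage([:MyPackage];                        # :MyPackage is a placeholder
                sysimage_path = raw"C:\Julia\sys_custom.dll")

# On the target machine (same folder layout), start Julia with:
#   julia --sysimage C:\Julia\sys_custom.dll
```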

I think it’s conceivable that some packages look up architecture-specific information in a macro or in top-level code. In fact, something like if Sys.iswindows() ... at top level, or with @static, is very common. Code doing something similar with (say) CpuId.jl may already exist in the wild.
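
For example (the constants here are just illustrative; the CpuId.jl part is a hypothetical sketch assuming its cpuvendor() function, so check that package’s docs):

```julia
# Portable pattern: the branch is resolved when the module is (pre)compiled,
# but it only depends on the OS, which you control when building the image.
@static if Sys.iswindows()
    const LIBNAME = "mylib.dll"
else
    const LIBNAME = "libmylib.so"
end

# The kind of thing that would bake hardware details into the precompile cache
# (hypothetical sketch using CpuId.jl):
# using CpuId
# const VENDOR = cpuvendor()    # e.g. :Intel or :AMD, fixed at precompile time
```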


Good point, I hadn’t even thought about doing it between different OSes (which probably won’t work). And indeed a few packages won’t be portable regardless.

It’s also worth emphasizing that there may not be a lot of value in trying, since as soon as some widely-used dependency gets a version upgrade you’ll have to recompile everything anyway.

You only have to recompile if you actually update your packages. If you are fine with running the old code, and it worked well, you should be fine, right?

I agree that trying this would be valuable. I’m just mentioning that you’d need packages to behave nicely. I think it would even make sense to mention in the Julia documentation that package authors should try to keep the precompilation cache hardware-independent, so that Julia applications work well in something like a Singularity container.

Thanks for the responses. I am indeed coming from the perspective of Singularity containers, and specifically this discussion. By design these are read-only (at least the version that does not require sudo to run, as required on the clusters I have access to and surely many others).

This does create some problems with precompilation, as the default depot becomes non-writable at runtime, so precompilation fails.

As far as I can tell, there are three options to work around this, with increasing front-loaded effort.

  1. Don’t precompile anything, and prepend a host-bound writable directory to DEPOT_PATH (see the sketch after this list). This adds the compile time to the first run of a given container on the new system, but the files remain on that system.
  2. Precompile everything during image build
  3. PackageCompiler during image build
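
For option 1, this is the kind of thing I have in mind (the paths are placeholders for whatever is writable and bind-mounted on the host):

```julia
# Either set it before starting Julia inside the container:
#   export JULIA_DEPOT_PATH="/scratch/myuser/julia-depot:/opt/julia-depot"
# or prepend it from within Julia before loading packages:
pushfirst!(DEPOT_PATH, "/scratch/myuser/julia-depot")

using Pkg
Pkg.precompile()   # new *.ji files should land in the first writable depot entry
```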

Given that the aforementioned thread mentions bcbi/SimpleContainerGenerator.jl, which uses PackageCompiler for Docker, it seems possible in general; but I suppose it could be that all machines involved run on reasonably recent Intel processors, so the hardware is essentially the same.

Assuming packages play nice, does normal Julia do anything between precompile and (package)compile that might cause problems on a hardware level (e.g. (pre)compiling on a personal AMD desktop and sending the result to an Intel Xeon cluster)?
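
One knob that seems relevant here (I’m assuming PackageCompiler’s cpu_target keyword behaves like Julia’s own -C/--cpu-target flag; the exact keyword and defaults may differ between versions, so check the docs):

```julia
using PackageCompiler
create_sysimage([:MyPackage];                  # placeholder package
                sysimage_path = "sys_generic.so",
                cpu_target    = "generic")     # don't assume host-specific CPU features
```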

If packages don’t play nice, e.g. a package uses CpuId or downloads specific binaries during build, I suppose option 1 is the way to go, likely combined with a forced rebuild.
(I should test whether, when rebuilding in a container, the new build files get put into the prepended DEPOT_PATH.)
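Something like this minimal check is what I’d try (the path and package name are placeholders):

```julia
pushfirst!(DEPOT_PATH, "/scratch/myuser/julia-depot")   # writable, host-bound

using Pkg
Pkg.build("SomePackage")        # placeholder: any dependency with a build step

# then inspect the writable depot to see whether anything new was written there
readdir(DEPOT_PATH[1])
```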
I think @tkf raises a good point that this might be something that could be mentioned explicitly somewhere?

Beyond that, recompiling due to package upgrades is less important, as the goal (at least for me) of using Singularity is a reproducible environment, so making it at least somewhat hard to update a package would be intentional.