Seemingly unnecessary Precompilation with Julia 1.9 in a Docker container

I have a Julia web server running in a Docker container. Everything ran fine with Julia 1.8.3. I'm trying to upgrade to Julia 1.9, and now the server takes too long to start when I deploy it. It runs fine locally, and the server starts quickly. Looking at the remote log messages, it seems that Julia is precompiling my package, which might explain the long startup time. I don't see any precompilation when running the container locally, though.

What could be triggering precompilation again when this container runs remotely?


Could it be that the server does not have the same hardware architecture as your local machine?

Great point, thanks! Well, they are both x86, but my local machine's `Sys.CPU_NAME` is tigerlake and the remote is a slightly older skylake-family chip… I suppose this could be enough to cause a mismatch?
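For anyone wanting to compare the two machines, a minimal sketch (run it in both environments; the example values are from this thread, your output will differ):

```julia
# Print the detected microarchitecture and instruction set architecture.
println(Sys.CPU_NAME)   # e.g. "tigerlake" locally vs. a "skylake" variant remotely
println(Sys.ARCH)       # :x86_64 on both machines in this case
```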

Maybe, I don't really know much about this. But since precompilation caches lower-level (native) code in 1.9 (if I understand correctly), it seems like it could be affected.

If you run Julia with `JULIA_DEBUG=loading` in the Docker container, you will get logs explaining why the caches were rejected.
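For example (the image name here is a placeholder, not from the thread):

```shell
# Pass the debug variable into the container so cache-rejection
# reasons are logged at startup.
docker run -e JULIA_DEBUG=loading my-julia-server
```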

If it is indeed due to architecture (it could be due to other things such as bounds checking being on vs. off etc.), you can do the following:
In the Dockerfile, before installing/precompiling your package, try inserting a line:

```dockerfile
ENV JULIA_CPU_TARGET=x86_64;haswell;skylake;skylake-avx512;tigerlake
```

That should cover most architectures that you may encounter, including yours.
I’d be interested to hear back whether this works out.


Thanks for the info. I tried that but still got the precompilation message and the timeout as before. I'm going to try looking at the debug logs now.

The log shows lots of messages like:

```
┌ Debug: Rejecting cache file /home/app/.julia/compiled/v1.9/MyPackageXxx/iqN7A_LVvhN.ji for MyPackageXxx [7-09790-790-7-9-789-] since pkgimage can't be loaded on this target
└ @ Base loading.jl:2706
```

I was running without `JULIA_CPU_TARGET` just now; I'll put it back on and see if I get the same thing.

I have now tried simply `JULIA_CPU_TARGET=generic` and still got the same thing… Are there other simple things I could try, just setting flags and environment variables like that?
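One sanity check that may help (my suggestion, not from the thread, but the API is real): confirm which CPU target the running Julia session actually picked up, to rule out the variable not reaching the process:

```julia
# Shows the --cpu-target / JULIA_CPU_TARGET value the current session
# was started with; falls back to a label when none was set.
opt = Base.JLOptions().cpu_target
println(opt == C_NULL ? "native (default)" : unsafe_string(opt))
```

If this prints the default inside the container even though you set `JULIA_CPU_TARGET`, the variable is not visible to that Julia invocation.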

I also saw seemingly unnecessary precompilation with Julia 1.9 as compared to 1.8.


Weird, that helped me in this case: `compilecache` failed when `@everywhere using` from remote machines · Issue #48217 · JuliaLang/julia · GitHub

I wasn't able to find a reference for `JULIA_CPU_TARGET` other than this issue: JULIA_CPU_TARGET flag · Issue #8198 · JuliaLang/julia · GitHub

Are the valid values for `JULIA_CPU_TARGET` still whatever architectures are valid for `gcc -march`? Is there more Julia-focused documentation of the precompilation-relevant flags?

Edit: looked further into the manual and found it here: System Image Building · The Julia Language

I was also having issues with system image building :see_no_evil:

The flags are not specific to precompilation (i.e. package images), and indeed I'm not sure this (precompilation to targets other than native) is documented.

However, the possible values can be listed with `julia --cpu-target=help`.


Bummer. I don't know for sure. I discussed this with @Sukera on Slack once; maybe he has other suggestions.

Thanks for pointing that out. I have tried `generic` and still got the same issue. Could this be a bug? I'm not sure how I can narrow down what is causing the cache invalidation.

I opened an issue in Julia about this with a minimal reproducible example. It also includes a minimal example for the related issue that I mentioned before.


Thanks @nrontsis! So setting `JULIA_CPU_TARGET` does seem to be the workaround; the issue I had was that it needs to be set earlier in the Dockerfile, so that it affects every call to Julia. See Cache invalidations in precompiled code · Issue #50102 · JuliaLang/julia · GitHub
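For anyone landing here later, the ordering can be sketched roughly like this (package name, entry point, target list, and paths are placeholders; adapt to your project):

```dockerfile
FROM julia:1.9

# Set the target BEFORE any Julia invocation, so that precompilation
# at build time and the server at run time agree on the CPU target.
ENV JULIA_CPU_TARGET=generic;skylake;tigerlake

COPY . /app
WORKDIR /app

# Precompile with JULIA_CPU_TARGET already in effect.
RUN julia --project=/app -e 'using Pkg; Pkg.instantiate(); Pkg.precompile()'

# The server then reuses the caches built above instead of recompiling.
CMD ["julia", "--project=/app", "-e", "using MyPackageXxx; MyPackageXxx.serve()"]
```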
