Note also that another way to get the CPU info is
julia -e 'println(Sys.CPU_NAME)'
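On a cluster it can also be worth checking that the login node and the compute nodes report the same CPU. A minimal sketch, assuming an LSF cluster where bsub -I runs an interactive job (adapt the command to your own scheduler):
# print the CPU name Julia sees on the login node and on a compute node
echo "login node:   $(julia -e 'println(Sys.CPU_NAME)')"
bsub -I julia -e 'println("compute node: ", Sys.CPU_NAME)'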
I just wanted to post an update here: my first attempt,
export JULIA_CPU_TARGET="generic"
surprisingly worked.
This will work, but it will likely lead to slow code, as the compiled package code is not specialised for your system. The other options are expected to improve the performance.
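As a rough way to see whether the generic target hurts your workload, you can compare a vectorisable loop under different --cpu-target (-C) values. A minimal sketch only; the actual difference depends entirely on your code and hardware:
# time a simple reduction under the generic target and under the native one;
# the second @time in each call excludes compilation
julia -C generic -e 'x = rand(10^7); @time sum(abs2, x); @time sum(abs2, x)'
julia -C native  -e 'x = rand(10^7); @time sum(abs2, x); @time sum(abs2, x)'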
This summarizes the solution to my issue.
The culprit was indeed the one identified in @jishnub's tip:
In my case, I have an LSF cluster, and the appropriate way to find the CPU model of the nodes is to run
lshosts -w
This shows that many CPUs are Intel_Skylake, but it does not reveal the full details of the CPU.
Setting the following in .bashrc
export JULIA_CPU_TARGET="generic;skylake,clone_all;icelake-server,clone_all"
solved the issue.
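One way to check that the setting is actually picked up in a fresh shell, and to rebuild the caches for the new set of targets, is the following sketch (Pkg.precompile simply repopulates the compile cache of the active environment):
# confirm the variable is set in the new shell, then rebuild package caches
echo "JULIA_CPU_TARGET = $JULIA_CPU_TARGET"
julia -e 'using Pkg; Pkg.precompile()'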
Let me also mention that on an LSF cluster, one can submit the job to run only on nodes with a specific CPU. When I run bsub, I use this option:
-R "select[model==Intel_Skylake]"
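For completeness, here is a sketch of a full submission using that option; the queue name, core count, and script name are made up and should be replaced with your own:
# request Skylake nodes only; %J expands to the LSF job ID in the output file name
bsub -q normal -n 4 -R "select[model==Intel_Skylake]" \
     -o julia_job.%J.out \
     julia --project=. myscript.jl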
Thanks, everyone!
This doesn't help in my case, because both the login and the remote node say:
$ julia -e 'println(Sys.CPU_NAME)'
znver2
Do you know why? I do not know why it does not help in your case…
Why doesn't Julia detect if the current CPU is compatible before loading cached binaries?
I'm getting lazy, so I just use the same CPU_TARGET setting as the official Julia binaries:
export JULIA_CPU_TARGET="generic;sandybridge,-xsaveopt,clone_all;haswell,-rdrnd,base(1)"
I'll change it to something more specific only if I need to run a job consuming more than a few thousand CPU hours…