How to build Julia to use vector instructions for multiple workers: "Target architecture mismatch"

If I set MARCH=x86-64 in my Make.user, Julia launches on multiple workers, but SIMD vectorization does not work.
If I remove that line or set MARCH=native, I get SIMD vectorization on the main worker, but trying to launch additional workers gives

$ julia -p40
ERROR: Target architecture mismatch. Please delete or regenerate sys.{so,dll,dylib}.
ERROR: Target architecture mismatch. Please delete or regenerate sys.{so,dll,dylib}.
ERROR: Unable to read host:port string from worker. Launch command exited with error?
read_worker_host_port(::Pipe) at ./distributed/cluster.jl:236
connect(::Base.Distributed.LocalManager, ::Int64, ::WorkerConfig) at ./distributed/managers.jl:391
create_worker(::Base.Distributed.LocalManager, ::WorkerConfig) at ./distributed/cluster.jl:443
setup_launched_worker(::Base.Distributed.LocalManager, ::WorkerConfig, ::Array{Int64,1}) at ./distributed/cluster.jl:389
(::Base.Distributed.##33#36{Base.Distributed.LocalManager,WorkerConfig,Array{Int64,1}})() at ./task.jl:335

...and 1 more exception(s).

Stacktrace:
 [1] sync_end() at ./task.jl:287
 [2] macro expansion at ./task.jl:303 [inlined]
 [3] #addprocs_locked#30(::Array{Any,1}, ::Function, ::Base.Distributed.LocalManager) at ./distributed/cluster.jl:344
 [4] #addprocs#29(::Array{Any,1}, ::Function, ::Base.Distributed.LocalManager) at ./distributed/cluster.jl:319
 [5] #addprocs#243(::Bool, ::Array{Any,1}, ::Function, ::Int32) at ./distributed/managers.jl:311
 [6] process_options(::Base.JLOptions) at ./client.jl:267
 [7] _start() at ./client.jl:371

Question: How can I build Julia to utilize vector instructions for all workers?

On our server we have two Xeon chips:

$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 79
model name      : Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
stepping        : 1
microcode       : 0xb000010
cpu MHz         : 1200.289
cache size      : 25600 KB
physical id     : 0
siblings        : 20
core id         : 0
cpu cores       : 10
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 20
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
bugs            :
bogomips        : 4399.75
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

If you can find a target that works on all machines (e.g. haswell), do that.
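Concretely, that means a line like the following in Make.user before building (haswell is the example target named above; pick the oldest microarchitecture shared by all machines that will run the build):

# Make.user
MARCH=haswell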

There’s CPUID_SPECIFIC_BINARY (or something like that) that you can use, though I’ve never used it (and I’m removing it).

For a complete solution, wait for https://github.com/JuliaLang/julia/pull/21849 (it’s getting there).


Thanks, I didn’t know I had to explicitly set MARCH=haswell; now things work. Looking forward to the PR as well.
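As a quick sanity check that workers now launch and a SIMD loop runs on all of them (a sketch using the Julia 0.6 distributed API visible in the stack trace above; mysum is just an illustrative kernel):

addprocs(4)                       # or start julia with -p as above

@everywhere function mysum(x)
    s = zero(eltype(x))
    @simd for i in eachindex(x)
        @inbounds s += x[i]
    end
    return s
end

# With a matching sysimage target this runs without the
# "Target architecture mismatch" error; @code_llvm mysum(rand(10))
# on the master confirms the loop is vectorized.
println([remotecall_fetch(mysum, w, rand(10^6)) for w in workers()])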