LLVM ERROR: Cannot select: intrinsic %llvm.x86.aesni.aesenclast

LLVM ERROR: Cannot select: intrinsic %llvm.x86.aesni.aesenclast
Encountered with PackageCompiler.jl on Julia 1.12.1; works on 1.11.7.

Julia Version 1.12.1
Commit ba1e628ee49 (2025-10-17 13:02 UTC)
Build Info:
  Official https://julialang.org release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 12 × Intel(R) Xeon(R) CPU @ 2.20GHz
  WORD_SIZE: 64
  LLVM: libLLVM-18.1.7 (ORCJIT, cascadelake)
  GC: Built with stock GC
Threads: 1 default, 1 interactive, 1 GC (on 12 virtual cores)
Environment:
  LD_LIBRARY_PATH = 

I know it’s a CPU compatibility thing. Is there a compiler flag I can set?
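For what it’s worth, there is such a flag: Julia’s `-C`/`--cpu-target` option, which PackageCompiler.jl also exposes as a `cpu_target` keyword. A hedged sketch of both (the package name and sysimage path are placeholders, not from this thread):

```shell
# Run Julia itself with a conservative instruction-set target:
julia --cpu-target=generic -e 'println("ok")'

# Pass the same target string through PackageCompiler ("MyPkg" and
# "sys.so" are placeholder names for illustration):
julia -e 'using PackageCompiler;
          create_sysimage(["MyPkg"]; sysimage_path="sys.so", cpu_target="generic")'
```

Note that a later reply in this thread reports the error even with `generic`, so this answers the question about the flag but may not fix the underlying detection problem.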

Running into this issue as well; I’ve tried compiling with multiple combinations of CPU targets.

Even the generic and default targets fail.

I wonder if it works on Julia 1.12.2.

That output doesn’t tell us the exact CPU you have. Can you post the output of e.g. lscpu? The aes flag would be the relevant one.

Are you running in a virtualized environment?

If your CPU is really cascadelake, then it physically supports AES-NI – unless somebody disabled that in the configuration.

I’d first try to figure out whether AES-NI is actually disabled (-> fix your BIOS / hypervisor config!) or whether LLVM fails to detect that AES-NI is available.

I would guess that some virtualization config interferes with LLVM’s detection of CPU features, so both of you affected people should post some info about your setup (are you virtualizing? What is the host hardware / OS? What is the guest hardware / OS? What is your virtualization framework? Config?).

Google for KVM AES-NI woes to see whether something helpful pops up. (Are you running under KVM?)
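To check the first question (is AES-NI actually exposed to the guest, and are you virtualized at all?), a minimal sketch assuming Linux with a readable /proc/cpuinfo:

```shell
# Does the (virtual) CPU advertise AES-NI? lscpu and /proc/cpuinfo report
# the same CPUID-derived flags, so either source works.
if grep -qw aes /proc/cpuinfo; then
    echo "aes flag present: AES-NI is exposed to this CPU"
else
    echo "aes flag missing: check BIOS / hypervisor CPU model"
fi

# And is this running under a hypervisor at all?
if grep -qw hypervisor /proc/cpuinfo; then
    echo "hypervisor flag set: running virtualized"
else
    echo "no hypervisor flag: likely bare metal"
fi
```

If the aes flag is missing inside the guest but present on the host, the hypervisor’s CPU model is masking it.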

Theoretically one should probably be able to run without AES-NI support (yadda yadda, supported target).

But that’s a stupid waste of electricity, and I would not be surprised if the actual support has bit-rotted.

Yeah, it’s a KVM issue on GCP. The CPU supports AES. I don’t know how to fix it. Why does this fail on 1.12.1 and not on 1.11.7?

Architecture:                x86_64
  CPU op-mode(s):            32-bit, 64-bit
  Address sizes:             46 bits physical, 48 bits virtual
  Byte Order:                Little Endian
CPU(s):                      8
  On-line CPU(s) list:       0-7
Vendor ID:                   GenuineIntel
  Model name:                Intel(R) Xeon(R) CPU @ 2.30GHz
    CPU family:              6
    Model:                   63
    Thread(s) per core:      2
    Core(s) per socket:      4
    Socket(s):               1
    Stepping:                0
    BogoMIPS:                4599.99
    Flags:                   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
Virtualization features:     
  Hypervisor vendor:         KVM
  Virtualization type:       full
Caches (sum of all):         
  L1d:                       128 KiB (4 instances)
  L1i:                       128 KiB (4 instances)
  L2:                        1 MiB (4 instances)
  L3:                        45 MiB (1 instance)
NUMA:                        
  NUMA node(s):              1
  NUMA node0 CPU(s):         0-7
Vulnerabilities:             
  Gather data sampling:      Not affected
  Indirect target selection: Vulnerable
  Itlb multihit:             Not affected
  L1tf:                      Mitigation; PTE Inversion
  Mds:                       Vulnerable; SMT Host state unknown
  Meltdown:                  Vulnerable
  Mmio stale data:           Vulnerable
  Reg file data sampling:    Not affected
  Retbleed:                  Vulnerable
  Spec rstack overflow:      Not affected
  Spec store bypass:         Vulnerable
  Spectre v1:                Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
  Spectre v2:                Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable
  Srbds:                     Not affected
  Tsa:                       Not affected
  Tsx async abort:           Not affected

So you’re on GCP? Maybe this becomes reproducible.

According to the lscpu output you posted, AES-NI should work / be enabled. So that suggests that Julia/LLVM is confused about the available hardware.
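One quick way to see whether that confusion is visible is to compare what the OS reports against what Julia/LLVM detected at startup — a sketch assuming julia is on PATH (`Sys.CPU_NAME` is the CPU name LLVM’s host detection settled on):

```shell
# What the OS / CPUID reports:
lscpu | grep -i 'model name'

# What Julia/LLVM detected at startup (e.g. "cascadelake"):
julia -e 'println(Sys.CPU_NAME)'
```

If the two disagree (or the second prints something very generic), that points at the detection problem rather than the hardware.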

The next thing I’d check is which part of package compilation triggers the issue. I expect one of the following cases:

  1. You’re compiling something that absolutely requires AES-NI, and LLVM somehow believes that AES-NI is not available.
  2. LLVM is internally confused about the availability of AES-NI: when selecting implementations, it believes that AES-NI is available, but during machine-code generation, it believes that it is not.
  3. The package uses multi-versioning, LLVM believes that AES-NI is not available, and something in LLVM became more trigger-happy about throwing compile-time errors (as opposed to just emitting the damn code, which could cause an illegal-instruction trap).

Once you’ve found an easy reproducer (i.e. the responsible piece of code + a simple GCP reproduction), I’d say open an issue.


Thanks! Will look into it. To clarify: running the code works, but compiling that code into a binary with PackageCompiler.jl doesn’t.