The future is ARM's?

We all knew this was coming sooner or later, but Apple is fully jumping to ARM this time, which potentially means that in < 5 years most macOS devices will be running ARM. According to the tiers of support, ARMv8 is tier 1. Does that mean everything pure-Julia would run out of the box, without needing special treatment from the author's side? Is that correct?

I personally don't have an ARMv8 device, so it would be great to hear from community members who use or work on ARM every day about their experience (or lack thereof).


Julia works well on AArch64, but there'll likely be macOS-specific changes needed to make everything work. We're looking into getting one of the devkits.


Do we have Julia builds that run natively on the Surface Pro X, the Microsoft machine that uses their own custom ARM chip? Would be nice to get native Julia builds for that platform as well. VS Code is going to ship with a native build very soon, so if we had Julia as well, it might also make for a nice dev machine.


I really hope that they pull this off smoothly, demonstrating (again) that one can just switch CPU architectures with no major problems. That should open up a new era of competition, benefiting all consumers and invigorating the industry.

Hopefully the future is generally favorable to RISC CPUs, not necessarily exclusively (but of course including) ARM. Specifically, RISC-V is something to keep an eye on.


I've been using Julia on AWS Graviton processors for a while. I have not had any issues. Very happy.


ARM is at the top of the Top500. It would be awesome to run Julia on it…

It would be beautiful to run free software on a free OS that runs on top of an open-source CPU.


A kind answer from Prof Matsuoka.
We have a challenge here!


Those CPUs sound incredible. I’d love to get my hands on something like it for the desktop.

SVE2 (Scalable Vector Extension 2) features 32 vector registers (allowed to be 128–2048 bits; that supercomputer has 512-bit vectors), no penalty for unaligned loads, faster gathers than AVX-512 when the loaded elements belong to the same 128-bit segment, and bit masks/predicates to mask operations.

I haven’t been able to play around with them at all, but it sounds even better for SIMD-lovers than AVX512.

I guess

using LLVM
features = split(unsafe_string(LLVM.API.LLVMGetHostCPUFeatures()), ',')

would be the best way to query an ARM CPU for features.
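If you only need to distinguish architectures (rather than enumerate individual ISA extensions), a dependency-free alternative is the `Sys.ARCH` constant built into Julia; a minimal sketch:

```julia
# Sys.ARCH is the architecture Julia was built for, as a Symbol,
# e.g. :x86_64 on Intel/AMD or :aarch64 on 64-bit ARM (ARMv8).
if Sys.ARCH === :aarch64
    println("Running an AArch64 (ARMv8) build of Julia")
else
    println("Running on ", Sys.ARCH)
end
```

This is coarser than the LLVM feature list above, but it needs no packages and works on any Julia install.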


Regarding querying for features try archspec


Sorry — your method is elegant and native Julia.
I just wanted to point out this interesting package.

BTW, the A64FX has SVE, not SVE2. The architectural maximum vector length is 2048 bits, though I don't know what length the hardware implements.


A 2U form factor on sale for $40K:


Ooooh… that does look interesting…

I am developing tools for GNSS signal processing on the Nvidia Jetson platform. These devices usually have some kind of Nvidia Tegra CPU (ARMv8).

I must say that most things work well right out of the box. The headaches begin when the code has been written to use performance-enhancing expressions/macros. These usually take advantage of low-level instructions that simply don't exist on ARM or are defined in some other way. So get ready to see your terminal filled with errors :)
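One way package authors can handle this (a hypothetical sketch, not taken from any specific package; `fast_sum` is an invented name) is to guard architecture-specific fast paths with `@static if`, so other platforms fall back to portable code instead of erroring at load time:

```julia
# Hypothetical example: pick a code path at compile time based on the
# host architecture. Only the selected branch is ever compiled, so an
# x86-only fast path can't break loading on ARM.
function fast_sum(x::Vector{Float64})
    @static if Sys.ARCH === :x86_64
        # an x86-specific SIMD-tuned path would go here
        return sum(x)
    else
        # portable fallback, used on AArch64 and everything else
        return sum(x)
    end
end
```

A coarse guard like this keeps a package loadable on ARM even when the tuned path only exists for x86.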

In the long term, I am interested to see how modules and packages get translated into this new world of ARM and Intel/AMD coexistence.


This is interesting. I'm sure many package authors would be interested in fixing these, so please do file issues. Most developers do not use these architectures, so it's very useful to see the failures reported. They may not be fixed immediately, but it's still important to file them.