The future is ARM's?

We all knew this was coming, more or less, but Apple is jumping to ARM fully this time, which potentially means that in < 5 years most macOS devices will be running ARM. According to the platform support tiers, ARMv8 is tier 1. Does that mean everything pure-Julia would run out of the box, with no special treatment needed on the author's side?

I personally don’t have an ARMv8 device, so it would be great to hear from community members who use or work on ARM every day about the experience (and what’s lacking).

3 Likes

Julia works well on AArch64, but there’ll likely be macOS specific changes needed to make everything work. We’re looking into getting one of the devkits.

19 Likes

Do we have Julia builds that run natively on the Surface Pro X, the Microsoft machine that uses their own custom ARM chip? Would be nice to get native Julia builds for that platform as well. VS Code is going to ship with a native build very soon, so if we had Julia as well, it might also make for a nice dev machine.

8 Likes

I really hope that they pull this off smoothly, demonstrating (again) that one can just switch CPU architectures with no major problems. That should open up a new era of competition, benefiting all consumers and invigorating the industry.

Hopefully the future is generally favorable to RISC CPUs, not necessarily exclusively (but of course including) ARM. Specifically, RISC-V is something to keep an eye on.

9 Likes

I’ve been using Julia on AWS Graviton processors for a while. I have not had any issues. Very happy.

8 Likes

ARM is at the top of the Top500. It would be awesome to run Julia on there…

It would be beautiful to run free software on a free OS that runs on top of an open-source CPU.

7 Likes

A kind answer from Prof Matsuoka.
We have a challenge here!

2 Likes

Those CPUs sound incredible. I’d love to get my hands on something like it for the desktop.

SVE2 (Scalable Vector Extension 2) features 32 vector registers (allowed to be 128-2048 bits; that supercomputer has 512-bit vectors), no penalty for unaligned loads, faster gathers than AVX512 when the loaded elements belong to the same 128-bit segment, bit masks/predicates to mask operations…

I haven’t been able to play around with them at all, but it sounds even better for SIMD-lovers than AVX512.

I guess

using LLVM

# LLVMGetHostCPUFeatures returns a comma-separated feature string
# (each entry prefixed with + or - for supported/unsupported)
features = split(unsafe_string(LLVM.API.LLVMGetHostCPUFeatures()), ',')

would be the best way to query an ARM CPU for features.
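For a coarser check that needs no extra packages, base Julia also exposes a couple of host fields (a minimal sketch; the output will of course vary by machine):

```julia
# Coarse host information from base Julia, no packages required.
@show Sys.ARCH      # architecture symbol, e.g. :aarch64 or :x86_64
@show Sys.CPU_NAME  # LLVM's name for the detected CPU/microarchitecture
```

This won't list individual features like the LLVM call above, but it is enough to branch on the architecture.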

5 Likes

Regarding querying for features try archspec

https://github.com/archspec/archspec

4 Likes

Sorry, your method is elegant and is native Julia.
I just wanted to point out this interesting package.

BTW, the A64FX has SVE, not SVE2. The architectural maximum vector length is 2048 bits, though I don’t know what width the hardware implements.

2 Likes

A 2U form factor on sale for $40K:

2 Likes

Ooooh… that does look interesting…

I am developing tools for GNSS signal processing on the Nvidia Jetson platform. These devices usually have some variant of the Nvidia Tegra CPU (ARMv8).

I must say that most things work well right out of the box. The headaches begin when the code was written to use performance-enhancing expressions/macros. These usually take advantage of low-level instructions that simply don’t exist on ARM or are defined differently. So get ready to see your terminal fill with errors 🙂

In the long term I am interested to see how modules and packages get translated into this new world of ARM and Intel/AMD coexistence.
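One pattern that helps in my experience (a hypothetical sketch, not from any particular package; `fast_sum_x86` stands in for any routine built on x86-only intrinsics) is to guard the architecture-specific fast path behind a `Sys.ARCH` check so the generic fallback runs on ARM:

```julia
# Hypothetical stand-in for a routine that relies on x86-only intrinsics;
# here it simply calls sum so the sketch runs anywhere.
fast_sum_x86(xs) = sum(xs)

# Take the x86 fast path only when the host really is x86_64;
# every other architecture (including AArch64) gets the portable fallback.
fast_sum(xs) = Sys.ARCH === :x86_64 ? fast_sum_x86(xs) : sum(xs)
```

On a Jetson this takes the portable branch, so the package at least loads and runs correctly; the fast path can then be ported incrementally.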

4 Likes

This is interesting. I’m sure many package authors would be interested in fixing these, so please do file issues. Most developers don’t use these architectures, so reports are very useful. They may not be fixed immediately, but they’re still important to file.

6 Likes

Cool. So does Julia now work on the newly released Apple laptops?

1 Like

Works through Rosetta but not natively just yet.

https://github.com/JuliaLang/julia/issues/36617

7 Likes

I’m not sure where I could / should post this, so I’m adding it here.

Reading through #36617, the current method for installing Julia on ARM (M1) Macs is to build Julia from source.

I made a Homebrew formula, which builds Julia’s master branch from source and installs Julia through Homebrew as usual. Definitely not ideal, since brew upgrade will trigger a complete rebuild every time (if there are new commits to master), but it might be useful for anyone like me who prefers to keep all packages installed with Homebrew! 😄

The formula, and instructions for using it, are available here: https://github.com/cadojo/homebrew-julia-master

2 Likes

Seems like you could get around some of those issues by pointing to a release branch instead of the master branch, if you wanted.

1 Like