We all knew this was coming sooner or later, but Apple is fully switching to ARM this time, which means that within five years most macOS devices will likely be running ARM. According to the platform support tiers, ARMv8 is tier 1. Does that mean everything pure-Julia will run out of the box, with no special treatment needed from package authors?
I personally don't have an ARMv8 device, so it would be great to hear from people in the community who use or work on ARM every day about their experience (and any pain points).
Julia works well on AArch64, but there’ll likely be macOS specific changes needed to make everything work. We’re looking into getting one of the devkits.
Do we have Julia builds that run natively on the Surface Pro X, the Microsoft machine that uses their own custom ARM chip? Would be nice to get native Julia builds for that platform as well. VS Code is going to ship with a native build very soon, so if we had Julia as well, it might also make for a nice dev machine.
I really hope that they pull this off smoothly, demonstrating (again) that one can just switch CPU architectures with no major problems. That should open up a new era of competition, benefiting all consumers and invigorating the industry.
Hopefully the future is generally favorable to RISC CPUs, not necessarily exclusively (but of course including) ARM. Specifically, RISC-V is something to keep an eye on.
Those CPUs sound incredible. I’d love to get my hands on something like it for the desktop.
SVE2 (Scalable Vector Extension 2) features 32 vector registers (implementations may choose any width from 128 to 2048 bits; that supercomputer uses 512-bit vectors), no penalty for unaligned loads, faster gathers than AVX-512 when the loaded elements belong to the same 128-bit segment, and bit masks/predicates to mask operations…
I haven’t been able to play around with them at all, but it sounds even better for SIMD-lovers than AVX512.
I guess

```julia
using LLVM
features = split(unsafe_string(LLVM.API.LLVMGetHostCPUFeatures()), ',')
```

would be the best way to query an ARM CPU for its features.
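For a coarser check, Base also exposes the host architecture and CPU name directly, with no extra package needed. A small sketch:

```julia
# Built-in host information, available without LLVM.jl:
Sys.ARCH      # architecture symbol, e.g. :aarch64 on ARMv8, :x86_64 on Intel/AMD
Sys.CPU_NAME  # LLVM's name for the host CPU, e.g. "cyclone" or "skylake"
```

That's enough for dispatching on the architecture, though not for fine-grained feature flags like SVE2, where the LLVM query above is still needed.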
I am developing tools for GNSS signal processing on the Nvidia Jetson platform. These devices usually have some kind of Nvidia Tegra CPU (ARMv8).
I must say that most things work well right out of the box. The headaches begin when the code has been written to use performance-enhancing expressions/macros. These usually take advantage of low-level instructions that simply don't exist on ARM or are defined in some other way. So get ready to see your terminal filled with errors.
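One way to keep such code portable is to select an implementation based on the host architecture at load time, so the x86-only fast path is never even compiled on ARM. A minimal sketch (the function names here are made up for illustration):

```julia
# Choose an implementation when the module loads; Sys.ARCH is a
# compile-time constant, so the unused branch costs nothing.
if Sys.ARCH === :x86_64
    mysum(x) = sum(x)        # stand-in for an intrinsic-heavy x86 fast path
else
    mysum(x) = foldl(+, x)   # portable pure-Julia fallback
end

mysum([1, 2, 3])  # → 6 on any architecture
```

Packages that guard their low-level tricks like this are exactly the ones that "just work" on Jetson boards.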
In the long term I am interested to see how modules and packages get translated into this new world of ARM and Intel/AMD coexistence.
This is interesting. I’m sure many package authors would be interested in fixing these, so please do file issues. Most developers do not use these architectures, so it’s very useful to see issues. May not be fixed immediately, but still important to file.
I’m not sure where I could / should post this, so I’m adding it here.
Reading through #36617, the current method for installing Julia on Arm (M1) Macs is to build Julia from source.
I made a Homebrew formula, which builds Julia's master branch from source and installs it through Homebrew as usual. Definitely not ideal, since brew upgrade will trigger a complete rebuild every time there are new commits to master, but it might be useful for anyone like me who prefers to keep all packages installed with Homebrew!