Mac's AMD GPU

It should be tested; with the T2 chip and the unusual UEFI firmware, I am not so sure anymore that they are normal machines. The main CPU is x86, but the communication between the T2 and the other parts of the system could have unexpected behaviours: https://www.apple.com/euro/mac/shared/docs/Apple_T2_Security_Chip_Overview.pdf

For example, as far as I know, you now have to change the Secure Boot option (under macOS) to be able to install Linux.

1 Like

I think Apple should contribute to making their really brilliant dual Vega graphics cards work well with Julia, and with the community in general that wants to use the RDNA and RDNA2 architectures; it is what ORNL chose for the Frontier exascale supercomputer, after all. HSA with the HBCC, and using all the architectural grooviness, would be good.

I don’t think Apple actually cares that much about HPC and general GPU compute; if they did, they wouldn’t have severed ties with NVIDIA, removed OpenCL support, and declined to implement Vulkan. Unlike Linux and Windows, they have non-existent server market share, so developers and vendors have few incentives to support them in the GPGPU ecosystem. So I don’t see any reason why they would suddenly start caring and supporting these important cross-platform APIs instead of mandating proprietary, Apple-only APIs.

2 Likes

I wouldn’t count on that, as not only OpenGL but also OpenCL is not part of their future:

1 Like

FWIW I’ve had two Dell Ubuntu laptops and they’ve been fantastic. Great graphics, and having an NVIDIA GPU on board makes development so easy. I love Macs too, but… I have work to do.

2 Likes

Hi, right now I’m just toying with Julia. I’m a Mac user and also very excited about the new Mac Pro, but until there is either native support in Julia for Metal Performance Shaders, or OpenCL is reinstated, or SYCL/Vulkan is released on macOS with Metal support (I mean, Codeplay is capable of doing that), there is no real way to do data science in Julia on macOS.

BTW, I love the new Mac Pro, but for $56k you can buy an iMac Pro and a MacBook Pro and still build a compute server with 8 NVIDIA RTX 2080 Ti/MI60 GPUs and the same RAM and storage, and maybe even have some money to spare. With new tools such as VS Code Remote, Jupyter, and NoMachine NX or IPMI/iKVM for remote desktop, you don’t even need to leave the Mac.
Personally, I plan to buy an iMac/iMac Pro later (as soon as the all-new model is released) + an iPad Pro 12 + a DIY number-cruncher server based on an AMD Threadripper and 4 NVIDIA GPUs.

Consider that an optimal GPU server requires 2-4 CPU cores per GPU, and as much RAM as the beefiest group of fabric-linked GPUs (NVLink/Infinity Fabric): for 4 RTX Titans with dual NVLink you need 8-16 cores plus 48 GB of RAM. The low-end Threadripper 3960X has 24 cores, and most motherboards allow 128 GB, which could technically rise to 1 TB of RAM with a BIOS update. All this costs less than $20k and provides up to twice the performance of all 4 Vegas inside the Mac Pro.
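To make those numbers concrete, here is the rule of thumb as a tiny calculation (the figures are illustrative, taken from the example above, not a vendor formula):

```julia
ngpus        = 4                        # e.g. four RTX Titans
vram_per_gpu = 24                       # GB of VRAM each
cores_needed = (2 * ngpus, 4 * ngpus)   # 2-4 CPU cores per GPU → (8, 16)
# Host RAM should match the largest fabric-linked GPU group; with dual
# NVLink the Titans pair up, so the biggest group is 2 GPUs:
ram_needed   = 2 * vram_per_gpu         # 48 GB
```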

I believe SYCL will later be supported on macOS/Metal through Apple or a third party, but doing computational work on Macs makes no economic sense and has no productivity justification.

Meanwhile, an iMac Pro with its high-core-count CPU is good enough for coding and algorithm development, but heavy computational work runs better on an Ubuntu server with CUDA or AMD GPUs.

That’s a rather strong statement. While GPUs are useful for some kinds of computations, there are lots of things considered “data science” (itself a rather vague term) that can be done on a CPU.

3 Likes

Hi all, I now have an investment decision to make too, so I would be extremely happy to know if there are any important updates regarding these topics.

If I understood the discussion here correctly, the open questions were:

  • will Apple change their policy towards established GPU standards such as OpenCL?
  • will there be native or Metal support in Julia for AMD GPUs on iMacs?
  • can one install and use a Linux system for GPU computing on an iMac (concerning the T2 chip etc.)?

Any new info would be great to hear (or help in general as I might have misunderstood some parts here)!

Macs currently use AMD GPUs (but with Apple adopting ARM CPUs, I wouldn’t bet on that lasting long; I understand they’ll use their own GPUs).

https://juliagpu.gitlab.io/AMDGPU.jl/

supported by most modern AMD GPUs (detailed hardware support) and some AMD APUs. ROCm works solely on Linux and no plans to support either Windows or macOS have been announced by AMD.
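For reference, using the package looks roughly like this (a hedged sketch, assuming AMDGPU.jl is installed on a Linux box with a supported GPU and ROCm stack):

```julia
using AMDGPU

a = ROCArray(rand(Float32, 1024))   # copy host data to the GPU
b = 2f0 .* a .+ 1f0                 # broadcasts compile to a GPU kernel
c = Array(b)                        # copy the result back to the host
```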

My understanding is that ROCm is AMD’s main compute stack (with HIP, a CUDA clone, on top; the Julia package uses libhip). See here about AMD’s (non-)support for Apple’s macOS (it seems it is Apple’s and/or AMD’s problem to fix; Julia will then follow):

I don’t know too much about Metal, except that it does support compute too, through shaders (I’m just not sure if it’s a good substitute for CUDA, ROCm, or HIP). I believe every other API, e.g. OpenCL, is deprecated on Apple.

I’m pretty sure this page said “experimental” at some point, but it doesn’t now. CUDA is certainly better supported, and not just by Julia (that package/ecosystem has tier 1 support in Julia, while AMD is currently tier 3):
https://juliagpu.org/rocm/
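For comparison, the tier-1 CUDA path looks essentially the same from user code (again a hedged sketch, assuming CUDA.jl and an NVIDIA GPU):

```julia
using CUDA

a = CuArray(rand(Float32, 1024))    # copy host data to the GPU
b = 2f0 .* a .+ 1f0                 # same broadcast syntax as AMDGPU.jl
c = Array(b)                        # copy back to the host
```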

This doesn’t seem like a Julia problem, more like an Apple problem, and I’m not sure support is any better in (some/most) other languages:

I just noticed today (it’s not new, I think) Google’s Dawn software, which abstracts over Metal, Vulkan, and DX12, but I’m not sure if it’s only for graphics. There’s also other abstraction software; this or something else might be useful for Julia.

Multiple SYCL implementations support macOS today, although they only support CPU devices. GPU device support is not available because Apple does not support the back-end dependencies available on Linux or Windows.

Julia has Vulkan support, through a package.

You can always abandon the Mac, I mean macOS (you could still use the hardware, and run Linux and/or Windows on it for GPU support). :slight_smile:

2 Likes

Thanks a lot for your feedback. If I understand correctly, support for AMD GPGPU computing on macOS looks a bit disappointing: there is currently no real option for it and, maybe more importantly, there is unlikely to be one in the future. Considering that Apple is going for its own ARM-based GPUs and dropping AMD GPUs, neither Apple nor AMD will have much motivation to support the current AMD-based iMacs in the future, I guess.

ok, fair :smiley:

I wouldn’t worry too much about AMD GPUs (potentially) disappearing from future Apple Macs.

could not only debut with the new ARMv9 instruction set architecture, or ISA, but on Taiwan Semiconductor’s new 5-nanometer process as well. And that’ll give Apple not just a ton of silicon budget to work with, but a couple of options for this particular member of the system-on-a-chip family.

I only just saw ARMv9 mentioned for the first time, but a sub-version of ARMv8 added SVE, developed with Fujitsu (now an ARM standard), and ARMv9 will have the improved SVE2. The older extension is what got the new supercomputer (Fugaku) the top spot, almost 3x faster than the previous fastest, and first on four benchmarks at the same time (no computer had achieved that before), including an AI benchmark, all without GPUs.

“This is an impressive machine,” Dongarra told us. “[…] It’s a very well balanced machine. It was designed to do supercomputing – that is to say, it wasn’t cobbled together from commodity processors and GPUs. […]"

Of course the ARM-based Macs will not have millions of ARM cores, or even 1000, so they might still need a GPU (and a better one than what’s in the iPad, but it will likely be scaled up).

The only ARM-based Mac out so far (a development machine, based on iPad hardware) comes with:

The chip has an 8-core GPU, one more core than its predecessor, the Apple A12X

I can’t say for sure about the A12Z (or future ARM Macs), but the A12X:

includes dedicated neural network hardware that Apple calls a “Next-generation Neural Engine”.[4] This neural network hardware, which is the same as found in the A12,[1] can perform up to 5 trillion operations per second.[4]

My hope is that AMD and Apple will eventually collaborate on implementing a macOS kernel module for AMD GPUs that ROCm can build on, but I believe this won’t happen anytime soon. And by the time that could be completed, Apple might have decided to move on to their own silicon and ditch AMD GPUs.

Regarding Metal support, there exists a Julia implementation called MetalCore.jl that is part of the way to working, but its creator doesn’t have the bandwidth to finish it right now. If someone wants to pick up the torch, it might be possible to have GPGPU support for macOS with a decent amount of work.

Still, my recommendation is that if you really care about good Julia GPGPU support, you should buy a Linux machine, whether you go AMD or NVIDIA. Apple has let its OS slide into a rut, and I’m not convinced that they’ll come out of it anytime soon. But I can tell you that Linux AMDGPU support will continue to grow and improve to match what we have with CUDA over the next few years, and AMD will keep pushing to improve their end of the stack as well.

4 Likes

I am not sure they are very interested in numerical applications. Compared to their main market, that’s either too low in volume or markup (or value added, depending on one’s perspective).

A bit off-topic: compared to doing computation on the GPU, I found that using a purely multithreaded CPU + RAM can be very efficient when I manage memory allocation carefully.

That said, memory allocation is a big, big cost, so try very hard to use/reuse pre-allocated memory as much as possible. After rewriting almost all functions in the form f!(), I found that there’s not much need to use a GPU at all.
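A minimal sketch of that f!() pattern (the function here is just an illustrative example): the trailing ! marks functions that mutate a pre-allocated output instead of allocating a new array on every call.

```julia
# Mutating, allocation-free version: writes into the pre-allocated `y`.
function scale_add!(y, a, x)
    @inbounds for i in eachindex(y, x)
        y[i] += a * x[i]
    end
    return y
end

x = rand(10^6)
y = zeros(10^6)              # allocated once, up front
for _ in 1:100
    scale_add!(y, 0.5, x)    # reused every iteration; no new allocations
end
```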

4 Likes

I’m a long-standing iOS & Mac developer, but I also work in the ML and video compression spaces.
Apple is primarily focusing on their own Metal framework, which supports both training models and importing them from other platforms and toolchains. It’s worth looking at the most recent “What’s new in Core ML” etc. videos on the Apple developer portal. I’ve been following the CUDA/OpenCL, NVIDIA/AMD, and associated graphics API sagas for at least ten years, and as far as I know there is no easy answer. But again, IMHO it boils down to this as of mid-2020:
If you absolutely need CUDA for, say, TensorFlow, then go NVIDIA and Intel, but tread carefully around the actual hardware vendor, as even here you can get caught by driver compatibility issues. Operating-system-wise, some flavour of Linux is probably the best choice.
If you can afford or access a cloud solution, use that; gain experience using free tiers as much as possible.
If you are confident that you can wrangle more complex toolchains, then AMD CPU/GPU combos are viable under Linux via ROCm, but there may be some compatibility issues.
If you have a shiny new Mac, or even a shiny recent Mac, then look seriously at what Metal has to offer, and accept that you’ll have to do large novel model training and development in the cloud.
There’s a final proviso: the FAANGs etc. are throwing enormous resources at ML training; figures on the order of thousands of GPUs for several weeks have been quoted for some models. So while it’s viable for an individual or small group to do good work, I don’t think it’s easy to be competitive with these big players.

1 Like

I think you mean you need a non-Apple computer, but actually you only need a non-default GPU. Maybe you can replace it, or just add one, and have both NVIDIA and AMD.

HIP for AMD is a CUDA clone, but it’s not supported on Apple. Who knows about later; NVIDIA GPUs could also be in Apple’s future computers, though I have no idea if they would drop AMD’s in NVIDIA’s favour.

But to be clear, I looked it up, and CUDA is supported on macOS (by NVIDIA, not Apple; I’m not sure that matters):
https://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x/index.html
NVIDIA’s docs were updated in August 2020, but don’t read too much into that, as they still don’t list new macOS versions (I don’t know if the version below is the minimum or the maximum):

The CUDA Development Tools require an Intel-based Mac running Mac OSX v. 10.13.

You could use NVIDIA, at least until recently, even on Mac laptops, e.g. via eGPUs (and presumably regular NVIDIA GPUs in desktop Macs, too?).

EDIT: 2019 article here:

Apple just doesn’t allow modern Nvidia GPUs on macOS Mojave, and this is a dramatic change from only six months ago. Given that a new Mac Pro is coming that could support Nvidia cards, and there are already eGPUs that should, it’s time that Apple did.

As with anything Apple, there’s a long history between the two companies. And, some bad blood.

A 2018 article (I’m not sure this is still possible):
https://9to5mac.com/2018/05/05/nvidia-egpu-thunderbolt-macos-script-video/

Enabling NVIDIA eGPUs on macOS

With macOS 10.13.4, Apple officially supported eGPUs for the first time, yet there was one glaring omission with the release — a lack of NVIDIA support. […]

In my hands-on walkthrough below, I show you how simple it is to take a fresh and clean macOS 10.13.4 install, and add unofficial NVIDIA GPU support to the mix. It’s literally just a matter of disabling SIP, and running the provided script in a Terminal window. After answering a few prompts and rebooting, NVIDIA support becomes available.

Apple had official Nvidia support back in 2009:

The reference you give shows no NVIDIA driver support after OS X 10.13, released in 2017. Although there are still security updates for hardware of a comparable age, attempting to run that version on a new machine will be challenging.

1 Like

It might be a rather stupid idea, but here’s my opinion.

Why don’t we stop bothering about AMD GPUs and consider adding support for Apple’s Metal API instead? It would be a whole new program of work, but we could put the same features on it as on AMDGPU or CUDA.

We currently have support for Vulkan in Julia, so I don’t think it would be impossible.
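Worth noting: a lot of user-level Julia GPU code is already written generically against the array interface, so a Metal back end would mostly need to supply an array type. A hedged sketch (the Metal array type named below is hypothetical):

```julia
# Generic code written against AbstractArray runs on any backend's arrays:
relu!(y::AbstractArray, x::AbstractArray) = (y .= max.(x, 0); y)

# The same function then works with whatever array type a backend provides:
#   x isa Array      → runs on the CPU
#   x isa CuArray    → runs via CUDA.jl
#   x isa ROCArray   → runs via AMDGPU.jl
#   x isa MtlArray   → could run via a hypothetical Metal backend
```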

1 Like

Welcome to the community.

The problem I see is that probably not enough people in the Julia community care about Metal.

People actually want to support AMD for (non-Mac-related) reasons, though I’m sure at some level that work could help the Mac. There’s nothing wrong with that, nor with you or anyone working on Metal.

It doesn’t have to be either/or, and in fact Vulkan may already work on Mac:
https://github.com/KhronosGroup/MoltenVK

so I only see Vulkan as a priority, not specific work for Metal.

I also suppose the reason people do not work on Metal only/directly is that the Mac is relatively unpopular for data science, which is what most people here use Julia for.

I don’t know enough about Metal to know if it’s mostly for [3D] graphics, or also (really) useful for GPGPU. I don’t even own a Mac, so I’m not looking much into that.

I’m all for Julia for 3D games; Julia is just not popular for that yet, even on Windows, and I do not foresee Julia displacing C++ for 3D games, or the game engines built with it, in the near future.

To me, C++ is a legacy language, with 3D games maybe the only area where it’s still really useful. For everything else I can think of, I would use/recommend a different language, most often Julia (or Rust or Zig). Yes, stuff like web browsers is written in languages like C++ and C, which is why we have security issues, and why Mozilla is migrating to Rust.

Your best bet here would be to contribute to or fork MetalCore.jl (mentioned above). It’s certainly achievable, but will require a substantial time investment.