ArrayFire vs AbstractArray performance and future in Julia

So I recently ran into ArrayFire and its Julia wrapper ArrayFire.jl and was impressed by the performance benchmarks. I have not tried it myself, and I was wondering if anyone has experimented with it and whether it's compatible with Flux, DifferentialEquations, etc.?

AFAIK it is not, mostly because of operator coverage. I think that's partially a reflection of how little non-CUDA GPU compute happens in ML (and what does happen is probably better covered by AMDGPU.jl or oneAPI.jl).


Thanks for the response. What sparked my interest in this is FAIR's latest library Flashlight (Flashlight: Fast and flexible machine learning in C++), which builds on ArrayFire as a tensor library. These folks are usually all about performance, which is why I assumed there's something to be said for ArrayFire. I still live in a world where I hope the models I write can rely on a backend that always chooses the right available hardware for the job. :joy:

People have used it with DifferentialEquations.jl successfully (you can search this Discourse for many answers that do it), but usually CUDA.jl is going to be better because of how it can generate fused kernels.
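To make the fused-kernel point concrete, here is a minimal sketch (assuming CUDA.jl and a CUDA-capable GPU; array sizes are just illustrative). The dotted expression is compiled into a single GPU kernel rather than one kernel and one temporary array per operation:

```julia
# Minimal sketch; assumes CUDA.jl is installed and a CUDA GPU is available.
using CUDA

x = CUDA.rand(Float32, 10^6)
y = CUDA.rand(Float32, 10^6)

# The whole broadcast below is fused into a single GPU kernel:
# no intermediate arrays for sin.(x) or x .^ 2 are materialized.
z = @. sin(x) * y + 2f0 * x^2
```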


I’m a bit confused about the title. Rather than AbstractArray perhaps you mean Array?

AFArray is an AbstractArray after all:
https://github.com/JuliaGPU/ArrayFire.jl/blob/master/src/array.jl#L1
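For illustration, a quick check of that subtype relation (assuming ArrayFire.jl is installed and an ArrayFire backend is available; the `rand(AFArray{...}, ...)` constructor follows the package README):

```julia
using ArrayFire

# AFArray{T,N} is declared as a subtype of AbstractArray{T,N}:
AFArray{Float32,2} <: AbstractArray   # true

# so generic code written against AbstractArray dispatches on it:
a = rand(AFArray{Float32}, 100, 100)
sum(a)
```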


Seems like even Flashlight uses MKL-DNN and CUDA for many of its backend ops; ArrayFire proper is used only for basic ops and OpenCL support. I'm not sure anyone would object to resuscitating the Julia package, but trying to fit it into the current GPUArrays interface would likely be a challenge.

ArrayFire was a great way to bootstrap Julia’s GPU compute support, but it is not what you want to use for a mature, flexible language like Julia in the long term. Since Julia+GPUCompiler can generate arbitrary (fused) kernels with competitive performance to CUDA C++/ROCm HIP, there’s not really a good reason to use ArrayFire (unless your hardware isn’t supported by CUDA.jl/AMDGPU.jl/oneAPI.jl).
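As a rough sketch of what "arbitrary kernels in plain Julia" looks like with CUDA.jl (the function name and launch configuration here are purely illustrative):

```julia
using CUDA

# A hand-written SAXPY kernel: y .= a .* x .+ y, one thread per element.
function saxpy!(y, a, x)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(y)
        @inbounds y[i] = a * x[i] + y[i]
    end
    return nothing
end

x = CUDA.rand(Float32, 2^20)
y = CUDA.rand(Float32, 2^20)

threads = 256
blocks  = cld(length(y), threads)
@cuda threads=threads blocks=blocks saxpy!(y, 2f0, x)
```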

If you want to reach more people who have unsupported GPUs or use Windows with an AMD GPU, the better approach is to consider reviving CLArrays.jl (warning: not a small or easy task) and making it fully GPUArrays-compatible.


I was not aware of this. Thanks for pointing it out.