Difference between GPUArrays.jl and KernelAbstractions.jl

Hello,

I would like to implement some simple matrix-vector multiplication for the case where neither of the arrays is strided (e.g., @view(A[[1, 4, 5], 1:10])). The standard LinearAlgebra.mul! doesn't support these arrays, so I thought I would implement the kernel from scratch. It shouldn't be too difficult.
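
For concreteness, here is a minimal illustration (on plain CPU arrays, just to show the array type involved):

```julia
using LinearAlgebra

A = rand(10, 10)

# A view indexed by a vector of integers is not a StridedArray,
# so the BLAS-backed mul! methods cannot be used for it:
Av = @view A[[1, 4, 5], 1:10]
Av isa StridedArray  # false
```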

I saw that, for non-symmetric and non-Hermitian matrices, the CUDA.jl package calls into GPUArrays.jl. So I think I should first define the kernel in that package, right?

But then I saw that there is also the KernelAbstractions.jl package, which is likewise able to define generic kernels for any platform.

I guess that I should implement this functionality in GPUArrays.jl, but what is the difference between the two packages?

KernelAbstractions.jl seems to be just a way of defining kernels for any platform, while GPUArrays.jl seems to define the linear algebra behind these objects. But it seems that GPUArrays.jl has its own mechanism for defining generic kernels, so why doesn't it use KernelAbstractions.jl to define them?

GPUArrays.jl predates KernelAbstractions.jl. @leios is working on porting the GPUArrays.jl kernels to KernelAbstractions.jl: KernelAbstractions version of GPUArrays by leios · Pull Request #525 · JuliaGPU/GPUArrays.jl · GitHub

Do you mean the opposite, moving KernelAbstractions.jl into GPUArrays.jl? I see in the PR that @leios is introducing KernelAbstractions.jl into GPUArrays.jl.

Right, so porting the GPUArrays.jl kernels to the KernelAbstractions.jl DSL. No package is being moved; both will continue to exist.

Ok, so I will start implementing the mul! function for non-strided arrays using KernelAbstractions.jl kernels, and then open a PR to GPUArrays.jl once @leios's PR is merged.
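
For reference, a rough sketch of such a kernel with KernelAbstractions.jl might look like this (matvec_kernel! and ka_mul! are just placeholder names, not GPUArrays.jl API; one work-item computes one row of the output):

```julia
using KernelAbstractions

# One work-item per output row: each one accumulates the dot
# product of its row of A with x using plain indexing, so it
# also works on non-strided views.
@kernel function matvec_kernel!(y, @Const(A), @Const(x))
    i = @index(Global, Linear)
    acc = zero(eltype(y))
    for j in axes(A, 2)
        acc += A[i, j] * x[j]
    end
    y[i] = acc
end

function ka_mul!(y, A, x)
    backend = get_backend(y)  # CPU, CUDA, AMDGPU, ...
    matvec_kernel!(backend)(y, A, x; ndrange = length(y))
    KernelAbstractions.synchronize(backend)
    return y
end
```

Since the device code only uses scalar indexing, the same source should run on any backend that KernelAbstractions.jl supports.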