Hello,
I have a Julia package with an extension triggered by CUDA.jl. Now, I would like to define a custom kernel as a method extension of the package. In principle, I could use the CUDA kernel handling directly, but I understood that it is recommended to use KernelAbstractions.jl instead, also because my package may support other backends like Metal.jl in the future.
So, I need to add an extension that uses KernelAbstractions.jl.
My question is whether I have to write a separate extension just for KernelAbstractions.jl. This would mean running using KernelAbstractions
together with CUDA.jl.
Are there other ways?
I was thinking of the possibility of defining the kernel directly in the CUDA extension, since KernelAbstractions is a dependency of CUDA, and I should be able to use it somehow. However, this method would only be defined when I import CUDA, and if I use Metal it wouldn't be available, I guess.
How do current GPU libraries actually use KernelAbstractions.jl together with package extensions?
You can write kernels with KernelAbstractions.jl without requiring GPU-specific dependencies (CUDA, AMDGPU, etc.). You can make KA a direct dependency of your library; this won't pull in the vendor libraries as dependencies, which is one of the reasons to use KA.
If you have code that does require vendor-specific implementations, then you would accomplish that using extensions. End-users supply the backend
when they call your library.
using MyLibrary # only dep is KernelAbstractions
using KernelAbstractions

const dev = :GPU # choose the backend

if dev === :GPU
    # This will trigger a package extension for CUDA if you have one
    @info "Using CUDA"
    using CUDA
    using CUDA.CUDAKernels
    backend = CUDABackend()
    ArrayT = CuArray # this can be used to aid in dispatching
    CUDA.allowscalar(false)
else
    backend = CPU()
    ArrayT = Array
end

# make a backend-specific array
T = Float64
A = KernelAbstractions.zeros(backend, T, (10, 10))
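For the vendor-agnostic part, the kernels themselves only need KernelAbstractions. As a minimal sketch (the names scale_kernel! and scale! are placeholders for illustration, not an existing API), something like this runs on whatever backend the input array lives on:

using KernelAbstractions

# element-wise y[i] = α * x[i], written once for all backends
@kernel function scale_kernel!(y, x, α)
    i = @index(Global)
    @inbounds y[i] = α * x[i]
end

function scale!(y, x, α)
    backend = KernelAbstractions.get_backend(x) # CPU(), CUDABackend(), ...
    kernel = scale_kernel!(backend)
    kernel(y, x, α; ndrange = length(x))
    KernelAbstractions.synchronize(backend)
    return y
end

Because the backend is recovered from the array itself via get_backend, the same function works for Array, CuArray, MtlArray, etc., without your library ever importing a vendor package.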
Your library would be organized something like this:
MyLibrary.jl/
- ext/
-- CUDA/
--- cuda_specific_implementation.jl
-- Metal/
--- metal_specific_implementation.jl
-- MyLibraryCUDAExt.jl
-- MyLibraryMetalExt.jl
- src/
-- MyLibrary.jl
- test/
- Project.toml
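For the vendor-specific parts, CUDA.jl and Metal.jl are listed under [weakdeps] in Project.toml, with the extension modules declared under [extensions]. As a rough sketch, MyLibraryCUDAExt.jl could look something like this (default_backend is a hypothetical hook, just to illustrate where CUDA-only methods go):

module MyLibraryCUDAExt

using MyLibrary
using CUDA
using CUDA.CUDAKernels: CUDABackend

# Hypothetical hook: let MyLibrary pick the right KernelAbstractions
# backend when it is handed a CuArray.
MyLibrary.default_backend(::Type{<:CuArray}) = CUDABackend()

include("CUDA/cuda_specific_implementation.jl")

end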
Thank you for replying.
My package is QuantumToolbox.jl. It mainly uses LinearAlgebra operations, which are more or less already implemented for all the backends. This means that I just need to convert the internal structures from CPU to GPU arrays, and almost everything works thanks to multiple dispatch.
There are only minor things to be implemented for GPU arrays, and that's why I created a CUDA extension.
However, I now need to implement mapslices, which is not implemented for GPU arrays (see here). So I decided to write the kernel myself, and I ran into this question of how to proceed.
So, do you recommend creating a separate extension for KernelAbstractions?
What about defining mapslices(::AbstractGPUArray, ...) using the GPUArrays.AbstractGPUArray type from GPUArrays.jl inside a package extension QuantumToolboxGPUArraysExt? That function can then be implemented using KernelAbstractions.jl. AbstractGPUArray is a supertype of the various GPU array types in the Julia ecosystem.
If you load a package like CUDA.jl, Metal.jl, etc., they depend on GPUArrays.jl, so QuantumToolboxGPUArraysExt will get loaded and then you can call mapslices(::AbstractGPUArray, ...) on the appropriate GPU array type, i.e. CuArray, MtlArray, etc.
(Obviously overloading Base.mapslices(::AbstractGPUArray, ...) would be type piracy, so you should probably define your own QuantumToolbox.mapslices(a::AbstractArray, ...) = Base.mapslices(a, ...) and then overload QuantumToolbox.mapslices(::AbstractGPUArray, ...) in QuantumToolboxGPUArraysExt.)
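A minimal sketch of that indirection, assuming the extension is named QuantumToolboxGPUArraysExt and KernelAbstractions is available inside it (the GPU body is omitted; this only shows the dispatch structure):

# In src/QuantumToolbox.jl: a fallback that just forwards to Base
mapslices(f, A::AbstractArray; dims) = Base.mapslices(f, A; dims = dims)

# In ext/QuantumToolboxGPUArraysExt.jl
module QuantumToolboxGPUArraysExt

using QuantumToolbox
using GPUArrays: AbstractGPUArray
using KernelAbstractions

function QuantumToolbox.mapslices(f, A::AbstractGPUArray; dims)
    backend = KernelAbstractions.get_backend(A) # CUDABackend(), MetalBackend(), ...
    # ... launch KernelAbstractions kernels on `backend` to apply f slice-wise ...
end

end

Users then call QuantumToolbox.mapslices, and the GPU method is picked automatically whenever the argument is some AbstractGPUArray, with no type piracy on Base.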
This means that the users have to import GPUArrays
if they want to use this functionality, right?
But how can I use KernelAbstractions.jl if the trigger of the extension is GPUArrays.jl?
Also, is it good practice to add a GPUArrays.jl or KernelAbstractions.jl extension? Or should they directly be dependencies of the package? In the case of QuantumToolbox.jl, I would keep everything separate, in order to have the simplest basic code, which can be extended by loading the extensions.
Ok, does this mean that, even if I only have the extension for GPUArrays.jl, when I load CUDA.jl, for example, the GPUArrays.jl extension is loaded automatically?
In other words, is an extension triggered by a package B activated when I load a package A that has B as a dependency?
Ok, apparently this is true. So I'm making an extension with both GPUArrays.jl and KernelAbstractions.jl as triggers. This seems to work: it is only necessary to load CUDA.jl or similar, and the extension will be triggered.
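For reference, a sketch of how the two triggers are declared in QuantumToolbox.jl's Project.toml (UUIDs omitted; they should be copied from the packages' own Project.toml files):

[weakdeps]
GPUArrays = "..."          # UUID of GPUArrays.jl
KernelAbstractions = "..." # UUID of KernelAbstractions.jl

[extensions]
QuantumToolboxGPUArraysExt = ["GPUArrays", "KernelAbstractions"]

Since CUDA.jl and the other backend packages depend on both trigger packages, loading any of them activates the extension, as observed above.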