Hi folks.
I am trying to pass a custom struct to a kernel. I have done this successfully with CUDA.jl and Metal.jl using the Adapt package, but I cannot get the same to work with KernelAbstractions. Here is the relevant bit:
struct UnstructuredGrid{T}
    connectivity::T
    offsets::T
    ncells::Int32
end

function Adapt.adapt_structure(to, from::UnstructuredGrid)
    connectivity = Adapt.adapt_structure(to, from.connectivity)
    offsets = Adapt.adapt_structure(to, from.offsets)
    UnstructuredGrid(connectivity, offsets, from.ncells)
end
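For what it's worth, an `adapt_structure` method like the one above does recurse into the fields; the error message shows the kernel still received `UnstructuredGrid{Vector{Int32}}`, which suggests the fields were plain host `Vector`s at launch time (i.e. the conversion rule for the GPU backend never fired, or the data was never moved). Here is a minimal CPU-only sketch of the mechanism, where `FakeBackend` and `FakeDeviceVector` are made-up stand-ins for a real device array type, just to show the recursion working:

```julia
using Adapt

struct UnstructuredGrid{T}
    connectivity::T
    offsets::T
    ncells::Int32
end

Adapt.adapt_structure(to, g::UnstructuredGrid) =
    UnstructuredGrid(Adapt.adapt(to, g.connectivity),
                     Adapt.adapt(to, g.offsets),
                     g.ncells)

# Toy "backend" so the recursion is visible on the CPU; on a real GPU
# backend the storage rule would produce a device array instead.
struct FakeBackend end
struct FakeDeviceVector{T}
    data::Vector{T}
end
Adapt.adapt_storage(::FakeBackend, v::Vector) = FakeDeviceVector(v)

grid  = UnstructuredGrid(Int32[1, 2, 3], Int32[0, 3, 3], Int32(1))
gridd = Adapt.adapt(FakeBackend(), grid)

gridd.connectivity  # now a FakeDeviceVector: adapt recursed into the fields
```

As a side note, `Adapt.@adapt_structure UnstructuredGrid` generates an equivalent field-by-field method for you.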
This still leads to an isbits error:
KernelError: passing and using non-bitstype argument
Argument 5 to your kernel function is of type UnstructuredGrid{Vector{Int32}}, which is not isbits:
.connectivity is of type Vector{Int32} which is not isbits.
.offsets is of type Vector{Int32} which is not isbits.
I thought that KernelAbstractions used the same mechanism, but I could not find any documentation on it. Any help in resolving this would be much appreciated.
Does your question imply that Kitware is doing some VTK ↔ Julia integration?
Alas, I don’t know enough to help you with your question though.
Perhaps this question on GitHub is related to your issue?
Definitely interested in that, but at this point I am (on my own time) exploring what it takes to develop a performance-portable (CPU and GPU) visualization library purely in Julia. I am interested in understanding the level of complexity involved compared to VTK-m, which uses C++ template metaprogramming plus the Kokkos library to achieve it.
I have done a simple investigation into calling VTK from Julia, and it is fairly easy if you expose part of VTK through a C shim. Exposing all of VTK would require automatically generating a C shim for the library, which is doable with VTK's own wrapper generators. It would take some effort, though, so we need to line up the right project for it.
In JuliaGPU we have made the decision that we will never automatically move your data for you.
I got bitten in other frameworks where memory automatically follows the compute.
Instead, our philosophy is "compute follows data".
Or you do the same inside Contour, as an explicit memory-movement request.
As an example, instead of passing the backend to Contour, you could use get_backend from KernelAbstractions.jl to ask the data where the kernel should execute, and leave the data movement to the user.
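A sketch of that "compute follows data" pattern (the `scale!`/`scale` names are illustrative, not from the thread; the backend is derived from the array rather than passed in):

```julia
using KernelAbstractions

@kernel function scale!(a, s)
    i = @index(Global)
    a[i] *= s
end

function scale(a, s)
    backend = get_backend(a)        # CPU for Array, CUDABackend for CuArray, ...
    kernel! = scale!(backend, 64)   # instantiate the kernel for that backend
    kernel!(a, s; ndrange = length(a))
    KernelAbstractions.synchronize(backend)
    return a
end

scale([1.0, 2.0, 3.0], 2.0)  # runs on whatever device `a` lives on
```

The same call works unchanged for a `CuArray` once CUDA.jl is loaded, because `get_backend` answers `CUDABackend()` for it.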
Or you let the user provide the backend and then do the movement yourself.
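That second option could look something like this (the `to_backend` helper is hypothetical; it uses `KernelAbstractions.allocate` plus `copyto!` so the movement stays an explicit, caller-visible step):

```julia
using KernelAbstractions

# Illustrative helper: the caller picks the backend, and the library
# performs the movement explicitly via allocate + copyto!.
function to_backend(backend, v::AbstractVector)
    d = KernelAbstractions.allocate(backend, eltype(v), length(v))
    copyto!(d, v)
    return d
end

host = Int32[1, 2, 3]
dev  = to_backend(CPU(), host)  # with CUDA.jl loaded: to_backend(CUDABackend(), host)
```

On the `CPU()` backend this is just a copy into a fresh `Vector`; on a GPU backend `allocate` returns a device array and `copyto!` performs the host-to-device transfer.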