Using custom structs with KernelAbstractions

Hi folks.
I am trying to pass a custom struct to a kernel. I have done this successfully with CUDA.jl and Metal.jl using the Adapt package, but I can't get the same to work with KernelAbstractions. Here is the relevant bit:

struct UnstructuredGrid{T}
  connectivity :: T
  offsets :: T
  ncells :: Int32
end

function Adapt.adapt_structure(to, from::UnstructuredGrid)
  connectivity = Adapt.adapt_structure(to, from.connectivity)
  offsets = Adapt.adapt_structure(to, from.offsets)
  UnstructuredGrid(connectivity, offsets, from.ncells)
end

This still leads to an isbits error:

KernelError: passing and using non-bitstype argument

Argument 5 to your kernel function is of type UnstructuredGrid{Vector{Int32}}, which is not isbits:
  .connectivity is of type Vector{Int32} which is not isbits.
  .offsets is of type Vector{Int32} which is not isbits.

I thought that KernelAbstractions used the same mechanism, but I could not find any documentation on it. Any help resolving this would be much appreciated.

Does your question imply that Kitware is doing some VTK ↔ Julia integration? :slight_smile:
Alas, I don’t know enough to help you with your question though.
Perhaps related to your issue is this question on GitHub?

Definitely interested in that but at this point I am (on my own time) exploring what it takes to develop a performance portable (CPU and GPUs) visualization library purely in Julia. I am interested in understanding the level of complexity involved compared to VTK-m, which uses C++ template meta-programming + the Kokkos library to achieve it.

I have done a simple investigation into calling VTK from Julia, and it is fairly easy if you expose part of VTK through a C shim. Exposing all of VTK would require automatically generating a C shim for the whole library, which is doable through VTK's own wrapper generators. It would take some effort, though, so we need to line up the right project for it.

I think your adapt_structure needs to be:

function Adapt.adapt_structure(to, from::UnstructuredGrid)
  connectivity = adapt(to, from.connectivity)
  offsets = adapt(to, from.offsets)
  UnstructuredGrid(connectivity, offsets, from.ncells)
end
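
As an aside, for a plain field-by-field adaption like this, Adapt.jl also ships a convenience macro that generates the same method for you:

using Adapt

# Generates Adapt.adapt_structure(to, ::UnstructuredGrid) that calls
# adapt(to, field) on every field and reconstructs the struct.
Adapt.@adapt_structure UnstructuredGrid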

No cigar, unfortunately. It doesn't look like adapt_structure is getting called at all. The code is here if you want to check it out:

I tried with CUDA as well; same result.

Ah! Definitely still in a JuliaCon fugue.

In JuliaGPU we have made the decision that we will never automatically move your data for you.
I got bitten by other frameworks where memory automatically follows the compute.
Instead, our philosophy is "compute follows data".

So you need:

Contour(backend, adapt(backend, grid), adapt(backend, data), 130)

Or do the same inside Contour; either way it is an explicit memory-movement request.

As an example, instead of passing the backend to Contour you could use get_backend from KernelAbstractions.jl to ask where the kernel should execute, and leave the data movement to the user.

Or you let the user provide the backend and then do the movement yourself.
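
To illustrate the two options, here is a minimal sketch. Contour, contour_kernel!, and UnstructuredGrid are placeholders from this thread, and the exact signatures are my assumptions, not a definitive implementation:

using KernelAbstractions, Adapt

# Option 1: the caller has already adapted grid and data; ask the data
# where to run ("compute follows data").
function contour(grid, data, isovalue)
    backend = get_backend(data)
    kernel = contour_kernel!(backend)   # hypothetical KA kernel from this thread
    kernel(grid, data, isovalue; ndrange = grid.ncells)
    KernelAbstractions.synchronize(backend)
end

# Option 2: the caller names the backend and we do the explicit
# data movement ourselves via Adapt.
function contour(backend::KernelAbstractions.Backend, grid, data, isovalue)
    contour(adapt(backend, grid), adapt(backend, data), isovalue)
end

With option 1, a user on an NVIDIA GPU would call contour(adapt(CUDABackend(), grid), adapt(CUDABackend(), data), 130) themselves; with option 2 the adapt calls move inside the library but remain explicit.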

Awesome! Thank you.