I am using CUDAnative now and wonder if anyone could point me in the right direction to make the following two operations possible. Has anyone else found that things would be easier if these two patterns were supported? I understand that would require some coding for pointer conversion, etc.
Composite type array for the GPU:
struct Type1
    field1::Int32
    field2::Int32
end

const Type1Array = Vector{Type1}

function kernel_1(a::Type1Array)
end
Composite type with an array field:
struct Type2
    field1::Vector{Int32}
    field2::Int32
end

function kernel_2(b::Type2)
end
julia> struct Type1
           field1::Int32
           field2::Int32
       end
julia> array = [Type1(1,2)]
1-element Array{Type1,1}:
Type1(1, 2)
julia> gpu_array = adapt(CuArray, array) # same as `cu` but without the Float32 conversions
1-element CuArray{Type1,1,Nothing}:
Type1(1, 2)
julia> @cuda ((a)->(@cuprintln(a[1].field1); nothing))(gpu_array)
julia> 1
For Type2 you need to register GPU-compatible counterpart structs and tell Adapt how to convert to them. But there are a couple of catches: given how Type2 is currently declared, with a hard-coded Vector field, you'd need a Type2GPU struct to represent the type with the vector uploaded to the GPU, and another one for actual use on the GPU. Furthermore, Adapt (the machinery underpinning these conversions) doesn't change the element type of an array. So it's probably easier if you use NamedTuples instead:
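A minimal sketch of that, assuming some Int32 data for field1 (Adapt already knows how to recurse through tuples and NamedTuples, so no extra registration is needed):

julia> b = (field1 = adapt(CuArray, Int32[1, 2, 3]), field2 = Int32(42))

julia> @cuda ((x)->(@cuprintln(x.field1[1]); nothing))(b)  # prints 1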
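If you do want to keep a proper struct, here is a minimal sketch of the registration route, under the assumption that the array field is made a type parameter, so a single Type2GPU definition can hold both the CuArray on the host and the CuDeviceArray inside the kernel:

using Adapt, CuArrays, CUDAnative

# GPU-compatible counterpart of Type2: the array field is parameterized
# instead of hard-coded to Vector{Int32}
struct Type2GPU{A}
    field1::A
    field2::Int32
end

# tell Adapt how to convert the wrapped array; `@cuda` applies this
# (through cudaconvert) when the struct is passed to a kernel
Adapt.adapt_structure(to, x::Type2GPU) = Type2GPU(adapt(to, x.field1), x.field2)

b = Type2GPU(adapt(CuArray, Int32[1, 2, 3]), Int32(42))
@cuda ((x)->(@cuprintln(x.field1[1]); nothing))(b)  # prints 1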
Yes, thanks for your comments, it really works. And I just realized that if everything in Type2 is a tuple instead of a vector, then everything I wanted just works, due to the static allocation.
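For the record, a minimal sketch of that (the Type2Tuple name and the fixed size 3 are just for illustration): with an NTuple field the whole struct is isbits, so it can be passed to the kernel directly, without any adapt call:

using CUDAnative

struct Type2Tuple
    field1::NTuple{3,Int32}  # statically sized, so the struct stays isbits
    field2::Int32
end

b = Type2Tuple((Int32(1), Int32(2), Int32(3)), Int32(42))
@cuda ((x)->(@cuprintln(x.field1[1]); nothing))(b)  # prints 1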