Is there any way to copy a Dict or a custom datatype that contains an Array to the GPU with CUDA?

I want to use the LinearAlgebra library to simplify my programming, so I wrap some Arrays in a struct. But I find I cannot copy it to the GPU, because it is a non-bitstype.

Similarly, I want to look up some tables which are stored in a Dict. A Dict is a non-bitstype too, so it cannot be copied to the GPU either.

So, I wonder: is there a trick to copy them to the GPU and accomplish these calculations?

You need to convert those arrays to GPU-compatible arrays, e.g. CuArray with CUDA.jl, and parameterize your struct on the array type. Optionally, you can use Adapt.jl to make those conversions more generic.
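
For example, a minimal sketch of what that could look like (assuming CUDA.jl and Adapt.jl are installed; the Tables struct and its fields are made-up names for illustration):

using CUDA, Adapt

# Parameterize the struct on the array type, so the same definition can
# hold plain Arrays on the CPU or CuArrays on the GPU.
struct Tables{T<:AbstractArray}
    x::T
    y::T
end

# Adapt rule so adapt (and cu) can recurse into the fields.
Adapt.adapt_structure(to, t::Tables) = Tables(adapt(to, t.x), adapt(to, t.y))

t_cpu = Tables(rand(Float32, 16), rand(Float32, 16))
t_gpu = adapt(CuArray, t_cpu)   # both fields are now CuArrays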

Dictionaries won’t work.

A CuArray wrapped in a struct does not work either. Are there any solutions?

using CUDA

struct TestType
    a::CuArray
    b::CuArray
    c::Float32
end

data = CUDA.fill(TestType([1., 2., 3.], [4., 5., 6.], 7.), 100)

This fails with

ERROR: LoadError: CuArray only supports bits types

as well.

Are there any tricks to make it work?

You didn’t mention wrapping that struct in an array as well. That’s currently not supported by CuArray. You can work around it by eagerly converting the CuArrays to CuDeviceArrays (which are isbits) using cudaconvert and storing those in a CuArray; see e.g. Passing array of pointers to CUDA kernel. But to cudaconvert your TestType here you’ll need to make it parametric (a::T, b::T with T<:AbstractArray) and add Adapt.jl rules (Adapt.adapt_structure(to, x::TestType) = TestType(adapt(to, x.a), ...)), as I mentioned above.
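
Putting that together, a rough sketch of that workaround could look like this (building on the TestType from the post above; the element values and the elems variable are only for illustration):

using CUDA, Adapt

struct TestType{T<:AbstractArray}
    a::T
    b::T
    c::Float32
end

# Adapt rule so cudaconvert can recurse into the fields.
Adapt.adapt_structure(to, x::TestType) =
    TestType(adapt(to, x.a), adapt(to, x.b), x.c)

# Host-side elements whose fields are CuArrays.
elems = [TestType(cu([1f0, 2f0, 3f0]), cu([4f0, 5f0, 6f0]), 7f0) for _ in 1:100]

# Eagerly convert each element to its device-side representation
# (CuDeviceArray fields make the struct isbits), then upload the result.
data = CuArray(cudaconvert.(elems))

# Keep elems (the original CuArrays) alive while data is in use;
# the device-side arrays only reference their memory.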


OK. Thank you for your answer.

Sorry, another question: how can I copy a CuDeviceArray back to CPU memory?

CuDeviceArray is just a container for a pointer; you would have allocated a CuArray in the first place. Copy that one to CPU memory using Array(::CuArray).
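
For example (a trivial sketch; d_x is just a placeholder name):

using CUDA

d_x = CUDA.rand(Float32, 10)   # the CuArray you allocated
x = Array(d_x)                 # copies the data back to CPU memory as a Vector{Float32}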
