I want to use the LinearAlgebra library to simplify my programming, so I wrapped some Arrays in a struct. But I found I cannot copy it to the GPU because it is a non-bitstype.
Similarly, I want to look up some tables which are stored in a Dict. A Dict is also a non-bitstype, so it cannot be copied to the GPU either.
So I wonder: is there a trick to copy them to the GPU and accomplish these calculations?
You need to convert those arrays to GPU-compatible arrays, e.g. CuArray with CUDA.jl (for instance by parameterizing your struct on the array type). Optionally, you can use Adapt.jl to make those conversions more generic.
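A minimal sketch of that idea, assuming a struct named `TestType` with two array fields (the field names are illustrative): by parameterizing on the array type, the same definition holds either CPU `Array`s or `CuArray`s.

```julia
using CUDA

# Parametric struct: T can be any AbstractArray subtype,
# so the same definition works on CPU and GPU.
struct TestType{T<:AbstractArray}
    a::T
    b::T
end

cpu = TestType(rand(4), rand(4))                 # holds Vector{Float64}
gpu = TestType(CuArray(cpu.a), CuArray(cpu.b))   # holds CuArray fields
```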
You didn’t mention wrapping that struct in an array as well. That’s currently not supported by CuArray. You can work around it by eagerly converting the CuArrays to CuDeviceArray (which is isbits) using cudaconvert and storing those in a CuArray; see e.g. Passing array of pointers to CUDA kernel. But to cudaconvert your TestType here, you’ll need to make it parametric (a::T, b::T where T<:AbstractArray) and add Adapt.jl rules (Adapt.adapt_structure(to, x::TestType) = TestType(adapt(to, x.a), ...)), as I mentioned above.
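Putting the pieces together, a hedged sketch of that workaround (assuming the parametric `TestType` with fields `a` and `b`; this needs a working GPU):

```julia
using CUDA, Adapt

struct TestType{T<:AbstractArray}
    a::T
    b::T
end

# Adapt rule: converting the struct means converting each field.
Adapt.adapt_structure(to, x::TestType) =
    TestType(adapt(to, x.a), adapt(to, x.b))

x     = TestType(rand(4), rand(4))
x_gpu = adapt(CuArray, x)      # fields become CuArrays
x_dev = cudaconvert(x_gpu)     # fields become CuDeviceArrays (isbits)

# Now the device-side structs can themselves be stored in a CuArray
# and passed to a kernel.
xs = CuArray([x_dev])
```

Note that the `CuArray`s in `x_gpu` must stay alive while `xs` is in use, since the `CuDeviceArray`s only reference their memory.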
CuDeviceArray is just a container for a pointer; the actual memory belongs to the CuArray you allocated in the first place. Copy that one back to CPU memory using Array(::CuArray).
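For completeness, copying back to the host looks like this (a small sketch, assuming a GPU is available):

```julia
using CUDA

d = CUDA.rand(Float32, 4)  # CuArray living in GPU memory
h = Array(d)               # copies the data back into a host Vector{Float32}
```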