Hello,
I'm still a little confused about Adapt.jl. I read several posts on the forum and tried it for my case, but it did not work as expected.
I have a custom struct like this:
```julia
using Adapt
using CUDA

# T is either Float64 or Float32
mutable struct MyStruct{T}
    a::T
    b::AbstractArray{T, 1}
    c::AbstractArray{T, 2}
end

struct GPUHelper{T}
    a::T
    b::CuDeviceVector{T, 1}
    c::CuDeviceMatrix{T, 1}
end

Adapt.adapt_structure(to, MS::MyStruct) = GPUHelper(
    MS.a,
    adapt(to, MS.b),
    adapt(to, MS.c)
)
```
My aim is to move `data_cpu` to the GPU (as `data_gpu`) and use it in a kernel function, so I did:
```julia
a = 3.8
b = rand(10)
c = rand(10, 2)

data_cpu = MyStruct{Float64}(a, b, c)
data_gpu = MyStruct{Float64}(a, CuArray(b), CuArray(c))

isbits(cudaconvert(data_gpu)) # true

function gpu!(data_gpu)
    # kernel function
    return nothing
end
```
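For context, the launch looks roughly like this (a simplified sketch; the actual kernel body is omitted above, and the thread count is just for illustration):

```julia
# Passing data_gpu to @cuda triggers cudaconvert, which goes through the
# adapt_structure method above and hands the kernel an isbits GPUHelper.
@cuda threads=length(b) gpu!(data_gpu)
```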
Afterwards, I need to copy `data_gpu` back to `data_cpu` using Adapt.jl.
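At the moment the only way I see is a field-by-field copy, something like this (just a sketch of the verbosity I would like to avoid):

```julia
# Hypothetical field-by-field copy back to the CPU struct:
data_cpu.a = data_gpu.a
data_cpu.b = Array(data_gpu.b)
data_cpu.c = Array(data_gpu.c)
```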
The above code works, but I have the following questions:
- In my case, the fields `.b` and `.c` are always a `Vector{T}` and an `Array{Float64, 2}`; however, it seems I cannot declare them with these concrete types like this.
- As you can see from how I construct `data_gpu`: given the number of fields, this writing style is not very pretty. I guess I should be able to implement it with something like `data_gpu = adapt(CuArray, data_cpu)` or `data_cpu = adapt(Array, data_gpu)`, but unfortunately I get the following error:
```
ERROR: MethodError: no method matching GPUHelper(::Float64, ::CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}, ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer})
Closest candidates are:
  GPUHelper(::T, ::CuDeviceVector{T, 1}, ::CuDeviceMatrix{T, 1}) where T at c:\test\case.jl:11
Stacktrace:
 [1] adapt_structure(to::Type, MS::MyStruct{Float64})
   @ Main c:\test\case.jl:16
 [2] adapt(to::Type, x::MyStruct{Float64})
   @ Adapt C:\Users\test\.julia\packages\Adapt\0zP2x\src\Adapt.jl:40
 [3] top-level scope
   @ REPL[1]:1
```
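For reference, the stack trace above is from the `CuArray` direction; the `adapt(Array, data_gpu)` direction fails with an analogous `MethodError`, just with plain `Vector`/`Matrix` arguments instead of `CuArray`s:

```julia
# Minimal call that reproduces the error above:
data_gpu = adapt(CuArray, data_cpu)
```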
Thank you so much~