Flux model on GPU?

How can I determine whether a Flux model is on a GPU (of any kind)? For example, given m = Dense(1, 1) |> gpu (or |> cpu), should the check somehow be based on typeof(m), or are there other alternatives? My use case is a (training) function that takes the model (among other things) as input; if the model is on a GPU, I need to copy some other variables inside the function to the GPU as well.

You can use the GPUArrays.device function.


Hmm. How would you apply this? I get

julia> methods(GPUArrays.device)
# 0 methods for generic function "device" from GPUArrays

From the link, it looks like a variable of type AbstractArray is expected.

Use the mapping and traversal functions in the Functors API to apply it over a model. The reason it’s not trivial to determine if an entire model is “on GPU” is that one could construct e.g. a hybrid model that has some params on CPU and others on GPU.
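
For example, here is a rough sketch (not from this thread; the gpu_flags helper is just illustrative) that walks the model with Functors.fmap and records, for every array leaf, whether it is a GPU array of any backend:

```julia
using Flux, Functors, GPUArrays

# Walk every leaf of the model; for array leaves, record whether they are
# GPU arrays (any backend). Applying `any`/`all` to the result lets you
# decide how to treat hybrid models with params on both CPU and GPU.
function gpu_flags(m)
    flags = Bool[]
    fmap(m) do x
        x isa AbstractArray && push!(flags, x isa GPUArrays.AnyGPUArray)
        x
    end
    return flags
end

m = Dense(1, 1) |> gpu          # with a GPU backend (e.g. CUDA.jl) loaded
any(gpu_flags(m))               # true if at least one parameter lives on a GPU
```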


Yes, a model can live on several devices (typically multiple GPUs). If you know all the parameters of a model are on the same device, you can do device(m.layers[1].weight), which tells you the GPU holding that specific matrix. Weirdly, I couldn’t make this work with GPUArrays, so I used the device function from CUDA directly, but that one does not work on plain CPU matrices.
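
For example, a minimal sketch assuming CUDA and a Chain whose parameters all live on one GPU:

```julia
using Flux, CUDA

m = Chain(Dense(1, 1)) |> gpu

CUDA.device(m.layers[1].weight)   # the CuDevice holding this weight matrix
# CUDA.device(rand(1, 1))         # errors: no method for a plain CPU Array
```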

So I guess to answer your question, you could do model.layers[1].weight isa GPUArrays.AnyGPUArray, which will be true for any brand of GPU. If you only care about CUDA, you can check isa CuArray instead and drop the dependency on the GPUArrays interface.
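
Applied to the original use case, a sketch could look like this (assuming a Chain whose parameters all share one device; the function and variable names are illustrative):

```julia
using Flux, GPUArrays

# Move auxiliary data to the GPU only if the model's parameters already live there.
function train!(m, data)
    if m.layers[1].weight isa GPUArrays.AnyGPUArray   # true for any GPU backend
        data = gpu(data)
    end
    # ... rest of the training loop using `m` and `data` ...
end
```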
