How to choose which GPU to send the workload to in Flux.jl?

I have set up a neural network in Flux and I would like to train it on a GPU. However, my laptop has two GPUs, one of which is an onboard GPU, so I would like to send the workload to the better one.

But how do I choose which GPU to send the workload to?

using Flux

policy = Flux.Chain(
    Conv((2, 2), 1 => 128, relu),
    Conv((2, 2), 128 => 128, relu),
    x -> reshape(x, :, size(x, 4)),
    Dense(512, 512, relu),
    Dense(512, 4),
    softmax,
) |> gpu

@time training_sample = cat((init_game() .|> Float64 for i=1:10_000)..., dims=4) |> gpu
@time policy(training_sample)

If you’re using CUDA you don’t need to worry about that, since it won’t be able to use your onboard GPU :wink:
Anyway, this is how you could do it:

From the CUDAnative docs:

This code is executed in a default, global context for the first device in your system. Similar to cudaSetDevice, you can switch devices by calling CUDAnative’s device! function:

# change the active device
device!(1)

# the same, but only temporarily
device!(2) do
    # ...
end

If nothing weird is happening in CuArrays, this should also set the device Flux uses for the GPU :wink:
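
For example, a minimal sketch of that idea, assuming CuArrays allocations follow the active CUDAnative device as described above (the device index 1 is an assumption; use whichever index corresponds to your discrete GPU):

using CUDAnative, CuArrays, Flux

# switch the active CUDA device (indices are 0-based);
# the index 1 here is an assumption -- pick the one for your discrete GPU
device!(1)

# anything moved with `gpu` afterwards should end up on the active device
model = Chain(Dense(10, 4), softmax) |> gpu
x = rand(Float32, 10, 32) |> gpu
model(x)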


Can you please also tell me how to use more than one GPU? I have 3 GPUs, but Flux automatically uses only the first one. I want to distribute the load across all three. Thanks.

Simon’s code above does just that: you’d have to switch between all 3 devices.
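
As a rough sketch only (not Flux’s built-in functionality; the variable `policy_cpu` and the device indices are assumptions for illustration), you could keep one replica of the model per device and split each batch between them, switching with `device!`. Synchronizing gradients across the replicas during training would still be manual work:

using CUDAnative, CuArrays, Flux

# hypothetical sketch of manual data parallelism over 3 GPUs:
# one model replica per device, each handling a slice of the batch
ndevices = 3

# `policy_cpu` is a placeholder for your model still living on the CPU
replicas = map(0:ndevices-1) do i
    device!(i) do
        deepcopy(policy_cpu) |> gpu
    end
end

# run the forward pass for each batch slice on its own device
function predict_slices(replicas, slices)
    map(zip(0:length(replicas)-1, replicas, slices)) do (i, m, s)
        device!(i) do
            m(gpu(s)) |> cpu
        end
    end
end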