I have set up a neural network in Flux and I would like to train it on a GPU. However, my laptop has two GPUs, one of which is an onboard GPU, so I would like to send the workload to the better one.
How do I choose which GPU to send the workload to?
By default, code is executed in a global context tied to the first device in your system. Similar to cudaSetDevice, you can switch devices by calling CUDAnative's device! function:
using CUDAnative

# change the active device
device!(1)

# the same, but only temporarily (for the duration of the do block)
device!(2) do
    # ...
end
If nothing weird is happening in CuArrays, this should also set the device Flux uses for GPU computations.
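For example, here is a minimal sketch of how that could look together with Flux. The model layout, the array sizes, and the device index 1 are only illustrative placeholders; use whichever index corresponds to your discrete GPU.

using Flux, CUDAnative, CuArrays

# select the discrete GPU before moving any parameters or data;
# the index here is just a placeholder for your faster GPU
device!(1)

# build a toy model and move its parameters to the active device
model = Chain(Dense(10, 32, relu), Dense(32, 2)) |> gpu

# arrays moved with `gpu` should now land on the selected device too
x = gpu(rand(Float32, 10, 16))
y = model(x)

Calling device! before the first GPU allocation is the safest order, since the active device is picked up when arrays are created.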
Can you please also tell me how to choose the number of GPUs to use? I have 3 GPUs, but Flux automatically uses only the first one, and I want to distribute the load across all three. Thanks.