How to switch between function definitions for using CUDA.jl

I have two variants of a function: one that utilizes the CPU and another that utilizes the GPU through CUDA.jl. I’d like to offer the user the option of switching between the two.

I’ve tried using environment variables and a const zero-dimensional array of Bool type as a flag to determine which version of the function should be compiled. The code looks something like this:

const useGPU = fill(true, ())  # zero-dimensional Bool array used as a flag

if useGPU[]
  f() = 1  # GPU computations
else
  f() = 2  # CPU computations
end

However, when doing it this way I don’t get reliable switching between the two functions, because precompilation does not always rerun and Julia uses the cached version of the code. I cannot use function overloading because I’m constrained to use the same function signature due to other dependencies/reasons.

What would be the best way to go about doing this?

The typically preferred alternative is to dispatch between the two versions of the function based on argument types, e.g., one method accepting an Array for CPU storage vs. one accepting a CuArray for GPU execution.
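For example, a minimal sketch of that dispatch, where sum just stands in for your actual computation:

using CUDA

# CPU method: selected when the data is a plain Array in host memory
f(x::Array) = sum(x)    # placeholder for the CPU computation

# GPU method: selected when the data is a CuArray on the device
f(x::CuArray) = sum(x)  # placeholder for the GPU computation

f(rand(Float32, 10^6))       # calls the CPU method
f(CUDA.rand(Float32, 10^6))  # calls the GPU method (needs a CUDA-capable device)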

Thanks for the reply!
I couldn’t use multiple dispatch because the input arguments to f() here are not the arrays directly. They’re some convoluted structs.

But, after some exploration, I’m leaning towards using multiple dispatch after all, by defining an abstract type Architecture and calling f() as f(arch::Architecture, args...), like Oceananigans.jl does.
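Roughly something like this minimal sketch, with placeholder singleton types and bodies (Oceananigans.jl uses a similar CPU()/GPU() scheme):

abstract type Architecture end
struct CPU <: Architecture end
struct GPU <: Architecture end

f(::CPU, args...) = 2  # CPU computations (placeholder body)
f(::GPU, args...) = 1  # GPU computations (placeholder body)

arch = GPU()  # chosen once, e.g. from a user-facing option
f(arch)       # dispatches to the GPU method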

Still open to better ideas for forcing precompilation.

Very similar to the last approach you suggested is to use Val:

function f(useGPU::Val{false}, args...)
    # CPU-specific implementation
    # ...
end

function f(useGPU::Val{true}, args...)
    # GPU-specific implementation
    # ...
end

With

const useGPU = Val(true)  # or Val(false)
f(useGPU, ...)

you would then call the (compiled) CPU/GPU version, depending on the flag value.
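Because the flag value is encoded in the type, the branch is resolved at dispatch time rather than inside the function. For illustration, a self-contained toy version with placeholder bodies:

f(::Val{false}, x) = (:cpu, x)  # stands in for the CPU implementation
f(::Val{true}, x) = (:gpu, x)   # stands in for the GPU implementation

const useGPU = Val(true)  # or Val(false)
f(useGPU, 42)             # returns (:gpu, 42); flip the constant to switch methods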
