Is it possible to use @static to distinguish between CPU and GPU calls in CUDA?

As we all know, some code cannot run on the GPU, but there is usually an alternative way to express it.

For example, sqrt cannot run on the GPU (it fails with a _fpow error), but CUDA.sqrt is an alternative. In that situation, two methods have to be written, like the following:

```julia
my_sqrt(x) = sqrt(x)
my_sqrt_cuda(x) = CUDA.sqrt(x)
```

When higher-level code calls this method, it also has to be duplicated to match the two versions of my_sqrt.
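To make that concrete, here is a hypothetical caller (my_norm is not from the original post, just an illustration) that has to be split in the same way:

```julia
# Every caller of my_sqrt must also be written twice,
# once per device, and the duplication keeps propagating upward.
my_norm(v)      = my_sqrt(sum(abs2, v))
my_norm_cuda(v) = my_sqrt_cuda(sum(abs2, v))
```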

In C++, #if can be used to handle this kind of problem. In Julia, the @static macro can similarly be used to handle operating-system differences.
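For reference, @static works because its condition is evaluated when the code is loaded, so only one branch is ever compiled. A minimal sketch using only the Sys predicates from Base:

```julia
# @static evaluates Sys.iswindows() at load time,
# so the other branch never exists in the compiled code.
path_separator() = @static Sys.iswindows() ? '\\' : '/'
```

The catch, as the question implies, is that this decision is made at load time, before it is known whether a given call will later be compiled for the GPU.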

Is there an expression I can use to detect the device (CPU or GPU) the code is running on? If so, the code would look like:

```julia
my_sqrt(x) = @static magic_expression ? CUDA.sqrt(x) : sqrt(x)
```

Then this single method could be called on either the CPU or the GPU.

Update: I found that sqrt has since been fixed in CUDA.jl, so pow or something else might be a better example.

I guess the correct way of writing generic CPU/GPU code is to use Cassette.jl. Take a look at how it is done in, e.g., KernelAbstractions.jl:
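As a rough sketch of that approach (assuming the current KernelAbstractions.jl API; the kernel and function names here are illustrative, not from the thread):

```julia
using KernelAbstractions

# One kernel definition that compiles for whichever backend
# the input array lives on (CPU, CUDA, ...).
@kernel function sqrt_kernel!(y, @Const(x))
    i = @index(Global)
    y[i] = sqrt(x[i])
end

function my_sqrt!(y, x)
    backend = get_backend(x)  # CPU() for Array, CUDABackend() for CuArray
    sqrt_kernel!(backend)(y, x; ndrange = length(x))
    KernelAbstractions.synchronize(backend)
    return y
end
```

The same my_sqrt! then works on a plain Array and, with CUDA.jl loaded, on a CuArray, because the backend is dispatched from the array type rather than chosen with a compile-time flag.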


That’s so cool!