[Question] Distributions.jl with CUDA

I’m writing a program that I would like to run on both CPU and GPU, similarly to what one can achieve in Flux with the `cpu` and `gpu` functions. I was under the impression that different Julia packages compose well, so I expected the following to work:

using CUDA
using Distributions
using Random

rand(Random.GLOBAL_RNG, LogNormal(1.0, 1.0), 10)
rand(CUDA.CURAND.default_rng(), LogNormal(1.0, 1.0), 10)

I could then control which random number generator to use via a flag, which would in turn dispatch to the CPU or GPU implementation appropriately. However, the code above fails on the last line.
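One workaround I have considered is defining the missing method myself, using the fact that if Z ~ Normal(0, 1) then exp(μ + σ·Z) ~ LogNormal(μ, σ). This is only a sketch of the idea, and I'm not sure the `Random.rand` overload below is the right extension point (the method signature is my own guess, not something CUDA.jl or Distributions.jl documents):

```julia
using CUDA, Distributions, Random

# Hypothetical overload: draw LogNormal samples on the device by
# transforming standard-normal draws, which CUDA's RNG does support.
function Random.rand(rng::CUDA.RNG, d::LogNormal, n::Integer)
    μ, σ = params(d)
    z = randn(rng, Float32, n)  # standard-normal draws as a CuArray
    return exp.(μ .+ σ .* z)    # elementwise transform stays on the GPU
end
```

With something like this in place, `rand(CUDA.default_rng(), LogNormal(1.0, 1.0), 10)` would dispatch to the GPU path, but it feels like piracy-adjacent boilerplate I'd have to repeat per distribution.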

The question I have is whether I’m doing something non-idiomatic and whether there’s an easy way to make this work. I was hoping that in Julia I could avoid writing boilerplate code like this:

function rand_logn(n, mean, stddev)
    if use_gpu
        return CUDA.rand_logn(n; mean=mean, stddev=stddev)
    else
        return rand(LogNormal(mean, stddev), n)
    end
end