I’ve managed to use gpu_rand via gpu_call, but I was wondering whether it’s possible to call gpu_rand from a kernel launched with @cuda, as I’m currently using a 2D block.
@cuda only accepts bits types as kernel arguments, so I’m struggling to work out how to obtain the KernelContext and randstate that gpu_rand requires.
Basically, I’ve been working on replicating this: sampling a Categorical on the GPU.
Note: global_rng is now default_rng.
My implementation is slightly different: I have a 2D array where each element is the number of samples required, and each row of the array has a different weight function, so I’m using a 2D block. For the time being I’ve implemented one of the previously posted xorshift implementations, but since each sample requires two random numbers, being able to work with an RNG would be beneficial for incrementing the seed and, ultimately, for reproducibility.
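For reference, the per-element approach described above can be sketched on the CPU. This is my own illustrative sketch, not code from the thread: each sample derives a value from a counter-based xorshift32 step (so no shared RNG state is needed), and the categorical draw is an inverse-CDF scan over the row’s weights. The helper names (to_unit, sample_categorical) are made up for the example.

```julia
# xorshift32: a tiny PRNG step; the same update works per-thread on the GPU
# because it carries no global state, only the UInt32 passed in.
function xorshift32(x::UInt32)
    x ⊻= x << 13
    x ⊻= x >> 17
    x ⊻= x << 5
    return x
end

# Map a UInt32 into [0, 1] as a Float32 (illustrative helper).
to_unit(x::UInt32) = Float32(x) / Float32(typemax(UInt32))

# Draw one index from unnormalized `weights` by inverse-CDF search,
# given a uniform variate `u` (illustrative helper).
function sample_categorical(weights::Vector{Float32}, u::Float32)
    total = sum(weights)
    acc = 0f0
    for (i, w) in enumerate(weights)
        acc += w
        u * total < acc && return i
    end
    return length(weights)
end

# Seeding each (row, sample) pair independently, e.g.
# s = xorshift32(base_seed ⊻ UInt32(row * 2654435761 + sample)),
# keeps results reproducible without coordinating RNG state across threads.
```

In a real @cuda kernel the same xorshift step and scan would run per thread, with the seed derived from the thread/block indices so the whole launch is repeatable from a single base seed.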
Have you found a method to generate a random number inside a GPU kernel function? I’ve run into the same problem. All the rand-like
functions return a CuArray, but memory allocation is not allowed inside a kernel function.
So, how do I get a plain scalar value that is not wrapped in a CuArray?