Calling repeat with a CUDA array changes the state of the random number generator

I am trying to refactor some code which uses random numbers and CUDA arrays.
Quite surprisingly, the function repeat changes the sequence of random numbers.
I am wondering if other people see the same behavior.
Am I missing something?

julia> Random.seed!(42); @show rand()
rand() = 0.6293451231426089
0.6293451231426089

julia> A = cu(ones(Float32,10)); Random.seed!(42); repeat(A,1); @show rand()
rand() = 0.4503389405961936
0.4503389405961936
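For reference, a CPU-only way to check whether an arbitrary operation consumes values from the default RNG is to compare the live generator against a snapshot of its state. This is just a sketch; `advances_default_rng` is a hypothetical helper name, not an API:

```julia
using Random

# Snapshot the task-local default RNG, run the operation, then compare the
# next value drawn from the snapshot against the next value from the live
# generator. If they differ, the operation consumed values from the default
# stream (up to a negligible collision probability).
function advances_default_rng(f)
    snapshot = copy(Random.default_rng())
    f()
    rand(snapshot) != rand()
end

advances_default_rng(() -> sum(ones(10)))  # false: sum draws nothing
advances_default_rng(() -> rand())         # true: rand advances the stream
```

The same helper, applied to `() -> repeat(A, 1)` with a CuArray, would reproduce the observation above.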

(@v1.9) pkg> st CUDA
Status `~/.julia/environments/v1.9/Project.toml`
⌅ [052768ef] CUDA v5.2.0
Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`

julia> versioninfo()
Julia Version 1.9.2
Commit e4ee485e909 (2023-07-05 09:39 UTC)
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 64 × AMD Ryzen Threadripper PRO 5975WX 32-Cores
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-14.0.6 (ORCJIT, znver3)
  Threads: 1 on 64 virtual cores
Environment:
  JULIA_ERROR_COLOR = red

julia> CUDA.versioninfo()
CUDA runtime 12.3, artifact installation
CUDA driver 12.2
NVIDIA driver 535.129.3

CUDA libraries: 
- CUBLAS: 12.3.4
- CURAND: 10.3.4
- CUFFT: 11.0.12
- CUSOLVER: 11.5.4
- CUSPARSE: 12.2.0
- CUPTI: 21.0.0
- NVML: 12.0.0+535.129.3

Julia packages: 
- CUDA: 5.2.0
- CUDA_Driver_jll: 0.7.0+1
- CUDA_Runtime_jll: 0.11.1+0

Toolchain:
- Julia: 1.9.2
- LLVM: 14.0.6

1 device:
  0: NVIDIA GeForce RTX 4090 (sm_89, 23.192 GiB / 23.988 GiB available)

Launching kernels needs random numbers (see src/compiler/execution.jl at commit e1e5be2b6bf17f03a367cebeb18c4645e593f80d in the JuliaGPU/CUDA.jl repository on GitHub). Maybe that ought to use its own RNG?
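In the meantime, a workaround on the user side is to draw from an explicit RNG object rather than the task-local default, so whatever the kernel launch path consumes from the global stream cannot perturb your own sequence. A minimal sketch (pure CPU, no CUDA needed to illustrate the idea):

```julia
using Random

# Draw from an explicit RNG instead of the global/task-local default.
rng = Xoshiro(42)
a = rand(rng)

# Perturb the default RNG, as a kernel launch might do internally.
rand()

# A fresh Xoshiro seeded identically reproduces the same value: explicit
# RNG streams are unaffected by what happens to the default RNG.
rng2 = Xoshiro(42)
b = rand(rng2)
a == b  # true
```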


Interesting! Thank you for sharing this info.
Indeed, not using the default RNG would make a call to a CUDA kernel behave more like an operation on regular arrays.