CURAND no longer makes a CuArray for complex numbers

I feel like I'm doing something wrong. I'm not an expert in GPU coding, so I was happy when I got this working a year or two ago. I'm revisiting the code now and it has stopped working.

I have a vector of complex numbers representing an RF signal, and I need to add noise to that signal on the GPU. I am using CuArrays since I don't understand the complexities of writing a kernel and keeping the random number stream straight. It would probably be more efficient as a kernel if I understood that, but I don't think it matters for my problem.

I had some code that used to generate complex random numbers, but after updating my packages it no longer works. I'm not sure when it stopped working, but I'd like to know the correct syntax.

I think you can see the general problem: the randn function does different things depending on whether the type is Float64 or ComplexF64.

This code generates a CuArray if the type is real:

julia> Random.randn(CURAND.default_rng(),Float64,10)
10-element CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}:
  0.49637250761721785
  1.1621209684440397
 -1.200133485055596
 -0.3641995814787512
 -3.159735223992887
  1.8079578340924842
  1.180847386516405
 -0.9288064979411169
 -0.21460057071319238
  2.07001090351318

The ComplexF64 version, however, returns a plain Vector instead of a CuArray. That breaks my code, which expects a CuArray to be generated so it can operate on the CuArray that was passed in. Fixing it seems like a mess, since I want the same code to work on CPU or GPU depending on the RNG that gets passed in.

julia> Random.randn(CURAND.default_rng(),ComplexF64,10)
10-element Vector{ComplexF64}:
   0.7718688832969476 + 0.7867351684604367im
  -0.4562811724789016 - 1.3444512656128818im
  -1.2646604765663483 + 0.7447540921496932im
  -0.7259376389798406 - 0.0653711465093766im
   0.4698858817529208 + 0.5430800530014905im
   0.9838594906913766 + 1.203688870480887im
 -0.19137495196505339 - 0.5837290641882306im
  0.19036064921205195 + 0.12221062922729947im
  -0.9799564403417708 + 1.1659964057563248im
  0.40613987626465775 + 0.4489427349696575im

Is this a bug? To be fair, the old syntax that worked used CUDA instead of CURAND, so this may come down to how old that code was; I've lost track. I could try to hunt it down if it's important.

julia> versioninfo()
Julia Version 1.9.2
Commit e4ee485e90 (2023-07-05 09:39 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: 12 × Intel(R) Core™ i7-10850H CPU @ 2.70GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-14.0.6 (ORCJIT, skylake)
Threads: 16 on 12 virtual cores
Environment:
JULIA_DIR = C:\Users\username\AppData\Local\Programs\Julia-1.9.2
JULIA_PKG_DEVDIR = D:\JULIA\DEVELOPMENT
JULIA_PKG_USE_CLI_GIT = true
JULIA_SSL_CA_ROOTS_PATH =
JULIA_EDITOR = code
JULIA_NUM_THREADS =

[621f4979] AbstractFFTs v1.5.0
⌅ [79e6a3ab] Adapt v3.7.1
⌃ [4c88cf16] Aqua v0.7.4
[8b73e784] ArtifactUtils v0.2.4
⌃ [6e4b80f9] BenchmarkTools v1.3.2
⌃ [052768ef] CUDA v5.1.0
[35d6a980] ColorSchemes v3.24.0
[a80b9123] CommonMark v0.8.12
⌃ [f68482b8] Cthulhu v2.9.6
[717857b8] DSP v0.7.9
[8bb1440f] DelimitedFiles v1.9.1
⌃ [aae7a2af] DiffEqFlux v2.4.0
⌃ [0c46a032] DifferentialEquations v7.10.0
[58007942] EulerYPRType v0.3.2
⌃ [7a1cc6ca] FFTW v1.7.1
⌃ [5789e2e9] FileIO v1.16.1
⌃ [587475ba] Flux v0.14.6
⌃ [0337cf30] GRUtils v0.8.3
[5c1252a2] GeometryBasics v0.4.9
[6218d12a] ImageMagick v1.3.0
[916415d5] Images v0.26.0
⌅ [a98d9a8b] Interpolations v0.14.7
[3bf4e53f] LMQuaternions v0.6.5
[726dbf0d] LicenseCheck v0.2.2
[093fc24a] LightGraphs v1.3.5
[23992714] MAT v0.10.6
⌃ [ee78f7c6] Makie v0.19.12
⌅ [20f20a25] MakieCore v0.6.9
[777085ab] Mat3s v0.2.0
[7269a6da] MeshIO v0.4.10
⌃ [961ee093] ModelingToolkit v8.70.0
[510215fc] Observables v0.5.5
⌃ [6fe1bfb0] OffsetArrays v1.12.10
⌅ [1dea7af3] OrdinaryDiffEq v6.58.2
⌃ [e713c705] PackageAnalyzer v3.0.1
[f0f68f2c] PlotlyJS v0.18.11
[91a5bcdd] Plots v1.39.0
⌃ [c3e4b0f8] Pluto v0.19.32
⌃ [7f904dfe] PlutoUI v0.7.53
[3cdcf5f2] RecipesBase v1.3.4
⌅ [731186ca] RecursiveArrayTools v2.38.10
⌃ [295af30f] Revise v3.5.8
⌃ [1ed8b502] SciMLSensitivity v7.47.0
[00bb0486] StandardAerospace v0.3.1
⌃ [90137ffa] StaticArrays v1.6.5
⌅ [0c5d862f] Symbolics v5.10.0
⌃ [e88e6eb3] Zygote v0.6.67
[81553e4a] rfEnviroSim v0.3.6 ..
[7e2434f1] xyzVector v0.6.2
[37e2e46d] LinearAlgebra
[44cfe95a] Pkg v1.9.2
[9a3f8284] Random
[10745b16] Statistics v1.9.0

CURAND doesn’t support ComplexF64. Use CUDA.default_rng() for the native RNG, or even better, use rand! on a CuArray{ComplexF64} to auto-select an appropriate RNG.
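A minimal sketch of that approach (assuming CUDA.jl is loaded and a CUDA-capable GPU is available):

```julia
using CUDA, Random

# Allocate the destination on the GPU; rand!/randn! then pick an RNG
# that supports the element type (the native RNG for ComplexF64).
x = CuArray{ComplexF64}(undef, 10)
Random.rand!(x)    # uniform complex samples, filled in place
Random.randn!(x)   # complex normal samples, filled in place
```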


Hmm. I changed it to CURAND because, based on some documentation I saw, I thought it had replaced CUDA as the more generic interface across different GPU systems. Is there a good book or other resource that would help me get up to speed on taking designs to the GPU in Julia? I feel I'm leaving a lot of efficiency on the table.

Switching to CURAND was my attempt to fix this problem:

julia> Random.randn(CUDA.default_rng(),ComplexF64)
ERROR: MethodError: randn(::CUDA.RNG, ::Type{ComplexF64}) is ambiguous.

It used to work.

I’m not sure what you mean by that. By hard-coding CURAND.default_rng(), or even CUDA.default_rng(), you’re essentially locking your code to CUDA.jl, or even more specifically to CURAND. Instead, it’s better to write generic array code that takes arbitrary inputs and calls rand! on them.
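For instance, a generic helper along these lines (add_noise!, scratch, and scale are hypothetical names) works with both a Vector and a CuArray, since randn! dispatches on the array type:

```julia
using Random

# Hypothetical sketch: `scratch` is a preallocated buffer of the same
# array type as `signal`, so no RNG or backend is hard-coded.
function add_noise!(signal::AbstractVector{<:Complex},
                    scratch::AbstractVector{<:Complex}=similar(signal);
                    scale=1.0)
    Random.randn!(scratch)       # fills a CPU or GPU array, whichever was passed
    signal .+= scale .* scratch  # broadcasting works on both backends
    return signal
end

s = randn(ComplexF64, 10)  # CPU example; a CuArray would work the same way
add_noise!(s; scale=0.1)
```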

I agree that shouldn’t error, but that isn’t something you want to do anyway, because you’re asking for a single element there. If you add dimensions, those calls aren’t ambiguous.
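That is, the dimensioned form should dispatch to the array method rather than the ambiguous scalar one (a sketch, assuming a CUDA-capable GPU):

```julia
using CUDA, Random

# Asking for an array of samples avoids the scalar-method ambiguity:
Random.randn(CUDA.default_rng(), ComplexF64, 10)  # a 10-element complex CuArray
```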

Thank you for the excellent help. I got it working by using the randn! function, passing in the RNG along with an existing CuArray that the function can reuse.
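Roughly, that pattern looks like this (a sketch with assumed names; `signal` stands in for the CuArray being processed):

```julia
using CUDA, Random

rng   = CUDA.default_rng()
noise = CuArray{ComplexF64}(undef, 10)  # allocated once, reused on every call

signal = CUDA.zeros(ComplexF64, 10)     # placeholder for the real signal

Random.randn!(rng, noise)               # refill the buffer in place on the GPU
signal .+= noise                        # add the noise to the signal
```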