FFT planning with flags fails for CUDA

Hey,

I was trying to create an FFT plan for a CuArray. The following works:

julia> using CUDA, CUDA.CUFFT

julia> x = CUDA.rand(2, 2)
2×2 CuArray{Float32, 2}:
 0.318697  0.144699
 0.631969  0.433798

julia> fft(x)
2×2 CuArray{ComplexF32, 2}:
  1.52916+0.0im     0.37217+0.0im
 -0.60237+0.0im  -0.0241727+0.0im

julia> p = plan_fft(x);

julia> p * x
2×2 CuArray{ComplexF32, 2}:
  1.52916+0.0im     0.37217+0.0im
 -0.60237+0.0im  -0.0241727+0.0im

Unfortunately, I cannot get planning with an extra flag working:

help?> plan_fft
search: plan_fft plan_fft! plan_rfft plan_ifft plan_bfft plan_ifft! plan_bfft! plan_irfft plan_brfft

  plan_fft(A [, dims]; flags=FFTW.ESTIMATE, timelimit=Inf)
[...]

julia> p = plan_fft(x, flags=FFTW.ESTIMATE);
ERROR: UndefVarError: FFTW not defined
Stacktrace:
 [1] top-level scope
   @ REPL[10]:1

Which makes sense, since FFTW is not loaded. But the following also clearly causes trouble, since FFTW is not connected to CUDA (is it?). So the help is quite misleading.

julia> using FFTW

julia> p = plan_fft(x, flags=FFTW.ESTIMATE);

julia> p * x
ERROR: MethodError: *(::FFTW.cFFTWPlan{ComplexF32, -1, false, 2, UnitRange{Int64}}, ::CuArray{ComplexF32, 2}) is ambiguous. Candidates:
  *(p::AbstractFFTs.Plan{T}, x::CuArray) where T in CUDA.CUFFT at /home/fxw/.julia/packages/CUDA/UC5gt/lib/cufft/fft.jl:11
  *(p::FFTW.cFFTWPlan{T, K, false, N, G} where G where N, x::StridedArray{T, N}) where {T, K, N} in FFTW at /home/fxw/.julia/packages/FFTW/G3lSO/src/fft.jl:714
Possible fix, define
  *(::FFTW.cFFTWPlan{T, K, false, N, G} where G where N, ::CuArray{T, N}) where {T, K, N}
Stacktrace:
 [1] *(p::FFTW.cFFTWPlan{ComplexF32, -1, false, 2, UnitRange{Int64}}, x::CuArray{Float32, 2})
   @ CUDA.CUFFT ~/.julia/packages/CUDA/UC5gt/lib/cufft/fft.jl:11
 [2] top-level scope
   @ REPL[13]:1

I tried to find documentation on the flags for CUFFT but couldn’t find anything useful :confused:
So can anybody point me in the right direction?

Thanks,

Felix

That ‘misleading’ docstring comes from AbstractFFTs.jl, and those flags are FFTW.jl-specific. AFAIK the CUDA.jl wrappers for CUFFT do not currently support any flags. If that’s a problem and you want a flag that is supported by the underlying CUFFT library, you could have a look at exposing it through the wrappers here: CUDA.jl/fft.jl at master · JuliaGPU/CUDA.jl · GitHub
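If you want the same code path to work for both CPU and GPU arrays, one workaround is to only forward the FFTW-specific keywords when the input lives on the CPU. A minimal sketch (the helper name plan_fft_with_flags is made up for illustration and is not part of any package):

using AbstractFFTs, FFTW, CUDA, CUDA.CUFFT

# Hypothetical helper: forward FFTW planner flags only for CPU arrays;
# for CuArrays fall back to a plain CUFFT plan, which takes no flags.
plan_fft_with_flags(x::Array; flags=FFTW.ESTIMATE) = plan_fft(x; flags=flags)
plan_fft_with_flags(x::CuArray; kwargs...) = plan_fft(x)  # drop FFTW-only kwargs

p_cpu = plan_fft_with_flags(rand(Float32, 2, 2); flags=FFTW.MEASURE)  # FFTW plan, flags honored
p_gpu = plan_fft_with_flags(CUDA.rand(2, 2))                          # CUFFT plan, flags dropped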


Thanks a lot!

On page 82 of the cuFFT manual it is mentioned:
“Planner flags are ignored and the same plan is returned regardless”

So I’m not sure whether these flags have any effect for cuFFT at all. Since it’s not critical for me, I’ll take the default.
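For reference, a minimal sketch of what that looks like: plan once without flags and reuse the plan (and a matching inverse plan from plan_ifft, which CUDA.CUFFT also provides) as often as needed.

using CUDA, CUDA.CUFFT

x = CUDA.rand(Float32, 256, 256)
p = plan_fft(x)        # no planner flags; cuFFT ignores them anyway
y = p * x              # forward transform (complex result)
pinv = plan_ifft(y)    # inverse plan, likewise without flags
x2 = real.(pinv * y)   # round trip; x2 ≈ x up to floating-point error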