How to manage and reuse FFTW.jl plans for multiple input sizes in Julia?

I am working on a Julia project where I need to perform FFTs on inputs of different sizes (and element types, fft/ifft, in-place/out-of-place). FFTW.jl allows creating plans with plan_fft or plan_fft!, which can significantly speed up repeated FFTs of the same size.

However, FFT plans are tied to the input’s size, element type, and transform dimensions. This means that for inputs with varying shapes, I need different plans. My questions are:

  1. How should I manage and cache FFT plans for different input sizes and dimensions?
  2. What is the recommended way to key/index these plans (e.g., by size, element type, dimensions, inplace/out-of-place, transform direction)?
  3. Are there any real-world Julia projects or libraries that implement a multi-size FFT plan caching mechanism that I could reference?

Example scenario:
I have a function f(x) that performs an FFT on data blocks x of various sizes. I want to avoid creating a new plan for each block if a suitable plan already exists. I’m looking for patterns or examples for efficiently storing, retrieving, and reusing FFT plans in Julia.

Thanks in advance for any insights or references!

If the only thing that is changing is the size, I would just make a Dict{Int, <plan type>} mapping sizes to plans. Don’t try to come up with a solution for all possible use cases — just do something specific to your particular problem.
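
A minimal sketch of that idea (assuming only the length of a ComplexF64 vector varies; planning a throwaway array to obtain the concrete plan type is just one option):

using FFTW

# Concrete plan type for a 1-D out-of-place ComplexF64 transform,
# obtained by planning a throwaway length-1 array.
const PlanT = typeof(plan_fft(zeros(ComplexF64, 1)))

plans = Dict{Int, PlanT}()                        # length => plan
plans[1024] = plan_fft(zeros(ComplexF64, 1024))   # plan once per size
y = plans[1024] * rand(ComplexF64, 1024)          # reuse the cached plan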

So this kind of dictionary has to be kept around wherever the FFT plan is used; is that really necessary? In that case, is the dictionary saved as a global variable?

You’re presumably calling your function f(x) from some other function (which might be called by yet another function, etc.). Initialize the dict in your outermost function and pass it through as an argument: f(x, plans).
get! provides nice syntax for adding elements to a dict if they’re not present. The inner function could look something like this:

using FFTW

function process_signal!(x::Vector{T}, plans::Dict) where {T<:Real}
    key = (length(x), T)
    plan = get!(plans, key) do
        # Make and cache a plan for this shape/type the first time it appears
        plan_fft(similar(x); flags=FFTW.MEASURE)
    end
    return plan * x
end
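
and you’d create the cache once in the caller and thread it through, e.g. (a sketch; run_pipeline and blocks are placeholder names):

function run_pipeline(blocks::Vector{Vector{Float64}})
    plans = Dict()    # (length, eltype) => plan, filled lazily by process_signal!
    return [process_signal!(x, plans) for x in blocks]
end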

If this is meant for a high-performance application, you’ll need to be mindful of type stability - you might want to dispatch to functors parametrized on dimensionality, eltype, etc., with each concrete functor owning a dict of plans for the different transform sizes.
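
As a rough sketch of that pattern (PlanCache is a made-up name, not from any package), a callable struct parametrized on the element type can own a type-stable dict of plans keyed by length:

using FFTW

# Hypothetical callable cache: one concrete PlanCache{T,P} per element type,
# owning a type-stable Dict from vector length to the concrete plan type P.
struct PlanCache{T,P}
    plans::Dict{Int,P}
end

function PlanCache{T}() where {T}
    P = typeof(plan_fft(zeros(Complex{T}, 1)))    # concrete 1-D plan type
    return PlanCache{T,P}(Dict{Int,P}())
end

function (c::PlanCache{T})(x::Vector{Complex{T}}) where {T}
    p = get!(c.plans, length(x)) do
        plan_fft(similar(x); flags=FFTW.MEASURE)  # plan once per length
    end
    return p * x
end

# cache = PlanCache{Float64}()
# y = cache(rand(ComplexF64, 256))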

Thank you. That’s cool!
I have a few other questions.

  1. Can this dictionary be saved as a global variable? Does declaring it as a const improve its efficiency (this function will be called many times)?
  2. The plan_fft initialization uses similar(x); will this allocation be costly when the number of FFT points is large? Is there any way to reduce this allocation, or is it insignificant?
  3. What’s the difference between plan_fft and plan_fft!? If I initialize a plan with plan_fft, is mul!(A, plan, A) equivalent to an in-place transform?

  1. Yes, the cache could be a global variable, but if a global is not marked const, performance will likely degrade because the global’s type could change, which destabilizes type inference. There should be little to no difference in performance between a const global cache and a cache passed through as an argument, but I would choose the latter for the flexibility it offers - easier to prototype without needing to restart Julia when you want to change the type of a constant, and easier to extend to multithreading without data races.
  2. The extra allocation from similar shouldn’t matter: it is negligible compared to the planning overhead itself. (Keeping similar(x) is also the safe choice here, since FFTW’s documentation warns that planning with flags like FFTW.MEASURE may overwrite the array passed to the planner.) For example:
julia> using BenchmarkTools, FFTW

julia> v = ones(1024) .+ 1im;

julia> @btime similar($v);
  198.498 ns (3 allocations: 16.06 KiB)

julia> @btime plan_fft!($v)
  5.550 μs (4 allocations: 224 bytes)
FFTW in-place forward plan for 1024-element array of ComplexF64
(dft-ct-dit/32
  (dftw-direct-32/8 "t3fv_32_avx2_128")
  (dft-directbuf/34-32-x32 "n2fv_32_avx2"))
  3. plan_fft creates a plan that leaves the input data untouched; plan_fft! creates one that overwrites its input with the transform. You can do mul!(Â, plan, A) with an out-of-place plan, but Â and A cannot be aliased to the same underlying array (so no mul!(A, plan, A)); for a true in-place transform, use plan_fft!.
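
To make the distinction concrete, a small sketch (array names are arbitrary):

using FFTW, LinearAlgebra

A = rand(ComplexF64, 1024)
B = similar(A)

p = plan_fft(A)      # out-of-place plan
q = plan_fft!(A)     # in-place plan

Y = p * A            # allocates and returns a new array; A is unchanged
mul!(B, p, A)        # writes the transform of A into the preallocated B
q * A                # overwrites A with its transform; no allocation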