The need for (i)fftshift when solving a PDE with the Fourier Transform

Hello,
I’m currently writing an algorithm to solve a specific kind of PDE via the Fourier Transform.
In the several examples I’ve seen, the general idea is to transform everything to the frequency domain with an fft, do the computational work there (forward Euler, etc.), and at the end transform it back to the natural domain with ifft, thus obtaining the solution of the PDE.

My difficulties come when I naively apply the fft and ifft to my equation. For a toy example where the analytical solution is known, I get the following:


It seems to me that some kind of boundary condition is being imposed incorrectly. The solution doesn’t have the domain periodicity (in this case [-10, 10]).

However, if I replace fft and ifft with fftshift(fft(ifftshift(U))) and fftshift(ifft(ifftshift(U))), respectively, I obtain the correct answer:

Well, this seems strange to me, because the (i)fftshift commands only reorder the Fourier coefficients into their “regular” order when transforming to the frequency domain, and the original order is restored when we perform the inverse transform:

using FFTW
julia> A=trunc.(Int64,rand(6)*10)
6-element Array{Int64,1}:
 0
 2
 3
 7
 1
 2

julia> â=fft(A) # F. coefs with shifted order
6-element Array{Complex{Float64},1}:
 15.0 + 0.0im
 -7.0 - 1.7320508075688772im
  3.0 + 1.7320508075688772im
 -7.0 + 0.0im
  3.0 - 1.7320508075688772im
 -7.0 + 1.7320508075688772im

julia> ifft(â) # equals the original vector A
6-element Array{Complex{Float64},1}:
 0.0 + 0.0im
 2.0 + 0.0im
 3.0 + 0.0im
 7.0 + 0.0im
 1.0 + 0.0im
 2.0 + 0.0im

Does someone know why this behavior happens when solving PDEs with the Fourier Transform?
I really need my algorithm to be as fast as possible. Spending two shifts on every fft, at every time step, seems like a lot of computation wasted on this issue, I guess :confused:

It’s the first time I’m using this method to solve PDEs, so I don’t know that much about signal processing and Fourier transforms. Any help?

Best regards,
Tiago

Probably that means that you are taking the derivative incorrectly. See these notes (which use 0-based indexing, so you will need to subtract 1 from Julia’s usual 1-based indices to convert).
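
For concreteness, here is a minimal sketch of differentiating a periodic function via the DFT with no fftshift anywhere (N, L and the sample function below are just illustrative assumptions, not your problem):

using FFTW

# Spectral derivative of a smooth periodic function on N points of a
# periodic domain of length L (here [-10, 10), chosen only for illustration).
N, L = 128, 20.0
dx = L / N
x  = -L/2 .+ dx .* (0:N-1)          # periodic grid, right endpoint excluded
u  = exp.(sin.(2π .* x ./ L))       # example periodic function

k  = fftfreq(N, 2π / dx)            # angular wavenumbers, in FFTW's own ordering
du = real(ifft(im .* k .* fft(u)))  # ≈ du/dx, no (i)fftshift required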

5 Likes

Thanks for the reference!! After reading the notes carefully, yes, I’m not treating the time derivative term correctly.

However, I’m a bit confused when applying it to my case. As I said, I’m dealing with a PDE, so my solution V depends on time and space, and I’m doing two discretizations here: in space I’m using a Galerkin method (so that I can use the Fourier transforms), and in time I’m approximating the time derivative with an explicit Euler scheme, so that I can compute V_{i+1}:

# Explicit Euler scheme (in the Fourier domain):
# α(v_{i+1} - v_i)/Δt = î_i - v_i + a_i  <=>  v_{i+1} = v_i + (Δt/α)*(î_i - v_i + a_i)
@. v = v + (dt/α)*(î - v + a)

where v, î and a are the Fourier coefficients of the respective functions V, I and A.

But, when looking at Algorithm 1 of the notes, we no longer have the idea of a finite difference, right? Because the Fourier coefficients of the derivative, Y_k', are obtained in step 2:

Since I’m using the finite difference (v_{i+1} - v_i)/Δt (in Fourier coefficients), does this actually correspond to the Y_k' of step 3 of Algorithm 1, or am I viewing this the wrong way? :confused: And even if I treat the derivative as a “whole”, as proposed in Alg. 1, how am I supposed to compute v_{i+1}?

Sorry for the newbie questions.

Really appreciate the help!

The notes are only about differentiating in the dimensions you are Fourier transforming. If you are Fourier transforming in space but not time, then the notes have nothing to do with your time derivatives.

(You might consider using DifferentialEquations.jl for your timestepping, i.e. using the method of lines.)
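
A rough sketch of that idea, assuming your semi-discretized equation in Fourier space is α dv/dt = î(t) − v + a (all names below, rhs!, Ihat, ahat and the toy forcing, are placeholders, not your code):

using DifferentialEquations, FFTW

# Method of lines: the Fourier coefficients v are the state; the time
# integration is handed to an ODE solver instead of a hand-rolled Euler loop.
function rhs!(dv, v, p, t)
    α, Ihat, ahat = p
    î = Ihat(t)             # Fourier coefficients of the input at time t
    a = ahat(v, t)          # Fourier coefficients of the convolution term
    @. dv = (î - v + a) / α
end

N          = 256
v0         = fft(zeros(N))               # placeholder initial condition
Ihat(t)    = fft(fill(sin(t), N))        # hypothetical forcing term
ahat(v, t) = zero(v)                     # placeholder for the convolution term
prob       = ODEProblem(rhs!, v0, (0.0, 10.0), (1.0, Ihat, ahat))
sol        = solve(prob, Tsit5())
V_end      = real(ifft(sol[end]))        # back to the natural domain at the final time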

1 Like

This is just a guess at what the problem might be, but I assume that your algorithm at some point makes use of the conjugate variable in Fourier space (usually called frequency or wavenumber). If so, are you taking into account the FFTW convention that the first component of the transformed arrays corresponds to the origin (zero) of the frequency/wavenumber?

What I mean is that, assuming a space discretization step dx and an even array size N, the wavenumber has to be defined as

k = vcat( 0:div(N, 2)-1, -div(N, 2):-1 ) * 2pi/(N*dx)

or, equivalently

k = fftshift( -div(N, 2):(div(N, 2)-1) ) * 2pi/(N*dx)

but not simply as a monotonically increasing variable.

However, FFTW.jl provides the function fftfreq to make this easier,

k = fftfreq(N, 2pi/dx)

which works for even as well as odd N (see the docs for the difference).
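
For example, a quick check (with N = 8 and dx = 0.1 chosen arbitrarily) that the two definitions agree:

using FFTW

N, dx = 8, 0.1
k_manual  = vcat(0:div(N, 2)-1, -div(N, 2):-1) .* 2π / (N * dx)
k_fftfreq = fftfreq(N, 2π / dx)
@assert collect(k_fftfreq) ≈ k_manual   # same values, in FFTW's unshifted order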

Let us know if it works!

3 Likes

(Note that this is not just an FFTW convention. It is the standard definition of the discrete Fourier transform.)

1 Like

That is a good point. Thanks for the clarification!

@stevengj, oh I get it, of course, the differentiation is with respect to space (where I’m doing the Fourier transform) and not with respect to time, sorry, my bad :confused:. About the timestepping, I’m following a specific paper in order to implement this numerical scheme, so I’m somewhat tied to what the authors wrote.

Well… I don’t quite use k, i.e. the variable of the Fourier domain. So, for example, when I apply the fft to the initial condition V0, I simply obtain v0; the variable k is kind of implied when using fft, right? Here’s a screenshot taken from Solving PDEs with the FFT, Part 2 [Python]:
[screenshot omitted]
From what I understand, the variable k is only necessary when we differentiate in the dimension where the Fourier transform takes place, which is not my case.

To be more specific, I’m dealing with a Neural Field Equation; here’s a simplified version (in 2D):
[image of the equation; roughly, α ∂V(x,t)/∂t = -V(x,t) + I(x,t) + ∫ K(x-y) S(V(y,t)) dy]
Basically, the algorithm that I’m implementing is quite straightforward:

  1. Apply fft to V0 to obtain v0;
  2. Apply fft to I to obtain î at each time step, like this:
I  = [prob.I(i,ti) for i in x] # variable x is the 'natural' space
î  = fft(I)
  3. According to the authors, the integral term can be computed in the frequency domain simply by taking into account the fft of K and of the non-linear functional S(V), at each time step;
  4. Compute v_{i+1} (using a forward Euler scheme), then apply ifft to transform the solution back to the natural domain (a rough sketch of this loop is given below).
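
A rough, hypothetical sketch of that loop (all names, V0, K, S, Iext, x, α, dt, nt, are placeholders standing in for my actual problem data, and normalization/quadrature factors of the convolution are ignored here):

using FFTW

# Sketch only: fft everything spatial, step forward in time in the Fourier
# domain, and transform back once at the end.
function solve_nfe_sketch(V0, K, S, Iext, x, α, dt, nt)
    v    = fft(V0)               # step 1: Fourier coefficients of the initial condition
    Khat = fft(K)                # fft of the connectivity kernel, computed once
    for i in 1:nt
        ti = i * dt
        î = fft([Iext(xj, ti) for xj in x])  # step 2: transform the external input
        a = Khat .* fft(S.(real(ifft(v))))   # step 3: spatial convolution as a product
        @. v = v + (dt / α) * (î - v + a)    # step 4: forward Euler update
    end
    return real(ifft(v))         # back to the natural (space) domain
end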

I don’t use the frequency variable k that can be given by the command fftfreq. Is this my problem? If so, where does k come in?
(One small remark: I’m using rfft, since my data is real.)

Thanks a lot for the help; I’m now reading up on fftfreq and trying to understand whether it fits what I’m trying to do.

Cheers!

The wavenumber would appear if you had spatial derivatives and used FFT along that variable, which is not the case.

So it seems that a 1D FFT along time is all that is required?
Unless, for the spatial integral, the algorithm you are following uses the FFT in space?

1 Like

You are right. I can see now that you are using the Fourier Transform to calculate the spatial convolution given by the integral term (a convolution in real space turns into a multiplication in Fourier space). You don’t use k, and that’s correct.

Now I find it especially weird that using fftshift helps you at all, since the real transforms only compute one half of the spectrum, and fftshift will swap the first and second halves of the arrays, wrongly assuming that they correspond to the two halves of the spectrum.
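
For instance (N = 8 picked just for illustration):

using FFTW

N = 8
u = rand(N)
length(fft(u))    # 8: full spectrum, frequencies 0, 1, 2, 3, -4, -3, -2, -1
length(rfft(u))   # 5 = N÷2 + 1: only the non-negative frequencies 0, 1, 2, 3, 4
# fftshift(rfft(u)) merely rotates these five non-negative frequencies; it does
# not move the zero frequency to the center of a full, two-sided spectrum.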

However, you said your problem is multidimensional. I took a quick look at the paper you linked, and from what I see, you should only transform the space dimensions (not time!) and you should not use any fftshift. Can we know exactly which dimensions you are transforming, and which dimensions you are fftshift-ing?

2 Likes

I don’t use the fft in time, only in space, which can be one- or two-dimensional (and this behavior happens in 2 dimensions as well, if I apply the (i)fftshift’s every time I perform the fft and its inverse).

I get what you’re saying, but the same behavior happens whether I use rfft or fft, with or without the (i)fftshift’s.

Yes, I’m only doing the transforms along the spatial variables, not along the time variable. If the domain is 1D, I apply fft to vectors, and if it’s 2D I apply fft to matrices, so the (i)fftshift’s are applied to vectors or matrices according to the dimension of the space.
Also, the authors of that paper made available a Python script of that algorithm where they use the (i)fftshift’s when computing the transforms.

I think the problem might be here, in the use of the Fourier transform to compute the convolution term. One thing: the authors treat a slightly different equation than the one I posted; they work with the delayed form. The whole new idea of that paper is treating the integral term as two convolutions, one in space and the other in time,
[equation from the paper omitted]
But in practical terms, the convolution in space is the only one computed using fft; the other is treated in a different way.

I understand that they solve the integral term as shown in Eq. (10) from the paper you posted

where the kernel L is known beforehand, and its Fourier transform, DFT[L(T_u)], is precomputed for all u.

Searching for the authors’ code, I found two equivalent Python scripts for an older paper (2010) in which they solve the integral term. In one of the scripts, they perform the fft’s as usual, but they fftshift the kernel when they first compute it. However, in the other script (‘dnf_template.py’, distributed with the DNF GUI simulator) they use the Fourier transforms as you do, chaining fft’s and fftshift’s.

Is it possible that the shifting becomes necessary because of the way the kernel is defined in each script? You could try using rfft without shifting in general, and only shift the kernel when it is first computed.
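
That is, something along these lines (kernel_func, x, S and V are placeholders; for even N, fftshift and ifftshift coincide):

using FFTW

# Sample the kernel on the spatial grid (its center, x = 0, sits in the middle
# of the array), shift it once so the center lands at the first index as the
# DFT expects, and precompute its transform.
Kgrid = [kernel_func(xi) for xi in x]
Khat  = rfft(ifftshift(Kgrid))

# Afterwards, plain rfft/irfft at every time step, with no further shifting:
conv_term = irfft(Khat .* rfft(S.(V)), length(x))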

Of course, I cannot know with certainty what the problem is, as I only took a quick look over the material. I am just guessing here, and it’s very likely that I have got something wrong. However, I hope it helps!

1 Like

I knew the older paper from 2010, but I was unaware of this script. And in fact, if I just apply the fftshift to the kernel K when it’s pre-computed for all u, the correct result is obtained! So I no longer need to perform the chain of (i)fftshift’s every time I compute a transform.

However, in 2D the output is still wrong, and this is actually really strange, since the algorithm is exactly the same as the 1D algorithm. The only difference is that one deals with matrices and the other with vectors.
Since I’m using the multidimensional rfft, this means dealing with matrices of size (div(N,2)+1) × N; it’s probably not a good idea to apply fftshift to non-square matrices. This problem doesn’t exist in 1D because the vectors are just div(N,2)+1 long. I will try the ‘normal’ fft to see what I get.
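
(For 2D I have something like this in mind to try, the same idea as in 1D, with Kgrid, S and V as placeholders: shift the full N×N kernel in real space once, before the rfft, so the half-spectrum never needs shifting.)

using FFTW

# Hypothetical 2D version: Kgrid is the N×N kernel sampled with its center in
# the middle of the array; shift it once in real space, then transform.
Khat = rfft(ifftshift(Kgrid))                        # (N÷2+1) × N half-spectrum
conv_term = irfft(Khat .* rfft(S.(V)), size(V, 1))   # V is the N×N field in real space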

I was reading about computing convolutions using the fft and, as you said, that corresponds to a multiplication in the Fourier domain. However, the product of two fft’s corresponds to a circular convolution, not a linear one.
That’s probably why the kernel needs to be shifted first.

Thanks for the hint!!

1 Like