FFT and IntervalArithmetic

Good evening,

I keep getting the following error message

ERROR: LoadError: MethodError: no method matching plan_bfft(::Array{Complex{IntervalArithmetic.Interval{Float64}},2}, ::UnitRange{Int64})

Does anyone know if IntervalArithmetic is compatible with fft? I could not find any documentation about it.
Or perhaps this is due to a bug elsewhere in my code.

I've spent some time thinking about this myself, mostly in the context of convolution. My tentative conclusion is that it's not a straightforward operation: at least for the purposes of convolution, it interacts quite badly with interval arithmetic's tendency to widen the interval at each arithmetic operation. If your goal is to compute convolutions with small kernels, I've had more luck with real-space approaches, like the sketch below.
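
For concreteness, here is a minimal sketch of what I mean by a real-space approach; conv_direct is just a name I made up, not anything exported by a package:

using IntervalArithmetic

# Direct (real-space) full convolution over intervals. For a kernel of
# length m this is O(n*m), and each output only accumulates the products
# that actually overlap it, so interval growth stays local.
function conv_direct(x::Vector{Interval{Float64}}, kern::Vector{Interval{Float64}})
    n, m = length(x), length(kern)
    y = fill(zero(Interval{Float64}), n + m - 1)
    for i in 1:n, j in 1:m
        y[i + j - 1] += x[i] * kern[j]
    end
    return y
end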

Unfortunately, in general I need to do convolutions on arrays that are not small, and computing these convolutions (a priori any number of them) with loops becomes a nightmare.

Do you think this would eventually be implemented in the package?

As a C-library wrapper, FFTW probably can't support this properly. But a pure-Julia implementation of the fast DFT should handle it straightforwardly. I seem to remember such a package existing, but my search skills were not up to finding it quickly. Maybe someone else knows?
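
To illustrate, here is a minimal, unoptimized sketch of how such a pure-Julia FFT could look; generic_fft is my own name for it, not the package I am half-remembering:

using IntervalArithmetic

# Recursive radix-2 Cooley-Tukey FFT over interval coefficients
# (power-of-two lengths only). The twiddle factors are built from
# pi_interval, so they are rigorous enclosures themselves.
function generic_fft(x::Vector{Complex{Interval{Float64}}})
    n = length(x)
    n == 1 && return copy(x)
    @assert ispow2(n) "radix-2 FFT requires a power-of-two length"
    even = generic_fft(x[1:2:end])
    odd  = generic_fft(x[2:2:end])
    mypi = pi_interval(Float64)
    ii = 0 + im
    X = similar(x)
    half = n ÷ 2
    for k in 1:half
        w = exp(-2 * mypi * ii * (k - 1) / n)   # interval twiddle factor
        X[k]        = even[k] + w * odd[k]
        X[k + half] = even[k] - w * odd[k]
    end
    return X
end

For example, generic_fft([Complex(k..k, 0..0) for k in 1:8]) returns rigorous enclosures of the DFT coefficients.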

Whether it's useful is another question; I'd be interested in your experience.

About the use:
I work on computer-assisted proofs in differential equations. Along the way, a key part of the work is to write the nonlinear vector field as a polynomial.
Using the regularity properties of the solutions of the differential operator, we pick a basis and wish to compute the coefficients (e.g. Taylor or Fourier series).
Since the vector field is a polynomial, discrete convolutions occur, and this is where the FFT comes in.

So, computing convolutions efficiently with interval arithmetic is a crucial part of the work. Until now, the work was done using Matlab and IntLab. Hopefully there is a solution; my experience with Julia and IntervalArithmetic has been great 😁

Do you actually need the FFT? Isn't the DFT enough? I.e., just write out the sum that occurs in the definition. In this case, it should "just work" with IntervalArithmetic.jl, except possibly that complex number support is still not so great.
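
For reference, the sum in question is just

$$X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N},$$

which involves only additions and multiplications of complex intervals.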

I am not an expert, but it seems like something like the following should work:


julia> using IntervalArithmetic

julia> mypi = pi_interval(Float64)
[3.14159, 3.1416]

julia> ii = 0 + im
0 + 1im

julia> exp(-2*mypi*ii / 10)
[0.809016, 0.809017] - [0.587785, 0.587786]*im

julia> xx = [i..i for i in 1:1024];

julia> k = 3
3

julia> N = length(xx)
1024

julia> Xk = sum(xx[n] * exp(-2*mypi*ii / N * k * n) for n in 1:N)
[511.999, 512.001] + [55627.1, 55627.2]*im

Isn't FFT just a fast implementation of the DFT?
Indeed, for now the DFT will have to do.
I am curious, could you elaborate on "complex number support is not so great still"? Or perhaps provide a link?

Yes, exactly, FFT is just a fast implementation of the DFT. But it's quite possible, probable even, that you don't really need it to be "fast"?

I was wondering whether the complex intervals in that equation would work, but apparently they do. At some point we are expecting to no longer have an Interval be a subtype of Real, in which case we will have to write our own complex routines (some of which already exist).

If you want to do anything rigorous with this stuff, then normally you will need to estimate the error when using a finite number of points in your DFT. But I'm guessing that you already have this under control.

Well, I guess you are right, some problems do not require a "fast" DFT.
Nonetheless, the faster the better 🙂. Especially if we have convolutions on a 2d/3d solution. To give some idea of the order of magnitude, I am dealing with an array of about 1000 elements on which I do a cubic convolution, so I would have liked the O(n log n) runtime given by the FFT.

Yes indeed, I have to be more careful with the error if I go with the DFT.
Thanks for your help!

I have successfully used the implementation here
https://github.com/MikaelSlevinsky/FastTransforms.jl/blob/master/src/fftBigFloat.jl#L16
The functions there are type-restricted to BigFloat etc., but these restrictions can be removed, and they then work equally well on DualNumbers etc.

This is surprising, since the FFT is both an orthogonal operation (up to scaling) and numerically stable. It seems like one should be able to find sharp L^2 bounds on the error in the FFT in the literature, and then embed a vector of intervals inside a ball.

It's not necessarily "error in the FFT" that you're trying to estimate here. Suppose you want to take the DFT of fill(a, n) where a = -1..1: among the concrete instantiations of this signal are fill(0, n) (which has a DFT of all zeros) and sin.(α*x) (which has all its power concentrated at a single frequency). As a consequence, it seems to me the DFT will have to span something like fill((-n..n) + (-n..n)*im, n). Then when you take the IFFT of this you get something considerably wider than the original signal.
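
Here is a quick sketch of that widening with a direct DFT sum (n = 8 for illustration):

using IntervalArithmetic

# DFT of n copies of a = -1..1: no cancellation is possible, so every
# coefficient ends up spanning roughly (-n..n) + (-n..n)*im.
a = -1..1
n = 8
mypi = pi_interval(Float64)
ii = 0 + im
X = [sum(a * exp(-2 * mypi * ii * k * j / n) for j in 0:n-1) for k in 0:n-1]
# Each X[k] encloses roughly [-8, 8] + [-8, 8]*im, even though the DFT of
# fill(0, n), one concrete instantiation of the signal, is identically zero.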

What you're running up against here is conceptually equivalent to the fact that while 1 - 1 == 0, (0..1) - (0..1) == (-1..1). You might be able to do better using affine arithmetic (because it keeps track of where each interval comes from and thus can permit more cancellation), but it's not obvious to me that this can be performed efficiently.
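
In REPL terms (the exact printing may vary between versions):

julia> using IntervalArithmetic

julia> x = 0..1;

julia> x - x   # the two operands are not recognized as the same variable
[-1, 1]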

To give some idea of the order of magnitude, I am dealing with an array of about 1000 elements on which I do a cubic convolution, so I would have liked the O(n log n) runtime given by the FFT.

If you're doing naive (non-fast) DFTs, the O(n^2) cost is the same as if you did the convolution directly in real space. Moreover, with interval arithmetic the two are not guaranteed to come to the same answer. My intuition is that you will likely get a much better answer (i.e., with tighter bounds) doing it in real space, since the FFT "mixes" the widths of all the intervals, whereas the real-space version may (for kernels with many zeros) do less mixing.

@dlfivefifty well, there is a "problem" (we consider this a beautiful feature) with complex multiplication: it rotates. So a simple multiplication of a square box by e^(iπ/4) will force you to widen the box by a factor of sqrt(2)…
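
For example (the result in the comment is approximate):

using IntervalArithmetic

z = Complex(-1..1, -1..1)                      # a square box of half-width 1
w = exp(pi_interval(Float64) * (0 + im) / 4)   # enclosure of e^(iπ/4)
z * w   # ≈ [-1.415, 1.415] + [-1.415, 1.415]*im: each side wider by sqrt(2)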

Yes, if you rotate two entries at a time it widens a lot. But if you have a bound on the L^2 norm you should be able to bound the total error (that is, embed a product of intervals in an n-ball) without bounding one operation at a time.

That's true, but with IntervalArithmetic you only get the L^∞ norm. The one you're probably thinking about is ball arithmetic, where you store the best approximation of the midpoint and a minimal radius of the ball in L^p.

I guess what I'm thinking is: if you know the worst-case L^∞ norm, you don't need to do each step of the algorithm with interval arithmetic. Do it with floating-point arithmetic using the midpoints, and then decide the output intervals using the norm. Using L^2 norms would just be an intermediary step, since the DFT is an orthogonal operation (up to scaling) in L^2.

Unless I'm mistaken, the intervals should only grow like O(n), which should be pretty sharp.
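
Here is a minimal sketch of that idea; interval_fft is just my name for it, and the rounding-error allowance below is a placeholder, not a certified constant:

using FFTW, IntervalArithmetic

# Midpoint trick: FFT the floating-point midpoints, then pad every output
# coefficient by a single radius ρ. Each DFT output is a sum of n terms
# x_j * w^(k*j) with |w| = 1, so perturbing each x_j within its radius
# moves each output by at most sum(radius.(x)).
function interval_fft(x::Vector{Interval{Float64}})
    n = length(x)
    m = mid.(x)
    X = fft(m)                       # plain floating-point FFT
    ρ = sum(radius.(x)) +
        # placeholder allowance for FFTW's rounding error; a real proof
        # needs a certified bound here
        10 * n * log2(n) * eps() * maximum(abs, m)
    return [Complex((real(z) - ρ)..(real(z) + ρ),
                    (imag(z) - ρ)..(imag(z) + ρ)) for z in X]
end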

If you come up with something concrete I'd certainly be interested in hearing about it.
