ERROR: LoadError: MethodError: no method matching plan_bfft(::Array{Complex{IntervalArithmetic.Interval{Float64}},2}, ::UnitRange{Int64})

Does anyone know if IntervalArithmetic.jl is compatible with fft? I could not find any documentation about it.
Or perhaps this is due to a different bug in my code.

I've spent some time thinking about this myself, mostly in the context of convolution. My tentative conclusion is that it's not a straightforward operation, and at least for the purposes of convolution it plays quite badly with respect to interval arithmetic's tendency to widen the interval for each arithmetic operation. If your goal is to compute convolutions with small kernels, I've had more luck with real-space approaches.
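For what it's worth, here is roughly what I mean by a real-space approach, sketched with a hand-rolled Ival type standing in for IntervalArithmetic.jl's Interval (so the snippet runs without the package; note it ignores the outward rounding of the bounds that the real package performs):

```julia
# Toy closed interval standing in for IntervalArithmetic.Interval.
# Outward rounding of the bounds is deliberately omitted here.
struct Ival
    lo::Float64
    hi::Float64
end
Base.:+(a::Ival, b::Ival) = Ival(a.lo + b.lo, a.hi + b.hi)
function Base.:*(a::Ival, b::Ival)
    # Interval product: hull of the four endpoint products.
    ps = (a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi)
    return Ival(minimum(ps), maximum(ps))
end

# Direct O(n*m) 1-D convolution; for a small kernel this is cheap, and each
# output interval is only a short sum of products, so widening stays modest.
function conv_direct(x::Vector{Ival}, k::Vector{Ival})
    y = [Ival(0.0, 0.0) for _ in 1:(length(x) + length(k) - 1)]
    for i in eachindex(x), j in eachindex(k)
        y[i + j - 1] += x[i] * k[j]
    end
    return y
end
```

With thin inputs this reproduces the usual convolution, e.g. `conv_direct([Ival(1,1), Ival(2,2)], [Ival(1,1), Ival(1,1)])` yields the thin intervals 1, 3, 2.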

Unfortunately, in general, I need to do convolutions on arrays that are not small. Using loops, computing these convolutions (a priori any number of them) becomes a nightmare.

Do you think this would eventually be implemented in the package?

As a C-library wrapper, FFTW probably can't support this properly. But a pure-Julia implementation of the fast DFT should do it straightforwardly. I seem to remember the existence of such a package but my search skills were not up to finding it quickly. Maybe someone else knows?

Whether it's useful is another question; I'd be interested in your experience.

About the use:
I work on computer-assisted proofs in differential equations. A key part of the work is to write the nonlinear vector field as a polynomial.
Using the regularity/properties of the solutions of the differential operator, we pick a basis and wish to compute the coefficients (e.g. Taylor/Fourier series).
Since the vector field is a polynomial, discrete convolutions occur, and this is where the FFT comes in.

So, computing convolutions efficiently with interval arithmetic is a crucial part of the work. Until now, the work was done using Matlab and INTLAB. Hopefully there is a solution; my experience with Julia and IntervalArithmetic.jl has been great.

Do you actually need the FFT? Isn't the DFT enough? I.e., just write out the sum that occurs in the definition. In this case, it should "just work" with IntervalArithmetic.jl, except possibly that complex number support is still not so great.

I am not an expert, but it seems like something like the following should work:

julia> using IntervalArithmetic
julia> mypi = pi_interval(Float64)
[3.14159, 3.1416]
julia> ii = 0 + im
0 + 1im
julia> exp(-2*mypi*ii / 10)
[0.809016, 0.809017] - [0.587785, 0.587786]*im
julia> xx = [i..i for i in 1:1024];
julia> k = 3
3
julia> N = length(xx)
1024
julia> Xk = sum(xx[n] * exp(-2*mypi*ii / 1024 * k * n) for n in 1:N)
[511.999, 512.001] + [55627.1, 55627.2]*im
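In function form, a naive O(N^2) DFT might look like the following. Note it uses the standard 0-based DFT indexing (the REPL example above used 1-based indices, which only shifts phases), and the piT argument is meant to let you pass a rigorous enclosure of pi such as pi_interval(Float64); that interval path is an assumption and only the plain Float64 path is exercised here:

```julia
# Naive O(N^2) DFT: X[k] = sum_n x[n] * exp(-2*pi*im*k*n/N), k = 0..N-1.
# Written generically over the element type so that, in principle, interval
# elements can flow through if piT is a rigorous enclosure of pi.
function naive_dft(x::AbstractVector, piT = pi)
    N = length(x)
    return [sum(x[n + 1] * exp(-2 * piT * im * k * n / N) for n in 0:N-1)
            for k in 0:N-1]
end
```

For a constant signal of ones, bin 0 is N and every other bin is (numerically) zero, as expected.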

Isn't FFT just a fast implementation of the DFT?
Indeed, for now the DFT will have to do.
I am curious, could you elaborate on "complex number support is not so great still"? Or perhaps provide a link?

Yes, exactly, FFT is just a fast implementation of DFT. But it's quite possible / probable that you don't really need it to be "fast"?

I was wondering whether the complex intervals in that equation would work, but apparently they do. At some point we are expecting to no longer have an Interval be a subtype of Real, in which case we will have to write our own complex routines (some of which already exist).

If you want to do anything rigorous with this stuff, then normally you will need to estimate the error when using a finite number of points in your DFT. But I'm guessing that you already have this under control.

Well, I guess you are right: some problems do not require a "fast" DFT.
Nonetheless, the faster the better. Especially if we have convolutions on a 2d/3d solution. To give some idea of the order, I am dealing with an array containing about 1000 elements on which I do a cubic convolution, so I would have liked to get the O(n*log(n)) runtime given by the FFT.

Yes indeed, I have to be more careful with the error if I go with the DFT.
Thanks for your help!

This is surprising since the FFT is both an orthogonal operation and stable. It seems like one would be able to find sharp L2 bounds on the error in the FFT in the literature, and then one could embed a vector of intervals inside a ball.

It's not necessarily "error in the FFT" that you're trying to estimate here. Suppose you want to take the DFT of fill(a, n) where a = -1..1: among the concrete instantiations of this signal are fill(0, n) (which has a DFT of all zeros) and sin.(α*x) (which has all its power concentrated at single frequencies). As a consequence, it seems to me the DFT will have to span something like fill((-n..n) + (-n..n)*im, n). Then when you take the IFFT of this you get something considerably wider than the original signal.
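That widening can be seen already in the k = 0 bin, which is just the plain sum of the signal; a toy sketch (the Ival type is a stand-in for IntervalArithmetic.jl's Interval, so the snippet runs without the package):

```julia
# Toy interval, standing in for IntervalArithmetic.Interval.
struct Ival
    lo::Float64
    hi::Float64
end
Base.:+(a::Ival, b::Ival) = Ival(a.lo + b.lo, a.hi + b.hi)

n = 8
x = fill(Ival(-1.0, 1.0), n)   # the signal fill(-1..1, n)
X0 = reduce(+, x)              # k = 0 DFT bin: every twiddle factor is 1
# X0 spans (-8..8): the width grows linearly in n, even though e.g.
# fill(0, n) is a concrete instantiation whose DFT is all zeros.
```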

What you're running up against here is conceptually equivalent to the fact that while 1 - 1 == 0, (0..1) - (0..1) == (-1..1). You might be able to do better using affine arithmetic (because it keeps track of where each interval comes from and thus can permit more cancellation), but it's not obvious to me that this can be performed efficiently.
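A minimal demonstration of that dependency problem, again with a toy interval type standing in for the real one (IntervalArithmetic.jl behaves the same way):

```julia
# Toy interval, standing in for IntervalArithmetic.Interval.
struct Ival
    lo::Float64
    hi::Float64
end
# Interval subtraction: the result must cover x - y for every x in a, y in b.
Base.:-(a::Ival, b::Ival) = Ival(a.lo - b.hi, a.hi - b.lo)

a = Ival(0.0, 1.0)
d = a - a   # subtraction cannot see that both operands are the *same* a
# d spans (-1..1), not the exact answer 0.
```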

To give some idea of the order, I am dealing with an array containing about 1000 elements on which I do a cubic convolution, so I would have liked to get the O(n*log(n)) runtime given by the FFT.

If you're doing a naive DFT, the O(n^2) cost is the same as doing the convolution directly in real space. Moreover, with interval arithmetic the two are not guaranteed to come to the same answer. My intuition is that you will likely get a much better answer (i.e., with tighter bounds) doing it in real space, since the FFT "mixes" the widths of all the intervals, whereas the real-space version may (for kernels with many zeros) do less mixing.

@dlfivefifty well, there is a "problem" (we consider this a beautiful feature) with complex multiplication: it rotates. So even a simple rotation of a square box, e.g. multiplication by e^(iπ/4), will ask you to widen the box by sqrt(2)…
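To make the rotation widening concrete: for a box (rectangular) complex interval, the worst case is a rotation by π/4, which widens the bounding box by a factor of sqrt(2). A toy sketch with thin (exact) rotation coefficients:

```julia
# Toy interval, standing in for a rectangular (box) complex interval's parts.
struct Ival
    lo::Float64
    hi::Float64
end
Base.:-(a::Ival, b::Ival) = Ival(a.lo - b.hi, a.hi - b.lo)
# Scaling by a thin (exact) real number s.
scale(a::Ival, s::Float64) = s >= 0 ? Ival(a.lo * s, a.hi * s) : Ival(a.hi * s, a.lo * s)

a = Ival(-1.0, 1.0)   # real part of the box [-1,1] + [-1,1]im
b = Ival(-1.0, 1.0)   # imaginary part
c = cos(pi / 4)       # = sin(pi/4); the rotation factor e^{i*pi/4} = c + c*im

# Real part of (a + b*im) * (c + c*im) under interval arithmetic: a*c - b*c.
re = scale(a, c) - scale(b, c)
# re spans roughly (-sqrt(2) .. sqrt(2)): the box grew by a factor sqrt(2),
# even though a rotation does not change the size of the underlying set.
```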

Yes, if you rotate 2 entries at a time it widens a lot. But if you have a bound on the L^2 norm you should be able to bound the total error (that is, embed a product of intervals in an n-ball) without bounding each operation one at a time.

That's true, but with IntervalArithmetic.jl you only get the L^∞ norm. The one you're probably thinking about is ball arithmetic, where you store the best approximation of the midpoint and a minimal radius of the ball in L^p.

I guess what I'm thinking is: if you know the worst-case L^∞ norm, you don't need to do each step of the algorithm with interval arithmetic: do it with floating-point arithmetic using the midpoints and then decide the output intervals using the norm. Using L^2 norms would just be an intermediary step, as the DFT is an orthogonal operation in L^2.

Unless I'm mistaken, the intervals should only grow like O(n), which should be pretty sharp.
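A sketch of the midpoint-plus-norm-bound idea described above: run the DFT in ordinary floating point on the midpoints, then attach a single radius bound to every output bin. The function name dft_mid_rad is made up for illustration, and floating-point rounding error in the midpoint DFT is deliberately ignored; a rigorous version must bound that too.

```julia
# Midpoint-radius sketch: DFT the midpoints in plain floating point, then
# bound every bin's radius at once. Since |e^{-i*theta}| = 1, bin k satisfies
# radius(X[k]) <= sum(rad) -- i.e. the intervals grow like O(n), as claimed
# in the post above. (Rounding error of the float DFT is NOT accounted for.)
function dft_mid_rad(mid::Vector{ComplexF64}, rad::Vector{Float64})
    N = length(mid)
    X = [sum(mid[j + 1] * exp(-2im * pi * k * j / N) for j in 0:N-1)
         for k in 0:N-1]
    R = sum(rad)          # one radius bound shared by every output bin
    return X, fill(R, N)
end
```

The point of the design is exactly what the post suggests: the expensive inner loop runs on plain floats (so an actual FFT could be substituted for the naive sum), and rigor is recovered from a single norm bound rather than per-operation interval bookkeeping.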