Julia Complex Exponential Evaluation vs Matlab

I have a vector of length 2048 that I have coded up in both Julia and Matlab. In both languages, the values of the vector are given by the following function:

nn = 2048;
h = 2 * pi / nn;
g = x -> exp(im * k * dot(vec, x));
G = [g([cos(i * h), sin(i * h)]) for i = 0:nn-1]
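For anyone wanting to run this as-is, here is a self-contained version, assuming vec = [cos(-pi/2), sin(-pi/2)] and k = 9im as in the comparison code posted later in the thread:

```julia
using LinearAlgebra  # for dot

nn = 2048
h = 2 * pi / nn
vec = [cos(-pi/2), sin(-pi/2)]  # assumed; taken from the comparison code below
k = 9im                         # or k = 9 for the real case
g = x -> exp(im * k * dot(vec, x))
G = [g([cos(i * h), sin(i * h)]) for i = 0:nn-1]
```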

Where x is a discretization of the unit circle. There are two different values of k for which I need to evaluate the function above: 9 and 9*im. For the real k, Matlab and Julia agree fairly well, but there seems to be an issue with the complex value of k. When I compare the outputs of G from the two languages, these are the maximum errors that I observe:

k=9 : 1.8209011484034673e-15
k=9*im : 6.366462912410498e-12

So far as I can tell, the discretization of the unit circle and all other parameters are the same in both Julia and Matlab; however, the error for the complex k seems unreasonably high. From prior checks, I am fairly sure that the Matlab code is correct, so is there any explanation for why the values differ so drastically? Or a possible fix?

Can you take a look at the guidelines here to make sure your example is reproducible? In particular, you didn’t define nn or h, so we can’t reproduce the behavior you’re seeing.


You are right! Sorry about that. I have included the parameters.

You didn’t say which element of G you are comparing to Matlab, or what you think the correct answer is for that element, or how you are doing the comparison.

(For example, for k=9im, the maximum element of G has an absolute value of ≈ 8103.08. An absolute difference of ≈ 6.36646e-12 in that element corresponds to a relative error of about 7.9e-16, or about 3.5ulps, which is quite reasonable for a difference between two calculations of this sort. You really need to look at relative errors to understand accuracy.)
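Spelling out that arithmetic (a quick sketch using the numbers quoted above; 8103.08 is just exp(9) rounded):

```julia
absdiff = 6.366462912410498e-12  # reported max |G - GM| for k = 9im
magnitude = exp(9)               # ≈ 8103.08, the largest |G[i]| for k = 9im
relerr = absdiff / magnitude     # ≈ 7.9e-16
ulps = relerr / eps()            # ≈ 3.5 ulps
```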


The error I calculated was the following:

maximum(abs.(G .- GM))

Where GM is the corresponding vector calculated from Matlab. So for that entry you calculated the relative error, which looks fine, but for calculations of this sort, shouldn’t the absolute error be smaller than 6.3e-12?

No, that’s not how computer arithmetic works. The absolute error depends on the magnitude of the quantities involved — if you have quantities on the order of 1e4, then you can’t usually expect absolute errors better than 1e4 * eps() ≈ 2e-12. (And usually the errors will be worse than that, because errors accumulate through several calculations.)
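As a quick sanity check of that estimate:

```julia
# Gap between adjacent Float64 values near 1e4 -- absolute errors
# smaller than this are not even representable at that magnitude.
spacing = eps(1e4)      # ≈ 1.8e-12
estimate = 1e4 * eps()  # ≈ 2.2e-12, the rough estimate quoted above
```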

Nor is there any reason to think that the errors are bigger in Julia than in Matlab — most likely they are both off by similar amounts (perhaps in different directions).

(If you could report the value of diff, i = findmax(abs.(G-GM)) as well as G[i] and GM[i], this would tell us which value you are concerned about and allow us to compare both the Julia and Matlab results to a more accurate BigFloat calculation.)


Comparing Float64 to BigFloat gives an error of 1.05e-11, which seems to indicate that the error in Matlab is in the same direction as in Julia, but half as big.

(The maximum absolute error is at i = 539, at an angle just past pi/2, although the relative error seems to grow with increasing i.)

Here’s the code that I used to compare:

using LinearAlgebra  # for dot

function G(T, k = 9 * im)
    nn = 2048
    h = 2 * T(pi) / nn
    vec = [cos(-T(pi)/2), sin(-T(pi)/2)]
    g = x -> exp(im * Complex{T}(k) * dot(vec, x))
    [g([cos(i * h), sin(i * h)]) for i = 0:nn-1]
end

diff, i = findmax(abs.(G(Float64) .- G(BigFloat)))
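For completeness, here is one way to look at the worst relative (rather than absolute) error against the BigFloat reference; the function from the post above is repeated so the snippet runs on its own:

```julia
using LinearAlgebra  # for dot

function G(T, k = 9 * im)
    nn = 2048
    h = 2 * T(pi) / nn
    vec = [cos(-T(pi)/2), sin(-T(pi)/2)]
    g = x -> exp(im * Complex{T}(k) * dot(vec, x))
    [g([cos(i * h), sin(i * h)]) for i = 0:nn-1]
end

GF = G(Float64)
GB = G(BigFloat)
# Worst *relative* error of the Float64 result vs. the BigFloat reference
relerr = Float64(maximum(abs.(GF .- GB) ./ abs.(GB)))
```

This should come out within a few orders of eps(), which is the right way to judge accuracy when the values span several orders of magnitude.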

Just to mention this: you can also get different errors across different Matlab versions.

No, because for one thing you don’t know that you’re looking at the same element. The element with the biggest difference with BigFloat—the biggest actual error—need not be the same as the element with the biggest difference with Matlab.

(Also, I don’t doubt that there are some elements where the actual error of Julia is bigger than the actual error in the Matlab result…and that there will be elements where the reverse is true. Floating-point errors fluctuate with differences in algorithm etc, and can sometimes be smaller by luck.)

Overall, it looks like the relative errors of the Julia code are all within reasonable bounds here, and I’m sure the same is true of the Matlab code.


True. It could also be that the error in Matlab is 1.5 times that of Julia.

Either way, it would probably be best to compute this in BigFloat if errors of that magnitude are a concern.

I think it’s more likely that there is just a misunderstanding here about floating-point arithmetic. Matlab’s algorithms are not any more accurate for this sort of thing AFAIK, and it is extremely unlikely that their code happens to work only for the precise floating-point errors made by Matlab.


Thank you all for responding to this. This clarifies some things! Much appreciated.