# Eigenvalues are approximately correct, not exact

Hi everybody,

```julia
using LinearAlgebra
eigvals([0.0 1im; 1im 0.0])
```

gives:

```
2-element Vector{ComplexF64}:
 0.0 + 0.9999999999999997im
 2.7755575615628914e-17 - 1.0im
```

which is correct only up to floating-point approximation. I am new to Julia and wondering whether I am missing something. My actual question is how `eigvals` works.
Thank you all

`eigvals` uses an iterative method, even for problems this small, so what you got is reasonable.
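In practice, results like these are checked against the exact answer within a tolerance rather than exactly. A minimal sketch (the exact eigenvalues here are ±i; the `1e-12` tolerance is an arbitrary illustrative choice):

```julia
using LinearAlgebra

vals = eigvals([0.0 1im; 1im 0.0])
# Sort by imaginary part to fix the ordering, then compare
# within a tolerance instead of demanding exact equality.
sorted = sort(vals, by = imag)
@assert isapprox(sorted, [-im, im]; atol = 1e-12)
```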


When dealing with floats, you should in general expect answers that are “correct but with approximation”, because of the way floating-point numbers are stored internally.

As another illustration of this, consider

```julia
julia> 0.1 + 0.2
0.30000000000000004
```
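A consequence worth knowing: exact comparison with `==` fails for such results, while `isapprox` (the `≈` operator) compares up to a floating-point tolerance. A minimal sketch:

```julia
# == demands bit-exact equality, which round-off breaks
@assert !(0.1 + 0.2 == 0.3)
# ≈ (isapprox) tolerates the small floating-point error
@assert 0.1 + 0.2 ≈ 0.3
```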

Also, if you need more precision, you can use `BigFloat`s.

```julia
julia> eigvals([big(0) 1im; 1im 0])
2-element Vector{Complex{BigFloat}}:
 0.0 - 1.0im
 0.0 + 1.0im
```

Thank you all for the answers. Sorry if my question was naive; my background is not in CS.
One more question, if you could comment on it: I am doing a matrix computation in which I use FLoops.jl to parallelize the work. The results I get with and without `@floop` agree only after rounding, `round.(A, digits=14)`. `@floop` really pays off here, speeding up the computation about 10x. Can you explain this, and how it relates to the previous question?

Floating point round-off depends on the order of operations. Doing calculations in parallel vs sequentially will change the order of operations.
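A minimal illustration of this order dependence: floating-point addition is not associative, so grouping the same three terms differently (as a parallel reduction effectively does) changes the last bits of the result.

```julia
# The same three numbers, summed with different groupings:
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
@assert a != b          # not bit-identical...
@assert a ≈ b           # ...but equal up to round-off
```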
