Wrong output from det(Array{Complex})

linearalgebra

#1

I have a complex matrix

A = Array{Complex32}([20-50im -10+20im -10+30im; -10.0+20.0im 26.0-52.0im -16.0+32.0im; -10.0+30.0im -16.0+32.0im 26.0-62.0im])

Its determinant is 0, but when I call

det(A) #out=-116.25 - 5.00im, should be 0
det(real(A)) #out=0.0, correct
det(imag(A)) #out=-68.75, should also be 0

Strangely, when I promote the imaginary part, it gives the correct result

det(Array{Real}(imag(A))) # outputs 0.0

I suppose it has to do with the precision used to create A (Complex32). When I create it using Complex64, the determinants of the real and imaginary parts are correct, but not that of the matrix as a whole.

A = Array{Complex64}([20-50im -10+20im -10+30im; -10.0+20.0im 26.0-52.0im -16.0+32.0im; -10.0+30.0im -16.0+32.0im 26.0-62.0im])
det(A) #-0.0074005127 + 0.006790161im
det(real(A)) #0.0
det(imag(A)) #0.0

My question is: why does det(A) give the wrong result (that is, an error this large), and how can I improve it?


#2

Note that Complex32 stores its real and imaginary components as Float16s (so that the total size of a Complex32 is 32 bits). Float16 is a really small floating-point type, so large errors can be expected. For Complex128, the determinant is -4.654054919228656e-11 + 4.618527782440645e-12im, which isn’t too bad. Floating-point math is just different from math with real numbers, so you may have to lower your expectations a bit when using it. See also this excellent post: PSA: floating-point arithmetic.
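For a quick sense of the precision gap, compare the machine epsilons of the three float types (a minimal sketch; note that on Julia ≥ 0.7, Complex32 is renamed ComplexF16):

```julia
# Machine epsilon shows roughly how many significant digits each type carries:
eps(Float16)   # 0.000977, about 3 decimal digits
eps(Float32)   # 1.1920929e-7, about 7 decimal digits
eps(Float64)   # 2.220446049250313e-16, about 16 decimal digits
```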

If you really need a lot of precision (at the cost of performance), you could use Complex{BigFloat}. If you want to go that route, note that something like 26.0-52.0im constructs a Complex{Float64}, and if the number you want isn’t exactly representable as a Complex{Float64}, you’ll lose precision already before converting to BigFloat. Alternatively, in your specific case, the real and imaginary parts are integers, so if you replace e.g. 26.0-52.0im with 26-52im and construct an Array{BigFloat}, you won’t lose precision.
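For example, here is a sketch of the BigFloat route using integer literals (on Julia ≥ 0.7 you need `using LinearAlgebra` for det):

```julia
using LinearAlgebra  # needed for det on Julia ≥ 0.7

# Integer literals are exact, so nothing is lost before the promotion
# to BigFloat (default precision: 256 bits, about 77 decimal digits).
A = Array{Complex{BigFloat}}([20-50im -10+20im -10+30im;
                              -10+20im 26-52im -16+32im;
                              -10+30im -16+32im 26-62im])
abs(det(A))   # zero up to BigFloat roundoff, i.e. vanishingly small
```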

See also: Accuracy of Computing Determinants.


#3

First of all, you should understand that accurate computation of the determinant is a notoriously difficult problem. See, for example,

In particular, that paper says that the condition number of det(A) for an n×n matrix A is on the order of norm(A, Inf)^n. For your matrix, that gives about 2.44×10⁶. So I would naively expect that computing determinants of your matrix could easily lose 6 digits to roundoff errors, which means that for Float16 arithmetic (which keeps about 3 decimal digits) I would expect to lose everything, and even Float32 arithmetic (which keeps about 7 decimal digits) is problematic.
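A sketch of reproducing that estimate (`opnorm` is the Julia ≥ 0.7 spelling of the operator norm used above):

```julia
using LinearAlgebra

A = [20-50im -10+20im -10+30im;
     -10+20im 26-52im -16+32im;
     -10+30im -16+32im 26-62im]
opnorm(A, Inf)^3   # about 2.44e6, the expected error amplification for n = 3
```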

det(A) works by performing the LU factorization of A and then multiplying the diagonal entries of U. This is efficient for large n, O(n³), but it incurs roundoff errors even for integer-valued matrix entries.
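Spelled out by hand on a small example matrix of my own choosing (Julia ≥ 0.7 syntax), that looks like:

```julia
using LinearAlgebra

# det([2 1 1; 4 3 3; 8 7 9]) = 4 exactly, but LU reaches it via
# pivot divisions that can each introduce roundoff.
A = Float64[2 1 1; 4 3 3; 8 7 9]
F = lu(A)            # P*A = L*U with partial pivoting
prod(diag(F.U))      # equals det(A) up to the sign of the row permutation
```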

For a 3×3 matrix, you can instead use the direct permutation formula for the determinant, which is exact for integer values that aren’t too big. This is built into the StaticArrays.jl package (it is a good idea to use StaticArrays anyway, for efficiency, if you are working with fixed-size matrices of dimension ≤ 4×4):

julia> using StaticArrays

julia> A = @SMatrix Complex{Float32}[20-50im -10+20im -10+30im; -10.0+20.0im 26.0-52.0im -16.0+32.0im; -10.0+30.0im -16.0+32.0im 26.0-62.0im];

julia> det(A)
0.0f0 + 0.0f0im

Even with SMatrix, Float16 won’t work because the individual terms in the determinant formula overflow the maximum Float16 value. But it is really inadvisable to do computations with Float16 anyway — Float32 or Float64 are much faster. Float16 is mainly useful for low-precision storage of the result.
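The overflow is easy to verify from your matrix’s entries (`floatmax` is the Julia ≥ 0.7 name for `realmax`):

```julia
floatmax(Float16)                        # 65504, the largest finite Float16
abs((20-50im) * (26-52im) * (26-62im))   # one term of the formula, about 2.1e5
# 2.1e5 > 65504, so this single term already overflows to Inf in Float16
```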

All this being said, you should really think carefully about whether you need to compute the determinant at all. It is rarely used nowadays in numerical computations — there is often a better way to do whatever you wanted the determinant for.


#4

In addition to the excellent suggestions of @tkoolen and @stevengj, I would look at the context: why do you need determinants? If you are exploring the structure of a matrix (e.g., its rank), QR or, ideally, SVD factorizations are usually better options in numerical linear algebra. Using 64-bit floats (i.e., Complex{Float64} in your example) is usually worth it.
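As a sketch of what that looks like for the matrix in this thread (Julia ≥ 0.7 syntax):

```julia
using LinearAlgebra

A = [20-50im -10+20im -10+30im;
     -10+20im 26-52im -16+32im;
     -10+30im -16+32im 26-62im]
svdvals(A)   # smallest singular value is ~0 relative to the largest
rank(A)      # rank deficiency, consistent with det(A) == 0 mathematically
```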


#5

Thank you for your reply.

Actually, I do not need the determinant for my problem; I was just toying with Julia. But I do need to solve a big linear-algebra problem, though I only need approximate results (any error < 1e-2 is acceptable).


#6

Depending on the problem, iterative methods may be a good choice for large problems in need of a “good enough” solution. Try


#7

If you want to solve Ax=b, look at the accuracy of A \ b (which depends on the conditioning of A), not determinants.

Whether to do something fancier like iterative solvers depends on what you mean by “big”. (1000×1000 is not big, for example.)
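For the direct solve, a sketch of how to judge the accuracy, on made-up random data:

```julia
using LinearAlgebra

A = rand(1000, 1000); b = rand(1000)
x = A \ b                   # direct solve via LU factorization
norm(A * x - b) / norm(b)   # relative residual: tiny, on the order of eps()
cond(A)                     # forward error in x is roughly bounded by cond(A) * eps()
```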


#8

Are you sure? I think 32 stands for the number of bits in both the real and imaginary parts.


#9
julia> dump(Complex32)
Complex{Float16} <: Number
  re::Float16
  im::Float16

#10

Right, one letter difference: I confused it with ComplexF32.


#11

Ah, I guess this will be changed in 0.7 (ComplexF32 is not a thing in 0.6):

julia> ComplexF32
Complex{Float32}

julia> Complex32
WARNING: Base.Complex32 is deprecated, use ComplexF16 instead.
 in module Main
Complex{Float16}

#12

I think this has long been a source of ambiguity and confusion.

See discussion and PR: