# Wrong output from det(Array{Complex})

I have a complex matrix

`A = Array{Complex32}([20-50im -10+20im -10+30im; -10.0+20.0im 26.0-52.0im -16.0+32.0im; -10.0+30.0im -16.0+32.0im 26.0-62.0im])`

Its determinant is 0, but when I call

```
det(A)       # out = -116.25 - 5.00im, should be 0
det(real(A)) # out = 0.0, correct
det(imag(A)) # out = -68.75, should also be 0
```

Strangely, when I promote the imaginary part, it gives the correct result

`det(Array{Real}(imag(A))) # outputs 0.0`

I suppose it has to do with the precision used to create A (Complex32). When I create it using Complex64, the determinants of the real and imaginary parts are correct, but not that of the matrix as a whole.

```
A = Array{Complex64}([20-50im -10+20im -10+30im; -10.0+20.0im 26.0-52.0im -16.0+32.0im; -10.0+30.0im -16.0+32.0im 26.0-62.0im])
det(A)       # -0.0074005127 + 0.006790161im
det(real(A)) # 0.0
det(imag(A)) # 0.0
```

My question is: why does `det(A)` give the wrong result (that is, an error too big), and how can I improve it?

Note that `Complex32` stores its real and imaginary components as `Float16`s (so that the total size of a `Complex32` is 32 bits). `Float16` is a really small floating-point type, so large errors are to be expected. For `Complex128`, the determinant is `-4.654054919228656e-11 + 4.618527782440645e-12im`, which isn't too bad. Floating-point math is just different from math with real numbers, so you may have to lower your expectations a bit when using it. See also this excellent post: PSA: floating-point arithmetic - #5 by Tamas_Papp.

If you really need a lot of precision (at the cost of performance), you could use `Complex{BigFloat}`. If you want to go that route, note that something like `26.0-52.0im` constructs a `Complex{Float64}`, and if the number you want isn't exactly representable as a `Complex{Float64}`, you'll lose precision already before converting to `BigFloat`. Alternatively, in your specific case, the real and imaginary parts are integers, so if you replace e.g. `26.0-52.0im` with `26-52im` and construct an `Array{Complex{BigFloat}}`, you won't lose precision.
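For illustration, here is a sketch of that route with the matrix from the question (note that on Julia ≥ 0.7, `det` lives in the `LinearAlgebra` standard library):

```
using LinearAlgebra

# Integer literals convert to BigFloat exactly, so no precision is lost
# in constructing the matrix itself.
A = Array{Complex{BigFloat}}([20-50im -10+20im -10+30im;
                              -10+20im 26-52im -16+32im;
                              -10+30im -16+32im 26-62im])
det(A)  # essentially zero, up to BigFloat (256-bit) roundoff in the LU factorization
```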


First of all, you should understand that accurate computation of the determinant is a notoriously difficult problem. See, for example,

In particular, that paper says that the condition number of `det(A)` for an n×n matrix `A` is on the order of `norm(A, Inf)^n`. For your matrix, that gives about 2.44×10⁶. So I would naively expect that computing determinants of your matrix could easily lose 6 digits to roundoff errors, which means that for `Float16` arithmetic (which keeps about 3 digits) I would expect to lose everything, and even `Float32` arithmetic (which keeps about 7 digits) is problematic.

`det(A)` works by performing the LU factorization of `A` and then multiplying the diagonal entries of `U`. This is efficient for large n, O(n³), but it incurs roundoff errors even for integer-valued matrix entries.
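A minimal sketch of that relationship, using the `lu` API from the `LinearAlgebra` standard library (at the time of this thread, on Julia 0.6, the equivalent function was `lufact`):

```
using LinearAlgebra

A = [4.0 3.0; 2.0 1.0]
F = lu(A)          # partial-pivoting LU: A[F.p, :] == F.L * F.U
prod(diag(F.U))    # -2.0, matching det(A)
```

Here partial pivoting happens to perform no row swap; in general, `det(A)` equals this product times the sign (±1) of the row permutation `F.p`.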

For a 3Ă3 matrix, you can instead use the direct permutation formula for the determinant, which is exact for integer values that arenât too big. This is built-in to the StaticArrays.jl package (it is a good idea to use StaticArrays anyway, for efficiency, if you are working with fixed size â€ 4Ă4 matrices):

```
julia> using StaticArrays

julia> A = @SMatrix Complex{Float32}[20-50im -10+20im -10+30im; -10.0+20.0im 26.0-52.0im -16.0+32.0im; -10.0+30.0im -16.0+32.0im 26.0-62.0im];

julia> det(A)
0.0f0 + 0.0f0im
```
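As a cross-check, the same expansion with exact `Complex{Int}` arithmetic (hand-rolled here, not the StaticArrays internals) shows the determinant is exactly zero:

```
A = Complex{Int}[20-50im -10+20im -10+30im;
                 -10+20im 26-52im -16+32im;
                 -10+30im -16+32im 26-62im]
# Cofactor expansion along the first row; integer arithmetic, no roundoff:
A[1,1]*(A[2,2]*A[3,3] - A[2,3]*A[3,2]) -
A[1,2]*(A[2,1]*A[3,3] - A[2,3]*A[3,1]) +
A[1,3]*(A[2,1]*A[3,2] - A[2,2]*A[3,1])   # 0 + 0im exactly
```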

Even with `SMatrix`, `Float16` won't work, because the individual terms in the determinant formula overflow the maximum `Float16` value. But it is really inadvisable to do computations with `Float16` anyway; `Float32` or `Float64` are much faster. `Float16` is mainly useful for low-precision storage of the result.
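To see the overflow concretely (a quick check added here, not from the thread; `floatmax` is the Julia ≥ 0.7 name, `realmax` on 0.6):

```
floatmax(Float16)   # 6.55e4, i.e. 65504

# A single term of the 3×3 determinant formula multiplies three entries;
# with magnitudes like those in this matrix, the product overflows:
Float16(50) * Float16(52) * Float16(62)   # Inf16 (true value 161200 > 65504)
```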

All this being said, you should really think carefully about whether you need to compute the determinant at all. It is rarely used nowadays in numerical computations; there is often a better way to do whatever you wanted the determinant for.


In addition to the excellent suggestions of @tkoolen and @stevengj, I would look at the context: why do you need determinants? If you are exploring the structure of a matrix (e.g. its rank), QR or, ideally, SVD factorizations are usually better options in numerical linear algebra. Using 64-bit floats (i.e. `Complex{Float64}` in your example) is usually worth it.

Actually, I do not need the determinant for my problem; I was just toying with Julia. But I do need to solve a big linear-algebra problem, though I only need approximate results (any error < 1e-2 is acceptable).

Depending on the problem, iterative methods may be a good choice for large problems that only need a "good enough" solution. Try
https://github.com/JuliaMath/IterativeSolvers.jl
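For example, a sketch using that package's `gmres` (an illustration only; check the package docs for the exact keyword names in your version):

```
using IterativeSolvers   # third-party package, ]add IterativeSolvers
using LinearAlgebra

A = rand(1000, 1000) + 500I   # diagonally dominant test matrix
b = rand(1000)
x = gmres(A, b)               # Krylov-subspace iterative solve
norm(A*x - b) / norm(b)       # relative residual of the approximate solution
```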

If you want to solve Ax=b, look at the accuracy of `A \ b` (which depends on the conditioning of A), not determinants.
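A sketch of that check, on a small hypothetical system for illustration:

```
using LinearAlgebra

A = [20.0 -10.0; -10.0 26.0]
b = [1.0, 2.0]
x = A \ b                 # direct solve via LU with partial pivoting
norm(A*x - b) / norm(b)   # relative residual: tiny if the solve went well
cond(A)                   # condition number: bounds the forward error in x
```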

Whether to do something fancier like iterative solvers depends on what you mean by "big". (1000×1000 is not big, for example.)

Are you sure? I think the 32 stands for the total number of bits across the real and imaginary parts.

```
julia> dump(Complex32)
Complex{Float16} <: Number
  re::Float16
  im::Float16
```

Right, one-letter difference: I confused it with `ComplexF32`…

Ah, I guess this will be changed in 0.7 (`ComplexF32` is not a thing in 0.6):

```
julia> ComplexF32
Complex{Float32}

julia> Complex32
WARNING: Base.Complex32 is deprecated, use ComplexF16 instead.
 in module Main
Complex{Float16}
```

I think this has long been a source of ambiguity and confusion.

See discussion and PR:
