det(A) #out=-116.25 - 5.00im, should be 0
det(real(A)) #out=0.0, correct
det(imag(A)) #out=-68.75, should also be 0

Strangely, when I promote the imaginary part, it gives the correct result

det(Array{Real}(imag(A))) # outputs 0.0

I suppose it has to do with the precision used to create A (Complex32). When I create it using Complex128, the determinants of the real and imaginary parts are correct, but not of the matrix as a whole.

Note that Complex32 stores its real and imaginary components as Float16s (so that the total size of a Complex32 is 32 bits). Float16 is a really small floating-point type, so large errors can be expected. For Complex128, the determinant is -4.654054919228656e-11 + 4.618527782440645e-12im, which isn't too bad. Floating-point math is just different from math with real numbers, so you may have to lower your expectations a bit when using it. See also this excellent post: PSA: floating-point arithmetic - #5 by Tamas_Papp.
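To see the precision gap concretely, `eps` gives the relative spacing of floating-point numbers around 1.0 for each type:

```julia
# Relative precision of each floating-point type at 1.0:
# roughly 3, 7, and 16 decimal digits respectively.
eps(Float16)  # 0.000977
eps(Float32)  # 1.1920929f-7
eps(Float64)  # 2.220446049250313e-16
```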

If you really need a lot of precision (at the cost of performance), you could use Complex{BigFloat}. If you want to go that route, note that something like 26.0-52.0im constructs a Complex{Float64}, and if the number you want isn't exactly representable as a Complex{Float64}, you'll lose precision already before converting to BigFloat. Alternatively, in your specific case, the real and imaginary parts are integers, so if you replace e.g. 26.0-52.0im with 26-52im and construct an Array{Complex{BigFloat}}, you won't lose precision.
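A small sketch of that last point, with a made-up 2×2 matrix (not the one from this thread): building the array from integer literals keeps the entries exact until they are converted to BigFloat.

```julia
using LinearAlgebra

# Hypothetical 2×2 integer-valued complex matrix. Integer literals like
# 1+2im are exact, so converting to Complex{BigFloat} loses nothing;
# det then runs in BigFloat (default ~77 decimal digit) arithmetic.
A = Array{Complex{BigFloat}}([1+2im  3+0im;
                              0+2im  1-1im])
d = det(A)  # ≈ 3 - 5im, the exact value of this determinant
```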

In particular, that paper says that the condition number of det(A) for an n×n matrix A is on the order of norm(A, Inf)^n. For your matrix, that gives about 2.44×10⁶. So, I would naively expect that computing determinants of your matrix could easily lose 6 digits to roundoff errors, which means that for Float16 arithmetic (which keeps 3 digits) I would expect to lose everything, and even Float32 arithmetic (which keeps about 7 digits) is problematic.
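As a sketch of that estimate (with a made-up 3×3 matrix, since the original isn't reproduced here):

```julia
using LinearAlgebra

# Rough conditioning of det: norm(A, Inf)^n, where norm(A, Inf) is the
# largest entry in absolute value. Hypothetical matrix for illustration.
A = [26.0 -52.0 13.0;
      7.0  11.0 -3.0;
      5.0  -8.0  2.0]
n = size(A, 1)
κ = norm(A, Inf)^n      # 52.0^3 ≈ 1.4e5
digits_lost = log10(κ)  # ≈ 5 digits expected lost to roundoff
```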

det(A) works by performing the LU factorization of A and then multiplying the diagonal entries of U (times the sign of the row permutation). This is efficient for large n, O(n³), but it incurs roundoff errors even for integer-valued matrix entries.
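A minimal sketch of that procedure (the `perm_sign` helper below is mine for illustration, not part of LinearAlgebra's API):

```julia
using LinearAlgebra

# Sign of a permutation vector via cycle decomposition:
# each cycle of even length flips the sign.
function perm_sign(p)
    seen = falses(length(p))
    s = 1
    for i in eachindex(p)
        if !seen[i]
            len = 0
            j = i
            while !seen[j]
                seen[j] = true
                j = p[j]
                len += 1
            end
            s *= isodd(len) ? 1 : -1
        end
    end
    return s
end

A = [2.0 1.0;
     4.0 3.0]
F = lu(A)                             # A = P * L * U with partial pivoting
d = prod(diag(F.U)) * perm_sign(F.p)  # 2.0, same as det(A)
```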

For a 3×3 matrix, you can instead use the direct permutation formula for the determinant, which is exact for integer values that aren't too big. This is built into the StaticArrays.jl package (it is a good idea to use StaticArrays anyway, for efficiency, if you are working with fixed-size ≤ 4×4 matrices):
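For example, with a hypothetical integer 3×3 matrix (made-up entries, not the thread's matrix):

```julia
using StaticArrays, LinearAlgebra

# det of a 3×3 SMatrix is computed from the explicit cofactor formula,
# with no divisions, so modest integer entries give an exact result.
S = @SMatrix [26 -52  13;
               7  11  -3;
               5  -8   2]
det(S)  # 13, exact
```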

Even with SMatrix, Float16 won't work because the individual terms in the determinant formula overflow the maximum Float16 value. But it is really inadvisable to do computations with Float16 anyway; Float32 or Float64 are much faster. Float16 is mainly useful for low-precision storage of the result.
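The overflow is easy to demonstrate: the largest finite Float16 is 65504, so even modest products blow up.

```julia
# floatmax(Float16) == 65504, so 300^2 = 90000 already overflows to Inf16.
x = Float16(300)
x * x              # Inf16
floatmax(Float16)  # Float16(6.55e4)
```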

All this being said, you should really think carefully about whether you need to compute the determinant at all. It is rarely used nowadays in numerical computations â there is often a better way to do whatever you wanted the determinant for.

In addition to the excellent suggestions of @tkoolen and @stevengj, I would look at the context: why do you need determinants? If you are exploring the structure of a matrix (e.g. its rank), QR or, ideally, SVD factorizations are usually better options in numerical linear algebra. Using 64-bit floats (i.e. Complex{Float64} in your example) is usually worth it.
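For instance, `rank` (which thresholds singular values internally) answers the "is this matrix singular?" question far more robustly than checking whether det is near zero; a hypothetical rank-deficient example:

```julia
using LinearAlgebra

# A rank-deficient matrix: the second row is twice the first.
A = [1.0 2.0;
     2.0 4.0]
rank(A)     # 1: counts singular values above a tolerance
svdvals(A)  # one singular value ≈ 5.0, the other ≈ 0
```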

Actually, I do not need the determinant for my problem. I was just toying with Julia. But I do need to solve a big linear-algebra problem, though I just need approximate results (any error < 1e-2 is acceptable).