Has anyone noticed numerical instability in Base.LinAlg.cholfact or Base.LinAlg.ishermitian?

I have been using Distributions.jl and ConjugatePriors.jl to (amongst other things) sample from a NormalWishart distribution. I am getting a number of problems that trace back to Base.LinAlg.ishermitian reporting a matrix that is definitely Hermitian as asymmetric. When I investigate, I see something like:

julia> A
2×2 Array{Float64,2}:
  0.113637   -0.0338744
 -0.0338744   0.0124418

julia> Hermitian(A)
2×2 Hermitian{Float64,Array{Float64,2}}:
  0.113637   -0.0338744
 -0.0338744   0.0124418

julia> A-Hermitian(A)
2×2 Array{Float64,2}:
  0.0          0.0
 -6.93889e-18  0.0

julia> ishermitian(A)
false

Are there no checks on relative error in ishermitian? This seems like something that would cause problems elsewhere, and be relatively simple to fix.

The solution is to wrap your matrix in Hermitian or Symmetric if you want Julia to ignore the fact that your matrix is not symmetric. Alternatively, you can make it exactly symmetric with (A+A')/2 or LinAlg.copytri!. We have decided not to use a tolerance. Please search the issue tracker if you are interested in the details of that discussion.
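For example, a minimal sketch of both workarounds (using the Julia 0.6-era names from this thread; the one-ulp asymmetry is written in by hand to stand in for the rounding error in your matrix):

# The tiny asymmetry mimics the -6.93889e-18 entry shown above.
A = [0.113637 -0.0338744; -0.0338744 0.0124418]
A[2, 1] -= 6.93889e-18

ishermitian(A)           # false: entries are compared exactly, bit for bit

# Option 1: wrap the matrix so that only the stored triangle is ever used
cholfact(Hermitian(A))   # succeeds

# Option 2: make the matrix exactly symmetric before handing it on
B = (A + A') / 2         # or LinAlg.copytri!(copy(A), 'U') to copy the upper triangle down
ishermitian(B)           # true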

The issue is that the matrix is symmetric, but is being detected as asymmetric due to numerical instability. I would definitely be interested in seeing the discussion, but when I looked I never really saw a good answer for why no relative tolerance is implemented. I know it was discussed in e.g. Issue #10298 and Issue #10369, but I didn’t see much consensus on why a relative tolerance should not be implemented. There was a brief discussion, but it mostly seemed to boil down to “well, MATLAB doesn’t” and “what tolerance would we pick?”. I don’t really see anything in the discussion that gives a good reason why a relative tolerance of, say, 1e-16 would cause problems. Is there something I missed?
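For concreteness, the kind of check I have in mind would look something like this (purely a hypothetical sketch, nothing that exists in Base; the name approx_ishermitian and the default tolerance are made up):

# Hypothetical: symmetry check with a relative tolerance.
# For the matrix above, the asymmetry (~6.9e-18) is below
# tol * maximum(abs, A) ≈ 1e-16 * 0.113637 ≈ 1.1e-17, so it would pass.
function approx_ishermitian(A::AbstractMatrix; tol = 1e-16)
    size(A, 1) == size(A, 2) || return false
    scale = maximum(abs, A)
    scale == 0 && return true
    for j in 1:size(A, 2), i in 1:j
        abs(A[i, j] - conj(A[j, i])) <= tol * scale || return false
    end
    return true
end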

I do know that I can use something like cholfact(Hermitian(A)) when writing my own code, but that doesn’t always help when I am relying on other packages that I cannot edit directly. In this case I think I can make it work, but it seems like something that could lead to a lot of problems, or to unnecessary effort spent on workarounds where none should be needed.

As an example: the idea behind a tolerance in rank estimation of a matrix is to account for the errors introduced by the procedure that determines the rank, which is typically an SVD. The smallest computed singular value of a rank-deficient matrix will typically not be zero because of rounding errors, and the tolerance adjusts for that. However, there is absolutely no rounding error in the symmetry check itself, so the motivation from rank estimation doesn’t carry over. Instead, a tolerance would have to be based on an assumption about the computation that happened before the check for symmetry. It is impossible to know the magnitude of the rounding errors introduced before the call to ishermitian, which is why we don’t use a tolerance.
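To make that contrast concrete, here is roughly how the tolerance plays out in rank estimation (a small sketch; the exact singular values depend on the platform and BLAS):

# x*x' has rank 1 in exact arithmetic, but its stored entries are rounded,
# so the smallest computed singular value is typically tiny rather than
# exactly zero. rank() treats values below a default tolerance as zero to
# compensate for exactly that kind of error.
x = [1.0, sqrt(2.0)]
A = x * x'
svdvals(A)   # largest ≈ 3.0, smallest on the order of machine epsilon
rank(A)      # 1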

Again, your matrix is not symmetric. Probably you mean that, had it been computed in infinite precision, it would have been symmetric; in fact it was computed with floating-point arithmetic, which introduced rounding errors. To enforce structural symmetry you’d have to use the Symmetric or Hermitian structs. If a package you are using doesn’t work with the wrappers, then I’d suggest that you reach out to the maintainers of the packages or, even better, fix the issue with a pull request.
