This question is not fully related to optimization (though I am using it for optimization), but a quick search on the Julia forum hinted to me that such questions go under the optimization tag.

I can generate PSD matrices with:

n = 10
Q = randn(n,n); # random matrix with iid Gaussian entries
Q = Q'*Q; # PSD by construction (up to roundoff)
Q = (Q + Q')/2; # enforce symmetry (numerical issues sometimes break it)

Sometimes the minimum eigenvalue of such a generated Q is around -10^{-8}, and my solvers then reject it as not PSD. Adding a small diagonal shift of 10^{-6} is typically sufficient, but it does not rescue every instance. I wanted to ask whether there is a systematic way to generate sparse PSD matrices.

Edit: I fixed the issue by controlling the eigenvalues:
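(The fix itself isn't shown above; as a sketch of the kind of approach meant here, one common way to control the eigenvalues is to build Q from a random orthogonal basis and an explicitly chosen spectrum. The cutoff 1e-3 below is an arbitrary illustration, not the original code:)

```julia
using LinearAlgebra

n = 10
U = Matrix(qr(randn(n, n)).Q)        # random orthogonal basis
λ = rand(n) .+ 1e-3                  # eigenvalues bounded away from zero
Q = Symmetric(U * Diagonal(λ) * U')  # positive definite by construction
```

With the spectrum chosen explicitly, `isposdef(Q)` holds and the conditioning is under your control.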

The fixed version generates well-conditioned positive definite matrices, not rank-deficient semidefinite ones. That won't give you what you want if you really need semidefinite matrices that can be rank deficient.

In the second example, the sparsification introduces rank deficiency in Q in some cases, depending on which entries get set to zero. So you do get semidefinite matrices sometimes. But what it means for a matrix to be semidefinite in the presence of rounding is a bit tricky.

Even if your matrix really is truly rank-deficient semidefinite, which is hard to achieve this way with rounding unless the sparsity structure of Q^TQ enforces rank deficiency, an eigenvalue computation will at best give you the eigenvalues of Q+E_1, where \|E_1\| can be bounded in terms of the unit roundoff and \|Q\|. So you can have a PSD matrix Q and still compute a small negative eigenvalue of Q+E_1. (Although -10^{-8} seems too large in magnitude unless you did some 8-digit rounding as in your fixed example; -10^{-16} is closer to what I would expect in double precision.) And even if your matrix is PSD and you compute nonnegative eigenvalues, if it is nearly rank deficient, whatever code you run afterward might effectively solve some problem for a different perturbed matrix Q+E_2 that is not positive semidefinite. So you can still run into trouble. In practice, when rounding is involved, being too close to an indefinite matrix can be just as bad as being indefinite.

Of course, if you are fine with generating well-conditioned positive definite matrices, your fixed example works. Otherwise, modifying Q with a diagonal shift can work in some contexts, or perhaps the code consuming Q needs to be more robust in handling matrices with small negative eigenvalues. There is no generally applicable rule for this sort of problem; it depends on what you are trying to do with Q.
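(For reference, a sketch of the diagonal-shift idea; one variant scales the shift to \|Q\| rather than using a fixed 10^{-6}, which is a choice I'm assuming here, not something from the thread:)

```julia
using LinearAlgebra

n = 10
A = randn(n, n)
Q = A' * A                   # PSD up to roundoff
δ = sqrt(eps()) * opnorm(Q)  # shift scaled to the matrix norm (an assumed heuristic)
Q = Symmetric(Q + δ * I)     # safely positive definite after the shift
```

The shift trades exact semidefiniteness for a guaranteed positive definite matrix, which is exactly the well-conditioned case discussed above.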

Wait, what? I'm not a BLAS/LAPACK pro, but to me this looks like a bug in Julia Base. The operation Q' * Q should map to a BLAS syrk, which is guaranteed to preserve symmetry because it computes each off-diagonal value only once and then mirrors it. It works like that in Matlab.

If the result does not come out symmetric, that suggests it is mapping to gemm or symm instead, doing twice as many flops as necessary.

The debugger shows that it eventually calls syrk. That's nice, but I wouldn't have assumed it would do that (or something comparable) now and forever unless I saw it documented somewhere. So I might still be tempted to symmetrize explicitly if I really depended on exact symmetry.
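(A quick sanity check of both points, rather than relying on the syrk mapping staying that way; the explicit symmetrization is the defensive fallback discussed above:)

```julia
using LinearAlgebra

A = randn(10, 10)
Q = A' * A
@assert issymmetric(Q)  # currently holds: the product lowers to syrk plus a mirror copy
Qs = (Q + Q') / 2       # explicit symmetrization, exact in floating point here
@assert issymmetric(Qs)
```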

As an aside, note that this is statistically equivalent to

Q = sprand(n, n, 0.5, randn)

which is much more efficient for larger matrices and lower densities (where sparse-matrix data structures pay off). Of course, in this case it is even more likely to have zero eigenvalues (which roundoff errors may make slightly negative).
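(Putting this together into the sparse PSD construction the question asked about, as a sketch; the size and density below are arbitrary:)

```julia
using LinearAlgebra, SparseArrays

n = 200
A = sprand(n, n, 0.02, randn)  # sparse matrix with iid Gaussian nonzeros
Q = A' * A                     # sparse PSD, likely rank deficient at low density
λmin = eigmin(Symmetric(Matrix(Q)))  # dense check; tiny negative λmin is roundoff
```

At this density most columns of A are nearly empty, so Q is genuinely rank deficient and λmin sits at (or just below, due to roundoff) zero.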

I remember very specifically that yesterday, without the (Q + Q')/2 trick, the symmetry was not recognized. I will have another look – I hope I am not misleading anyone.

No, syrk does not do such a mirror/copy (at least not in the reference implementation), although in this context (Q'*Q) the Julia wrapper does. The BLAS convention is that only the upper or lower triangle of the result is defined, so no copy is needed. Whereas in C or Fortran BLAS symmetry is only guaranteed by contract, in Julia it could be enforced by the type system, viz. by returning Symmetric in this context, which would also avoid the copy. @stevengj do you know if that option is unwise or has been discussed somewhere?
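(The type-level enforcement described here is already available manually by wrapping the product yourself; a sketch:)

```julia
using LinearAlgebra

A = randn(10, 10)
S = Symmetric(A' * A)  # symmetry asserted at the type level; only one triangle is read
vals = eigvals(S)      # dispatches to a symmetric eigensolver, so eigenvalues are real
```

Wrapping in Symmetric also means downstream code like eigen or cholesky dispatches to the symmetric LAPACK routines automatically.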