The LinearAlgebra module in the standard library has several special matrix types that represent structured matrices, like symmetric or upper-triangular ones. This allows for reduced memory usage, since you can avoid storing the same number twice or storing a bunch of zeros. It also allows for faster calculations in some cases by skipping multiplications by zero (and, I think, in some cases by reusing the result of a multiplication?).
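For example, here is a small sketch of the kind of dispatch these wrapper types enable (the specific matrices are just illustrative):

```julia
using LinearAlgebra

A = [1.0 2.0;
     2.0 3.0]
S = Symmetric(A)          # wrapper type: only one triangle of A is ever read
U = UpperTriangular(A)    # only the upper triangle is referenced; the rest is implicitly zero

b = [1.0, 1.0]
x = U \ b                 # dispatches to a fast triangular back-substitution, not general LU
```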

I was surprised to find that LinearAlgebra does not include an antisymmetric matrix type (sometimes called skew-symmetric). I thought that was a pretty common special matrix type. So my question is: what’s the idiomatic way to deal with antisymmetric matrices in Julia? Should I just make a dense matrix and not worry about it? Could/should I make my own structured antisymmetric matrix implementation? (I didn’t see an existing one, but I didn’t look that hard either.)
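For what it's worth, rolling your own wrapper via the `AbstractArray` interface is not much code. Here is a minimal sketch — the `SkewSymmetric` name and design are my own invention, not an existing package type, and a serious implementation would store only one triangle and add many more specialized methods:

```julia
using LinearAlgebra

# Hypothetical minimal wrapper: stores the full parent matrix but checks
# antisymmetry on construction, and can host specialized methods.
struct SkewSymmetric{T,S<:AbstractMatrix{T}} <: AbstractMatrix{T}
    data::S
    function SkewSymmetric(A::AbstractMatrix)
        A == -A' || throw(ArgumentError("matrix is not antisymmetric"))
        new{eltype(A),typeof(A)}(A)
    end
end

Base.size(A::SkewSymmetric) = size(A.data)
Base.getindex(A::SkewSymmetric, i::Int, j::Int) = A.data[i, j]

# A couple of "free" specializations the structure gives you:
LinearAlgebra.tr(A::SkewSymmetric{T}) where {T} = zero(T)  # the diagonal is zero
Base.adjoint(A::SkewSymmetric) = SkewSymmetric(-A.data)    # A' == -A (real case)

S = SkewSymmetric([0.0 2.0; -2.0 0.0])
tr(S)   # 0.0, without touching the entries
```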

Well, disregarding the restriction of this advice to optimization problems, it only allows for symmetric matrix variables, not symmetric coefficient matrices. I cannot see how this could be used for solving, say, Ax = b for x ∈ ℝⁿ while exploiting the antisymmetry of the coefficient matrix A.

There aren’t that many algorithms or software libraries that take advantage of anti-symmetric matrices; they seem to be a fairly specialized corner of numerical linear algebra. (None in LAPACK, as far as I know.)

The one area where it would be nice to have an anti-symmetric matrix type, probably in a package, is for real-antisymmetric eigenvalue problems, which have nice properties (purely imaginary eigenvalues, orthogonal eigenvectors). You can turn them into Hermitian eigenproblems by multiplying by i, but then you pay the price of a complex matrix. (See also these references and the rest of that discussion, and also this paper.)
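Concretely, the multiply-by-i trick looks like this (small sketch; the example matrix is arbitrary). If A is real antisymmetric, then im*A is Hermitian, so its eigenvalues are real and you recover the purely imaginary spectrum of A by dividing by i:

```julia
using LinearAlgebra

A = [ 0.0  2.0 -1.0;
     -2.0  0.0  4.0;
      1.0 -4.0  0.0]      # real antisymmetric: A' == -A

H = Hermitian(im * A)     # im*A is Hermitian — but now the matrix is complex
μ = eigvals(H)            # real eigenvalues of im*A, via the Hermitian solver
λ = -im .* μ              # eigenvalues of A itself: purely imaginary
```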

There are also Cholesky-like factorizations for solving Ax=b with anti-symmetric A, but it might take a lot of optimization effort for them to be competitive with the highly optimized LU factorization in LAPACK.
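In the meantime, the ordinary pivoted LU in the stdlib handles antisymmetric A like any other square matrix; it just doesn't exploit the structure. A quick sketch (note that odd-dimensional real antisymmetric matrices are always singular, since det(A) = det(−Aᵀ) = (−1)ⁿ det(A), so I use n = 4):

```julia
using LinearAlgebra

A = [ 0.0  1.0  2.0  3.0;
     -1.0  0.0  4.0  5.0;
     -2.0 -4.0  0.0  6.0;
     -3.0 -5.0 -6.0  0.0]   # antisymmetric, even-dimensional, invertible
b = [1.0, 2.0, 3.0, 4.0]

F = lu(A)   # ordinary pivoted LU from LAPACK; ignores the antisymmetry
x = F \ b
```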

(A basic challenge here is that naive 3-loop implementations of dense-matrix algorithms tend to be 50x slower than optimized code like LAPACK with a fast BLAS. So you need to get pretty serious about software engineering for an antisymmetric algorithm to be worth it compared to just multiplying by i and using the complex-Hermitian routines.)