# Efficient Minors of a Toeplitz Matrix - Is There Demand?

I have had a use case where I need (the FFT of) the sequence of the leading principal minors of a square matrix `A`, i.e. schematically `fft([det(A[1:m, 1:m]) for m in 1:size(A,1)])`. Naively one would expect this to be `O(n^4)` in the size of `A`. In my case, `A` is a Toeplitz matrix. It was a pleasant surprise that the Levinson-Trench-Zohar algorithm (I found a nice summary here) not only computes a single minor in `O(n^2)`, it implicitly runs iteratively over the minors, so a single run on `A` recovers the entire minor sequence I need. I have implemented this algorithm and am really pleased with the result.
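For concreteness, the naive reference computation looks like this (the matrix is an illustrative SPD tridiagonal Toeplitz example of my own, not my actual data; the final FFT would come from, e.g., FFTW.jl):

```julia
using LinearAlgebra

# Naive O(n^4) reference: one O(m^3) determinant per leading principal minor.
A = [2.0 -1 0 0; -1 2 -1 0; 0 -1 2 -1; 0 0 -1 2]
minors = [det(A[1:m, 1:m]) for m in 1:size(A, 1)]
# For this matrix the leading minors are (up to rounding) 2, 3, 4, 5;
# the quantity I need is then a single `fft(minors)` away.
```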

My question is: would it be of interest to the ecosystem for me to contribute this implementation to, say, `ToeplitzMatrices.jl`? Specifically, the points I'd like to double-check are:

1. Is it worthwhile to specialize `LinearAlgebra.det` to use this? I infer that this algorithm is not necessarily stable (though positive definiteness is a sufficient condition), so this might not actually be desirable; or maybe a check-and-fallback would be ideal?
2. The Levinson algorithm is already implemented in `StatsBase` and used by `ToeplitzMatrices`, so perhaps this leads to some duplication in the ecosystem? Yet that implementation only solves `A*x=b`, so I don't see how I can reuse it for determinants.
3. The real advantage came from my use case of needing the sequence of minors, which would not be exposed by just specializing `det`. Are there any thoughts on a proper interface to expose this idiomatically? One thought is to implement the UDL decomposition inside the LTZ algorithm as a new factorization type (unless it already exists), and then read off my minors from the diagonal entries of `D`.
4. As a first-time contributor, if there is any SOP I should follow.
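As a sanity check on the interface idea in point 3: the analogous relationship already holds for the ordinary unpivoted LU factorization, where (with unit-diagonal `L`) the leading principal minors are the cumulative products of `diag(U)`. An LTZ-backed factorization type could expose its minors the same way. The sketch below uses `lu`, not LTZ, purely to illustrate the principle:

```julia
using LinearAlgebra

A = [4.0 1 0; 1 4 1; 0 1 4]   # small SPD Toeplitz example
F = lu(A, NoPivot())          # unpivoted, so diag(F.U) are the true pivots
minors = cumprod(diag(F.U))   # m-th leading minor = product of first m pivots
@assert all(minors .≈ [det(A[1:m, 1:m]) for m in 1:size(A, 1)])
```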

IMO this probably shouldn't exist as an optimization for `det`. `det` doesn't have a lot of optimizations because it's very rarely something you actually want to compute.

It sounds like a potentially nice addition to ToeplitzMatrices.jl (though I'm not a primary maintainer of that package, so my opinion is not dispositive).

> As a first-time contributor, if there is any SOP I should follow.

File a GitHub pull request with your new function/method. Make sure it is documented and has a test.


@stevengj For completeness, is there a "standard convention" on what to do in case of ill-conditioned input? Silently return garbage? Throw a `DomainError`? Switch under the hood to the general `det` implementation (with an `@info` message)? I noticed that things like `inv` of a singular matrix throw a `LAPACKException`, which I don't find very informative.

What do you mean by ill-conditioning in this context? An ill-conditioned matrix still has a perfectly well-defined determinant. Of course, you can't necessarily achieve a small relative error in this case (e.g. if the exact determinant is `0.0` and you return `1e-16`, that's a relative error of infinity), so in that sense the determinant has a large relative condition number, but you should still be able to achieve a small error relative to other relevant scales, like `norm(A)^n`.

It's also a matter of documentation and context. If there is a regime in which you know your algorithm gives inaccurate results, the ideal thing (besides improving your algorithm) is to switch to another algorithm that is more accurate in that case. But suppose that such an algorithm doesn't exist, or is vastly more expensive (e.g. requires arbitrary-precision arithmetic); in that case, depending on your expected users, you might silently return garbage but include a warning in the documentation, or throw an error (if you can identify when garbage results will occur), or do both (throw an error, but have a flag like `check=false` in the `lu` function that disables the error).
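To spell out the `check` pattern from `LinearAlgebra.lu`: by default a failed factorization throws, while `check=false` suppresses the throw and lets the caller query `issuccess` instead. An LTZ routine could mirror this:

```julia
using LinearAlgebra

S = [1.0 2.0; 2.0 4.0]     # singular, so LU hits a zero pivot
F = lu(S; check = false)   # no throw; the failure is recorded in F
@assert !issuccess(F)      # caller decides how to handle it
# lu(S) with the default check = true would throw a SingularException here.
```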

I meant a matrix that's not positive-definite, which is a prerequisite for the LTZ algorithm.

The implementation can naturally identify when this occurs: it successively computes the minors' determinants, and one of them would be zero (or test below a threshold). (The failure arises from a division by the latest minor determinant at each iteration.)
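A minimal sketch of that guard (the function name, the default tolerance, and the choice of `SingularException` are all assumptions for illustration, not an existing API):

```julia
using LinearAlgebra  # for SingularException

# Hypothetical check inside the LTZ recursion at step m: `d` is the latest
# leading-minor determinant, which the next step would divide by.
function check_minor(d, m; tol = 0.0)
    abs(d) ≤ tol && throw(SingularException(m))  # or record failure for issuccess
    return d
end
```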

The flag option seems good, but it differs from the semantics of `det`. Perhaps that more strongly motivates implementing a UL factorization via LTZ, after which `det` of the factorization is trivial. [Sidebar: are UL factorizations supported in Julia? My limited understanding is that the algorithm cannot be reformulated to produce an LU factorization.]

Not directly, as far as I know, because they aren't implemented in LAPACK. But you can get the UL factorization by doing an LU factorization of a permuted matrix.
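To make the permutation trick concrete: with the exchange matrix `J` (ones on the anti-diagonal), `J*A*J` reverses the rows and columns of `A`, and flipping the factors of an unpivoted LU of `J*A*J` back yields a UL factorization of `A`. A sketch (the example matrix is arbitrary; unpivoted LU can of course fail for general input):

```julia
using LinearAlgebra

A = [4.0 1 0; 1 4 1; 0 1 4]
n = size(A, 1)
J = reverse(Matrix{Float64}(I, n, n); dims = 1)  # exchange (flip) matrix

F = lu(J * A * J, NoPivot())  # unpivoted LU of the row/column-reversed matrix
U = J * F.L * J               # flipping a lower-triangular factor makes it upper
L = J * F.U * J               # ...and vice versa
@assert istriu(U) && istril(L) && U * L ≈ A
```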