I was thinking about how inv is almost always the wrong thing to use in code, and how this primarily trips up people without much linear algebra experience. I think the following is almost enough to give beginners what they should want while still looking like what they asked for. The biggest missing piece is probably printing, where we would actually have to invert the matrix.
using LinearAlgebra

struct InverseMatrix{T <: AbstractMatrix}
    mat :: T
end
Base.inv(A :: T) where {T <: AbstractMatrix} = InverseMatrix{T}(A)
Base.inv(A :: InverseMatrix) = A.mat
Base.transpose(A :: InverseMatrix) = InverseMatrix(transpose(A.mat))
# inv(M) * k == inv(M / k), so scale the wrapped matrix rather than the wrapper
Base.:*(A :: InverseMatrix, k :: Number) = InverseMatrix(A.mat / k)
Base.:*(k :: Number, A :: InverseMatrix) = InverseMatrix(A.mat / k)
# inv(M) * inv(N) == inv(N * M)
Base.:*(A :: InverseMatrix, B :: InverseMatrix) = InverseMatrix(B.mat * A.mat)
# inv(M) * v is a linear solve, not multiplication by an explicit inverse
Base.:*(A :: InverseMatrix, v) = A.mat \ v
LinearAlgebra.det(A :: InverseMatrix) = 1 / det(A.mat)
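With those definitions in scope, a quick sanity check (the small matrix here is just made up for illustration) that inv(A) * b routes through a solve rather than an explicit inverse:

using LinearAlgebra
A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
inv(A) * b ≈ A \ b   # true: the * method dispatches to A.mat \ b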
Does this seem like a good candidate for putting in Base?
If you’ve computed the inverse, you can multiply it by many things cheaply. Without storing the factorization produced by \, you’d end up making that much slower.
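To illustrate the repeated-solve case (the sizes and matrices here are made up, and lu is just one possible factorization): factor once, then reuse the factors for every right-hand side.

using LinearAlgebra
A = rand(1_000, 1_000) + 1_000I        # well-conditioned dense matrix
F = lu(A)                              # factor once, O(n^3)
xs = [F \ rand(1_000) for _ in 1:50]   # each solve reuses L and U, roughly O(n^2)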
Yeah, the way I look at it, the approach I’ve highlighted would be worse for a few niche use cases but would make the common cases for beginners faster and more accurate. Is there any real-world use case where you would want to index into an inverted matrix?
You almost always want to store the factorization of a matrix, since using it to solve an equation system will be faster if the original matrix is structured somehow, which allows fast operations such as back-substitution (often close to linear in the dimension of the matrix, though this varies).
An inverse will almost always be a dense matrix, and the cost of multiplying a vector by a dense matrix is quadratic in the dimension.
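As a concrete (made-up) structured example: a tridiagonal system solves in roughly linear time, while its explicit inverse is dense, so applying it costs quadratic time per vector.

using LinearAlgebra
n = 2_000
T = Tridiagonal(rand(n - 1), rand(n) .+ 2, rand(n - 1))  # structured, O(n) storage
b = rand(n)
x = T \ b                 # tridiagonal solve, roughly O(n)
Tinv = inv(Matrix(T))     # dense n×n inverse; every Tinv * b afterwards is O(n^2)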
There are definitely cases where you need the elements of the inverse, but I can’t think of one off the top of my head. For certain matrix sizes, dense matrix–vector multiplication can also be much faster than specialized factorizations, thanks to fast BLAS kernels; one case I’ve noticed in particular is BlockBandedMatrices.jl, where solving using a QR factorization (in my cases) is slowed down by unfavourable block sizes.
If you do OLS under homoskedasticity and want the estimated asymptotic variance of a single element then that would be an individual element from the inverse of a matrix. If you want the estimated asymptotic variance of a linear combination of the OLS estimator then you want multiple (at least four) such elements.
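A sketch of that OLS case (all data here is simulated, and homoskedastic errors are assumed):

using LinearAlgebra
X = randn(500, 3); y = randn(500)       # made-up design matrix and response
β̂ = X \ y
σ̂² = sum(abs2, y - X * β̂) / (size(X, 1) - size(X, 2))
V = σ̂² * inv(X' * X)                    # estimated covariance of β̂ under homoskedasticity
V[2, 2]                                 # variance of a single coefficient: one element of the inverse
c = [1.0, -1.0, 0.0]
c' * V * c                              # variance of a linear combination: several such elements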
Beginners often use eye and inv to initialize matrices whose values they later modify, etc. If the result doesn’t behave like a matrix, that would error. One can also use inv when the inverse matrix is placed as a submatrix of a larger matrix, so cat, hcat, etc. have to work.
But maybe the same approach can be taken as for transpose: an object is returned that only becomes a standard matrix upon copy or an explicit call to Matrix.
It does make sense to have both “eager” and “lazy” matrix inversions IMO, similar to Iterators.reverse vs. Base.reverse. The lazy one would hold a factorization and might not implement indexing, just like Iterators.reverse does not provide indexing.
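A rough sketch of what that could look like (the LazyInv name, the lazyinv helper, and the choice of lu are all assumptions, not an existing API):

using LinearAlgebra

struct LazyInv{F <: Factorization}
    fact :: F   # holds a factorization, never the explicit inverse
end
lazyinv(A :: AbstractMatrix) = LazyInv(lu(A))      # hypothetical lazy counterpart to inv
Base.:*(A :: LazyInv, v :: AbstractVector) = A.fact \ v
Base.Matrix(A :: LazyInv) = inv(A.fact)            # materialize the dense inverse only on request
# no getindex defined, mirroring how Iterators.reverse provides no indexing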
The inverse of a covariance matrix often appears in some areas of statistics, as @Joris_Pinkse pointed out. In multivariate mixed linear models, the system of equations involves the inverse of the covariance matrix. In quantitative genetics, a typical example is a genomic relationship matrix, which is dense and often large, and its inverse is really needed in applications.
I agree that the inverse is often misused to solve equation systems, and a lazy system for the inverse is understandable to me. But I wonder whether it is a good idea to apply such a trick systematically, given our “niche” purposes. I think inv should compute the inverse, because that is the purpose of the function.
You make some very good points. However, take Adjoint wrappers - which solve a similar problem - for example: most of the time you don’t want to materialize the adjoint, but for the “niche” cases where you do, one can simply call Matrix on the lazy wrapper. I could see an approach like this being convenient for lazy factorizations. Of course, unlike adjoints, the choice of how one factorizes the matrix under the hood is much more important, but perhaps these API details could be ironed out.
Edit: I see that @baggepinnen has already made this point! Nevertheless, I do rather like this behaviour with Adjoint, but I see the potential downsides here.
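For reference, the Adjoint behaviour referred to here (a small made-up example):

using LinearAlgebra
A = rand(3, 3)
B = A'             # lazy Adjoint wrapper, no copy
B[1, 2]            # indexing works through the wrapper
M = Matrix(B)      # materializes a plain Matrix only when explicitly asked for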
My thought here is that something like 80% of performance-sensitive uses of inv are uses like inv(A)*B where the person didn’t know that \ exists. 15% are probably people using it to save time when doing many inversions but who don’t know about factorizations or which one to use. The remaining 5% are probably edge cases where actually computing the inverse matrix is needed. As such, I think the Adjoint approach is probably the best bet.
The biggest downside in my mind is that it is somewhat breaking (although probably not for most use-cases). I’ll see if I can make a more detailed approach and maybe make a PR.
The main problem is that the more array wrappers we have, the more nightmarish it becomes to deal with all the method ambiguities that arise in method definitions.
The biggest thing this code is missing is the ability (if desired) to store a factorization and use that. One thing that makes the method ambiguity less of a problem is that inv doesn’t nest with transpose or adjoint. As my implementation showed, we can always make Inv the outermost layer.
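A minimal sketch of that extension (the Inv name, field layout, and default lu choice are assumptions), keeping the inverse wrapper as the outermost layer so transpose is pushed inside instead of nesting outside:

using LinearAlgebra

struct Inv{T <: AbstractMatrix, F <: Factorization}
    mat  :: T
    fact :: F   # cached factorization used for every solve
end
Inv(A :: AbstractMatrix) = Inv(A, lu(A))
Base.:*(A :: Inv, v :: AbstractVector) = A.fact \ v
# transposing re-wraps the transposed matrix, so Inv stays outermost
Base.transpose(A :: Inv) = Inv(transpose(A.mat))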
Oh jeez, somehow I forgot that you had already written a demo up top. I came back to the thread a while after reading it, so it had been a little bit since I saw your code, oops.
The core of LazyArrays.jl is about representing a “concrete” call graph and dispatching to appropriate fused kernels. So, it’s probably the best place to implement the specialized methods suggested in the OP/comments.