Should `inv` be lazy?

I was thinking about how inv is almost always the wrong thing to use in code, and how this primarily trips up people without significant linear algebra experience. I realized that something like the following would be almost enough to let beginners write what they want (inv(A)*b) while actually doing what they should want (A\b). The biggest missing piece is probably printing, where we would actually have to invert the matrix.

using LinearAlgebra

# Lazy wrapper: inv(A) records A instead of computing the inverse.
struct InverseMatrix{T <: AbstractMatrix}
    mat :: T
end

# Note: as a sketch of what Base itself might do, this replaces inv for all matrices.
Base.inv(A :: T) where {T <: AbstractMatrix} = InverseMatrix{T}(A)
Base.inv(A :: InverseMatrix) = A.mat  # inv(inv(A)) == A

Base.transpose(A :: InverseMatrix) = InverseMatrix(transpose(A.mat))

# inv(A) * k == inv(A / k), and likewise for k * inv(A)
Base.:*(A :: InverseMatrix, k :: Number) = InverseMatrix(A.mat / k)
Base.:*(k :: Number, A :: InverseMatrix) = InverseMatrix(A.mat / k)

# inv(A) * inv(B) == inv(B * A)
Base.:*(A :: InverseMatrix, B :: InverseMatrix) = InverseMatrix(B.mat * A.mat)

# inv(A) * v is a solve with A, not a multiplication
Base.:*(A :: InverseMatrix, v) = A.mat \ v
LinearAlgebra.det(A :: InverseMatrix) = 1 / det(A.mat)

Does this seem like a good candidate for putting in Base?

5 Likes

Once you’ve computed the inverse, you can multiply it by many things cheaply. Without storing the factorization that \ computes internally, every such multiplication would refactorize the matrix, making repeated solves much slower.
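To illustrate the difference (a rough sketch; the matrix and right-hand sides are made up):

using LinearAlgebra

A  = rand(1000, 1000)
bs = [rand(1000) for _ in 1:100]

# Solving from scratch each time: every \ redoes an O(n^3) LU factorization.
xs_slow = [A \ b for b in bs]

# Factorize once; each subsequent solve is only the O(n^2) triangular solves.
F = lu(A)
xs_fast = [F \ b for b in bs]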

3 Likes

Yeah, it would probably be a good idea to have the first multiplication with a vector or matrix compute and store a factorization.

1 Like

You’d also have to compute the inverse if you index into it.

1 Like

Yeah, the way I look at it, the approach I’ve highlighted would be worse for a few niche use cases, but would make the common cases faster and more accurate for beginners. Is there any real-world use case where you would want to index into an inverted matrix?

You almost always want to store the factorization of a matrix, since using it to solve an equation system will be faster if the original matrix has some structure, which allows fast operations such as back-substitution (this can be close to linear in the dimension of the matrix for banded or sparse factors, though it varies).

An inverse will almost always be a dense matrix, and multiplying a vector by a dense matrix costs quadratic time in the dimension.

There are definitely cases where you need the elements of the inverse, but I can’t think of one off the top of my head. For certain matrix sizes, dense matrix–vector multiplication can also be much faster than specialized factorizations, thanks to fast BLAS kernels. One case I’ve noticed in particular is BlockBandedMatrices.jl, where solving with a QR factorization (in my cases) is slowed down by unfavourable block sizes.
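As a concrete illustration of the structured case (a sketch; the tridiagonal system here is made up):

using LinearAlgebra

n = 2_000
T = Tridiagonal(rand(n - 1), rand(n) .+ 2, rand(n - 1))  # structured, O(n) storage
b = rand(n)

x  = T \ b     # specialized tridiagonal solve, roughly O(n) work
Ti = inv(T)    # dense n×n matrix: O(n^2) storage and O(n^2) per multiply
x2 = Ti * b
x ≈ x2         # true up to roundoff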

2 Likes

If you do OLS under homoskedasticity and want the estimated asymptotic variance of a single coefficient, then that is an individual element from the inverse of a matrix. If you want the estimated asymptotic variance of a linear combination of the OLS estimator, then you want multiple (at least four) such elements.
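For concreteness, a small sketch of that calculation (the data and variable names are made up):

using LinearAlgebra

# Toy OLS example with homoskedastic errors.
n, k = 1_000, 5
X = [ones(n) randn(n, k - 1)]
y = X * ones(k) .+ randn(n)

beta = X \ y                             # OLS estimate
s2   = sum(abs2, y - X * beta) / (n - k) # estimated error variance
V    = s2 * inv(X' * X)                  # estimated variance matrix of beta

V[2, 2]                                  # variance of one coefficient: a single element of the inverse
c = [0.0, 1.0, -1.0, 0.0, 0.0]
c' * V * c                               # variance of c'beta: uses several such elements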

4 Likes

Beginners often use eye and inv to initialize matrices whose values they later modify, etc. If the result doesn’t behave like a matrix, that would error. One can also use inv when the inverse matrix is placed as a submatrix of a larger matrix, so cat, hcat, etc. have to work.

But maybe the same approach can be taken as for transpose: an object is returned that only becomes a standard matrix upon copy or an explicit call to Matrix.
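A rough sketch of those hooks for the InverseMatrix wrapper from the first post (note that since inv itself now returns the wrapper, materialization has to go through something else, e.g. a factorization):

using LinearAlgebra

# Following the Transpose/Adjoint convention: copy and Matrix materialize.
Base.copy(A :: InverseMatrix)   = inv(lu(A.mat))  # inv(::LU) computes the dense inverse
Base.Matrix(A :: InverseMatrix) = inv(lu(A.mat))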

That makes a lot of sense to me. How hard would it be to implement?

Probably not very hard, but it’s a breaking change so it can’t be done until 2.0

It does make sense to have both “eager” and “lazy” matrix inversions IMO, similar to Iterators.reverse vs. Base.reverse. The lazy one would hold a factorization and might not implement indexing, just like Iterators.reverse does not provide indexing.
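A minimal sketch of what the lazy side of that split could look like (the LazyInverse/lazyinv names are made up; it factorizes up front and deliberately defines no getindex):

using LinearAlgebra

struct LazyInverse{F <: Factorization}
    fact :: F
end
lazyinv(A :: AbstractMatrix) = LazyInverse(lu(A))  # hold a factorization, not the inverse
Base.:*(Ainv :: LazyInverse, b :: AbstractVecOrMat) = Ainv.fact \ b

A, b = rand(4, 4), rand(4)
lazyinv(A) * b ≈ inv(A) * b   # true up to roundoff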

2 Likes

The inverse of a covariance matrix often appears in some areas of statistics, as @Joris_Pinkse pointed out. In multivariate mixed linear models, the system of equations must contain the inverse of the covariance matrix. In quantitative genetics, a typical example is the genomic relationship matrix, which is dense and often large, and its inverse is really needed in applications.

I agree that the inverse is often misused to solve equations, and having a lazy system for the inverse is understandable to me. But I wonder if it is a good idea to apply such a trick systematically, thinking of our “niche” purposes. I think inv should compute the inverse, because that is the purpose of this function.

5 Likes

One could argue it’s the universities’ responsibility to educate scientists and engineers on numerical computing, and not a programming language’s. :joy:

1 Like

You make some very good points. However, take Adjoint wrappers - which solve a similar problem - for example: most of the time you don’t want to materialize the adjoint, but for the “niche” cases where you do, one can simply call Matrix on the lazy wrapper. I could see an approach like this being convenient for lazy factorizations. Of course, unlike adjoints, the choice of how one factorizes the matrix under the hood is much more important, but perhaps these API details could be ironed out.

Edit: I see that @baggepinnen has already made this point! Nevertheless, I do rather like this behaviour with Adjoint, but I see the potential downsides here.
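For reference, the Adjoint behaviour being described (a tiny example):

A  = rand(3, 3)
At = A'          # lazy Adjoint wrapper, no data is copied
M  = Matrix(At)  # explicit call materializes a plain dense Matrix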

1 Like

My thought here is that something like 80% of performance-sensitive uses of inv are uses like inv(A)*B where the person didn’t know that \ exists. 15% are probably people using it to save time when doing many inversions, but who don’t know about factorizations or which one to use. The remaining 5% are probably edge cases where actually computing the inverse matrix is needed. As such, I think the Adjoint approach is probably the best bet.
The biggest downside in my mind is that it is somewhat breaking (although probably not for most use cases). I’ll see if I can work out a more detailed approach and maybe make a PR.

1 Like

It’s pretty easy to hack up an example:

struct Inv{T, A <: AbstractArray{T, 2}} <: AbstractMatrix{T}
    data::A
    Inv(arr::AbstractMatrix{T}) where {T} = new{T, typeof(arr)}(arr)
end

# Materializing falls back to an eager inverse
Base.collect(Minv::Inv) = collect(inv(Minv.data))
Base.collect(Minv::Inv{<:Any, <:Matrix}) = inv(Minv.data)

Base.size(Minv::Inv) = size(Minv.data)
# Indexing materializes the whole inverse (expensive!)
Base.getindex(Minv::Inv, args...) = getindex(collect(Minv), args...)

Base.inv(Minv::Inv) = Minv.data

# Multiplying by inv(M) becomes a solve with M
Base.:(*)(Minv::Inv, b::AbstractVector) = Minv.data \ b
Base.:(*)(a::AbstractVector, Minv::Inv) = a / Minv.data

Base.:(*)(Minv::Inv, b::AbstractMatrix) = Minv.data \ b
Base.:(*)(a::AbstractMatrix, Minv::Inv) = a / Minv.data

# inv(M) / B == inv(M) * inv(B); dividing by inv(M) is multiplying by M
Base.:(/)(Minv::Inv, b::AbstractMatrix) = Minv.data \ inv(b)
Base.:(/)(a::AbstractMatrix, Minv::Inv) = a * Minv.data

Base.:(/)(Minv::Inv, b::AbstractVector) = Minv.data \ b
Base.:(/)(a::AbstractVector, Minv::Inv) = a / Minv.data
Inv([1 2; 3 4])

#+RESULTS:
: 2×2 Inv{Int64,Array{Int64,2}}:
:  -2.0   1.0
:   1.5  -0.5
using BenchmarkTools  # for @btime

A = rand(100, 100)
B = rand(100, 100)
@btime Inv($A) * $B
@btime inv($A) * $B;

#+RESULTS:
:   351.567 μs (5 allocations: 157.28 KiB)
:   519.155 μs (7 allocations: 207.44 KiB)

The main problem is that the more array wrappers we have, the more nightmarish it becomes to deal with all the method ambiguities that arise in method definitions.

5 Likes

The biggest thing this code is missing is the ability (if desired) to store a factorization and use that. One thing that makes the method ambiguity less of a problem is that inv doesn’t nest with transpose or adjoint, so, as my implementation showed, we can always make Inv the outermost layer.
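A rough sketch of that caching behaviour (a hypothetical CachedInv wrapper, separate from the code above; the names are made up):

using LinearAlgebra

# Represents inv(A), but only factorizes A the first time it is needed,
# then reuses the stored factorization for every later solve.
mutable struct CachedInv{A <: AbstractMatrix}
    data :: A
    fact :: Union{Nothing, Factorization}
end
CachedInv(A :: AbstractMatrix) = CachedInv{typeof(A)}(A, nothing)

function factorization(Minv :: CachedInv)
    if Minv.fact === nothing
        Minv.fact = lu(Minv.data)   # pay the O(n^3) cost only once
    end
    return Minv.fact
end

Base.:*(Minv :: CachedInv, b :: AbstractVecOrMat) = factorization(Minv) \ b

A = rand(100, 100)
Ainv = CachedInv(A)
x1 = Ainv * rand(100)   # factorizes here
x2 = Ainv * rand(100)   # reuses the cached factorization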

Oh jeez, somehow I forgot that you had already written a demo up top. I came back to the thread a while after reading it, so it had been a bit since I saw your code :man_facepalming: oops.

No problem. Yours has a couple good things that mine missed.

FYI you can use @~ inv(A) or equivalently Inv(A) from https://github.com/JuliaArrays/LazyArrays.jl. Or LazyArray(@~ inv(A)), if you want to turn this into a legit array.

The core of LazyArrays.jl is about representing a “concrete” call graph and dispatching to appropriate fused kernels. So, it’s probably the best place to implement the specialized methods suggested in the OP/comments.

5 Likes