Oh, that’s some fancy stuff. Thanks for linking!
In addition to the comments that address various specific caveats, I generally don’t think it is a good idea for languages/libraries to pretend to know what users want and silently override their choices.
It is true that `inv` is not needed 99% of the time, but occasionally it is needed, and then I want to be able to access it without any hassle.
While I sympathize with users who were never exposed to the basic caveats of numerical linear algebra (or, for that matter, floating-point computation), we simply cannot pad every sharp corner of scientific computing. Some investment is required.
Matrix factorizations from the LinearAlgebra package are already implemented so that they behave like inverses.
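To make this concrete, here is a minimal illustration (my own example, not from the thread) showing that a factorization object supports the same left-division syntax as a matrix, so it can stand in for the inverse:

```julia
using LinearAlgebra

A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]

F = lu(A)          # factorize once; reusable for many right-hand sides
x1 = F \ b         # solve via triangular substitutions on the factors
x2 = inv(A) * b    # forms the dense inverse explicitly: slower, less accurate

x1 ≈ x2            # same answer here, but `F \ b` is the preferred idiom
```

The factorization can also be reused for `det`, `logdet`, and further solves, which is exactly why it is usually the better tool than an explicit inverse.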
I think the better approach would be to put a note into the documentation of the `inv` function saying that in most cases a matrix factorization is better than an explicit inverse.
A linter rule that checks for `inv` use would also be nice. That way users can learn why not to use the inverse.
I have been playing around with a lazy inverse type for a while and believe that it has merit in its own right. As some have previously mentioned, seeing it as a replacement for `inv` also leads to problems. Rather, I’ve noticed that it can make certain implementations easier and enable others.
For example, some implementations of higher-level linear algebra types like the Woodbury identity and Kronecker products define matrix multiplication and matrix solves separately, or store a dense inverse. However, using a lazy inverse, it is possible to define the solve as a multiplication of lazy inverses, without the need for any more code. Using my sketch for a lazy inverse here, I have written prototypes for the Woodbury identity (written with lazy inverse) and Kronecker products (written with lazy inverse) with this philosophy. Lazy inverses can also have factorization objects as their “parent” for efficiency and stability.
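The core idea can be sketched in a few lines. This is my own illustrative sketch, with hypothetical names, not the linked prototype’s API:

```julia
using LinearAlgebra

# Illustrative sketch of a lazy inverse wrapper (names are hypothetical).
struct LazyInverse{M}
    parent::M              # a Matrix, or a Factorization for stability
end

# Multiplying by the lazy inverse is a solve with the parent…
Base.:(*)(L::LazyInverse, x::AbstractVecOrMat) = L.parent \ x
# …and solving with the lazy inverse is a multiplication with the parent
# (meaningful when the parent is a plain matrix).
Base.:(\)(L::LazyInverse, x::AbstractVecOrMat) = L.parent * x
# Inverting just unwraps; nothing dense is ever materialized.
Base.inv(L::LazyInverse) = L.parent

A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
Linv = LazyInverse(lu(A))  # a factorization as the "parent"
Linv * b                   # equivalent to A \ b, via triangular solves
```

With a type like this, a Woodbury-style solve can be expressed as a product involving lazy inverses, and the actual work is deferred to ordinary solves against the parents.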
Further, it could have applications in defining Zygote adjoints for linear algebra operations. For example, the adjoints for `logdet` and `inv` here call the dense inverse, which is correct, though very likely not what should be done for the sake of numerical stability. A lazy inverse at this place could postpone the evaluation until a regular matrix solve can be used.
I’ve also encountered other applications for lazy inverses but believe the above are the most interesting.
Indeed.
Also, “starting with a suboptimal approach” is part of the learning process (whether suboptimal conceptually or in view of language features).
After all, Discourse and other forums are full of examples where someone posts a slow snippet that magically becomes ten times faster after a few people chime in.
So I think `inv` has its place. If people want to use it, let them.