Dear all,
I might be wrong, but in the plethora of BLAS and LAPACK functions, I was not able to find a function of the type
inv(dest, X)
where X is the matrix to invert and dest is the destination.
Any suggestions?
Are you looking for dest = inv(X)?
You may be able to use Base.LinAlg.LAPACK.getri!, but note that it overwrites its argument, so you may want to copy it first. I am not aware of a LAPACK routine that puts the inverse or the factorization into a separate matrix; they usually overwrite the input when that makes sense.
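A rough sketch of how that would look (untested, 0.6-era names; getri! needs the pivots produced by getrf!, and dest is assumed to be pre-allocated with the same size as X):

copy!(dest, X)                              # work on a copy so X is preserved
_, ipiv = Base.LinAlg.LAPACK.getrf!(dest)   # LU-factorize dest in place, keep the pivots
Base.LinAlg.LAPACK.getri!(dest, ipiv)       # overwrite dest with inv(X)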
You can use A_ldiv_B!(A, X) after initializing the contents of X to the identity matrix.
However, I would first make sure that you really want to compute the inverse matrix. Most of the time, you should just compute LU = lufact(A) (or lufact! to do it in-place); almost anything you might want to do with the inverse matrix can be done about as efficiently with the LU factorization, which saves you the added expense of computing the inverse.
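For completeness, here is a sketch of the identity-matrix approach combined with lufact (0.6-era names, untested; dest is assumed to be pre-allocated with the same size as A):

F = lufact(A)              # or lufact!(A) to factorize A in place
dest .= 0
for i = 1:size(A, 1)
    dest[i, i] = 1         # set dest to the identity matrix
end
A_ldiv_B!(F, dest)         # dest now holds A⁻¹ (though using F directly is usually better)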
julia> N=3; dest = zeros(N,N); p1 = pointer(dest); dest=inv(rand(N,N)); p2 = pointer(dest); p1==p2
false
It is not what I’m looking for as it creates a copy.
As the matrix to be inverted is posdef, the following (thanks to @carlobaldassi)
function inplaceinverse!(dest, source)
    copy!(dest, source)                   # copy source into dest so source is preserved
    Base.LinAlg.inv!(cholfact!(dest))     # Cholesky-factorize dest in place and invert it
end
suits my needs. Thanks to @stevengj for his suggestion, but unfortunately I need the diagonal of the inverse of a posdef matrix. Maybe (though this is really a linear-algebra question) there is a smarter strategy to obtain the diagonal of the inverse of a posdef matrix that is faster and requires less memory.
That sometimes happens when the positive definite matrix was obtained from some other matrix, and you work with the latter directly, e.g. using a factorization.
Would it make sense to have an inv!(dest, X) function in Base?
There are iterative methods for this, but they are most beneficial when your matrix is sparse (or has some other special structure that lets you compute matrix × vector quickly).
See also this discussion: matrices - Find diagonal of inverse matrix - Mathematics Stack Exchange. With direct methods for dense matrices like Cholesky, it doesn’t seem like you can save more than a factor of two compared to computing the whole inverse matrix, and in practice it will probably be less than that because of the difficulty of beating BLAS.
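To illustrate the kind of saving that is available: with a Cholesky factorization A = LLᵀ, the diagonal of A⁻¹ equals the squared column norms of L⁻¹, so only a triangular inverse is needed. A sketch (written with post-0.7 LinearAlgebra names for brevity, untested):

using LinearAlgebra
L    = cholesky(A).L                   # A = L*L'
Linv = inv(L)                          # triangular inverse, cheaper than a full inv(A)
d    = vec(sum(abs2, Linv; dims = 1))  # d[i] == (A⁻¹)[i,i]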
For an m×m SPD matrix A, pre-allocated matrices Acopy and X of the same size, and a pre-allocated array d of length m, a straightforward and fast way to extract the diagonal of the inverse in-place is:
Acopy .= A # make a copy if you don't want to overwrite A
X .= 0
for i = 1:m; X[i,i] = 1; end # set X to identity matrix
C = cholfact!(Acopy)
A_ldiv_B!(C, X)
for i = 1:m; d[i] = X[i,i]; end # extract diagonal of A⁻¹
I think it would be useful to have inv!(dest, X). I don’t know if, after two years, there is now another way to do this?
Similar to this post, there is LinearAlgebra.inv! (currently unexported and undocumented), which works on factorization objects. So you can do LinearAlgebra.inv!(lu!(A)), for example, to perform in-place inversion. Or use cholesky! instead of lu! if you have an SPD matrix. If you want a different destination, pass copyto!(dest, A) instead of A.
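For example (a minimal sketch; dest is assumed to be pre-allocated with the same size as A):

using LinearAlgebra
copyto!(dest, A)                       # keep A intact; skip this if overwriting A is fine
Ainv = LinearAlgebra.inv!(lu!(dest))   # in-place inversion; the result ends up in dest's storage
# use cholesky!(dest) instead of lu!(dest) if A is SPD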
I’m computing the inverse of a Tridiagonal matrix, and I want to place it in a pre-allocated array. In this case I cannot use an in-place inverse. I would need inv!(dest, X) or something equivalent. Any suggestions?
Don’t do this. Use the LU factorization for whatever you are trying to do.
You can solve N×N tridiagonal systems in O(N) operations. The inverse, on the other hand, is a dense matrix that throws this advantage away.
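For example (a small sketch with made-up data):

using LinearAlgebra
N = 5
T = Tridiagonal(rand(N - 1), rand(N) .+ 2, rand(N - 1))  # some tridiagonal system
b = rand(N)
x = T \ b   # O(N) solve via the specialized tridiagonal algorithm; no dense inverse is formed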
I am doing a weighted linear least squares regression and need to compute error bars. Unfortunately at some point there is a tridiagonal matrix that needs to be inverted (and I don’t see a way around this inversion).
Unless you are using A^{-1} directly, any A^{-1}b or similar is best implemented as A \ b.
Calculating an inverse of a matrix is the numerical equivalent of code smell emanating from a medium-size trout that has been under the sofa for two weeks at 20°C.
I am.
If you are interested, this is what I’m trying to do: Weighted least squares - Wikipedia
Never use OLS formulas for computation. For WLS, there are much better options, e.g. premultiplying by the relevant Cholesky factor or its inverse (depending on details and how you obtain W).
W is the inverse of the covariance matrix of the observation errors (the errors are not independent). How do you suggest I compute the covariance matrix of the parameters (denoted M^\beta in the Wikipedia link)? (I only need the variances.)
Don’t; compute the Cholesky factor of M^\beta instead. If you have W = LL', it is fairly simple matrix algebra to work it out, but it may be better to start with the QR decomposition of the demeaned observations.
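A sketch of what I mean (untested; X, y, and W stand for your design matrix, observations, and weight matrix, i.e. the inverse error covariance):

using LinearAlgebra
L  = cholesky(W).L     # W = L*L'
Xw = L' * X            # "whitened" design matrix
yw = L' * y
F  = qr(Xw)
β  = F \ yw            # WLS estimate, no (X'WX)⁻¹ ever formed
# variances: diag(M^β) = diag((R'R)⁻¹) = squared row norms of inv(R),
# so only a small triangular inverse is needed
Rinv = inv(UpperTriangular(F.R))
varβ = vec(sum(abs2, Rinv; dims = 2))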
I have read many times the recommendation to avoid matrix inversion and use LU instead (or whatever factorization is applicable). However, in the particular case of linear regression, it is always pointed out that if the parameter errors (variances) are needed, the inversion is generally unavoidable.
For example, from Wikipedia (there might be better sources but I’m too lazy to find them now):
Although an explicit inverse is not necessary to estimate the vector of unknowns, it is unavoidable to estimate their precision, found in the diagonal of the posterior covariance matrix of the vector of unknowns.
Also, in general parameter estimation by maximum likelihood, the inverse of the Hessian of the likelihood allows one to estimate parameter errors.
As far as I am aware, these are applications where matrix inversion is unavoidable (but please correct me if I’m wrong).
I just want to make the case that inv!(dest, A) could be useful in situations like this (especially if A is sparse, so an in-place inv!(A) is not useful), and I suspect it should not be so hard to have?