Determinant of CUDA matrix?

Hello,

I have so far been able to use CUDA without having to dig into the actual package API (besides cu(m)), just thanks to multiple dispatch. Today I need to compute the determinant of a matrix and I would like my code to work both on the GPU and the CPU. However, the det function from LinearAlgebra is not specialized for CuArray and throws a CPU address error. I guess this is because det calls a function from a C library (LAPACK?).

I have been searching for a bit now and I just can’t find a CUDA implementation of det. So, I’d like to know if there exists a linear algebra package for CUDA that I don’t know of, or if I’ll have to learn how to efficiently compute a determinant to implement it myself and learn the internals of CUDA.jl.
I’m not sure I have the skills to do that.

MWE:

using CUDA, LinearAlgebra
m = rand(10, 10) |> cu
det(m)  # errors: no GPU-specialized method

You can try computing the LU factorization and then taking the product of the diagonal elements of the U factor:

julia> M = rand(10,10);

julia> Mgpu = cu(M);

julia> det(M) # cpu
-0.10243723110926993

julia> prod(diag(LinearAlgebra.LAPACK.getrf!(M)[1])) # cpu
-0.10243723110926993

julia> prod(diag(CUDA.CUSOLVER.getrf!(Mgpu)[1])) # gpu
-0.10243723f0

(Might be necessary to think more carefully about the sign though!)
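To spell out the sign issue: prod(diag(U)) is det(A) only up to the sign of the row permutation that partial pivoting introduces. A minimal CPU sketch (assuming LAPACK's getrf! return convention of (A, ipiv, info); det_via_lu is a hypothetical helper name, and the same pivot-counting should carry over to the CUSOLVER path):

```julia
using LinearAlgebra

# Sketch: determinant via LU with the permutation sign made explicit.
# getrf! overwrites its argument with the L/U factors and returns the
# pivot vector ipiv; every pivot with ipiv[i] != i is one row swap,
# and each swap contributes a factor of -1 to the determinant.
function det_via_lu(A::AbstractMatrix)
    F, ipiv, info = LinearAlgebra.LAPACK.getrf!(copy(A))
    info == 0 || error("LU factorization failed")
    swaps = count(i -> ipiv[i] != i, eachindex(ipiv))
    return (isodd(swaps) ? -1 : 1) * prod(diag(F))
end
```

On the GPU, CUSOLVER's getrf! also returns a pivot vector, so the same mismatch count (after copying the pivots to the host, or via a device-side reduction) gives the sign.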


Also, there is an open issue about determinant calculation on GPU here: [Feature Request] Determinant Function · Issue #110 · JuliaGPU/CUDA.jl · GitHub

Perhaps worth figuring out the details and making a PR.


Thanks, that’s what I was hoping for!
It seems to work as far as I can tell. What sign issue are you referring to?
If this implementation suffices, I could open a PR in CUDA.jl so I won’t have to commit type piracy :grimacing:

Out of curiosity, for what application do you need the determinant?

(Note, by the way, that for a large matrix the determinant can easily overflow the maximum floating-point value; you might consider using logdet instead.)
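Building on the getrf! suggestion above, the overflow-safe variant is to sum logs of the U diagonal instead of taking the product. A hedged sketch (logabsdet_via_lu is a hypothetical helper name; on the CPU, LinearAlgebra.logabsdet already does this for you):

```julia
using LinearAlgebra

# Sketch: log|det(A)| from the LU factor's diagonal. Summing logs of
# absolute values avoids the overflow/underflow that prod(diag(U))
# hits for large matrices.
function logabsdet_via_lu(A::AbstractMatrix)
    F, _, info = LinearAlgebra.LAPACK.getrf!(copy(A))
    info == 0 || error("LU factorization failed")
    return sum(x -> log(abs(x)), diag(F))
end
```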


I’m trying to implement a multivariate Gaussian neural network for a deep reinforcement learning algorithm. It will output a mean vector and a vector containing the elements of a lower-triangular Cholesky factor, from which I then reconstruct the covariance matrix.

During training I need to take the gradient of the logpdf of the approximated MvNormal, which indeed requires the logdet. The sign issue is not a problem for me, since the covariance matrix is positive semi-definite, so I can simply take the abs of the implementation proposed by @carstenbauer.

Since it’s for a deep learning application, I need the logpdf function to be differentiable and preferably GPU-compatible. Unfortunately that’s not the case with Distributions.jl, so I have to implement it from scratch. I’m making progress, but my next issue is that CUSOLVER.getrf! has no adjoint for Zygote to differentiate - that’s a problem for another post.
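A side note on this specific case: since the network already outputs the Cholesky factor, you may not need getrf! at all. If Σ = L Lᵀ with L lower triangular, then logdet(Σ) = 2 Σᵢ log Lᵢᵢ. A sketch (logdet_from_chol is a hypothetical helper; it reduces to sum/log/diag, which are in principle both Zygote-differentiable and GPU-executable):

```julia
using LinearAlgebra

# Sketch: if Σ = L * L' with L lower triangular and positive diagonal
# (the usual Cholesky parameterization), then
#   logdet(Σ) = logdet(L) + logdet(L') = 2 * sum(log, diag(L)).
# No factorization call needed at logpdf time.
logdet_from_chol(L) = 2 * sum(log, diag(L))
```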

You can easily differentiate the determinant (or log determinant) directly. See these slides from our matrix calculus course.
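The key identity behind this, checked numerically below (the matrices here are arbitrary examples, not from the thread): d(logdet A) = tr(A⁻¹ dA), so the gradient of logdet with respect to A is inv(A)ᵀ.

```julia
using LinearAlgebra

# Numerically checking the matrix-calculus identity
#   d(logdet A) = tr(inv(A) * dA)  =>  ∇_A logdet(A) = inv(A)'
A  = [2.0 1.0; 1.0 3.0]           # arbitrary positive-definite example
G  = inv(A)'                      # claimed gradient
dA = 1e-6 * [0.3 -0.7; 0.5 0.2]   # small arbitrary perturbation
lhs = logdet(A + dA) - logdet(A)  # actual change in logdet
rhs = sum(G .* dA)                # predicted first-order change, tr(inv(A)*dA)
# lhs and rhs agree up to second order in dA
```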


You should probably also check out TuringLang/DistributionsAD.jl: Automatic differentiation of Distributions using Tracker, Zygote, ForwardDiff and ReverseDiff (github.com)


Oh, glad I didn’t go all the way through the trouble before knowing about this, thanks!

@stevengj Thank you for the resources, I sure could use a more advanced Linear Algebra / Matrix Calculus course some time soon.