Linear algebra of sparse matrices on GPU

Hello everyone,
I have some code that does linear-algebra computations using LinearAlgebra and SparseArrays.
How can I run these computations on the GPU? Should I use CUDA.jl, and which linear-algebra functions does it support? In my case I need kron, tr, eigvals, and exp (the matrix exponential of dense matrices, of course), plus ordinary multiplication and addition. I also use multithreading packages such as FLoops.jl for some loops in my code; how should I adapt those for the GPU? I am completely new to this topic.
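For context, here is a minimal sketch of the kind of CPU computation I want to port; the matrix sizes and values are just placeholders:

```julia
using LinearAlgebra, SparseArrays

# Placeholder sparse matrices standing in for my real data.
A = sprand(4, 4, 0.5)
B = sprand(4, 4, 0.5)

K = kron(A, B)          # Kronecker product (result stays sparse)
t = tr(K)               # trace
C = Matrix(K)           # densify before eigvals
vals = eigvals(C)       # eigenvalues of the dense matrix
E = exp(Matrix(A))      # matrix exponential (dense matrices only)
S = A * B + A + B       # multiplication and summation
```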
Any help would be appreciated.