Matrix Exponential of a GPU Matrix

I am trying to compute the matrix exponential of a matrix stored on the GPU (via CUDA.jl), but I get the error:

ERROR: ArgumentError: cannot take the CPU address of a CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}

Minimal working example:

using CUDA
A = cu([1.0 0; 0 1])
CUDA.exp(A)  # gives the error message above
exp(A)       # gives the error message plus a scalar indexing warning in the REPL

I’m sure there is some standard way to compute the exponential of a GPU matrix, but I can’t tell what I’m doing wrong.

ExponentialUtilities.jl’s exponential! is GPU-compatible, IIRC.
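
Something along these lines, perhaps (a minimal sketch, assuming exponential! dispatches on a CuMatrix; it works in place, so pass a copy if you want to keep A):

using CUDA, ExponentialUtilities
A = cu([1.0 0; 0 1])
E = exponential!(copy(A))  # exponential! overwrites its argument, hence the copy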

Sadly, exponential! throws the same error.
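
In the meantime the only workaround I have is a CPU round trip, which gives the right result but defeats the point of keeping the matrix on the GPU (sketch; exp(Array(A)) is just the ordinary CPU matrix exponential):

using CUDA
A = cu([1.0 0; 0 1])
E = cu(exp(Array(A)))  # copy to host, compute exp on the CPU, move the result back to the GPU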