Complex numbers on GPU

Hello,

I am trying to work with complex numbers on the GPU.

a=[0 1;-1 0]
@time f=eigfact(a) #takes 0.000092 seconds
a=convert(Array{Complex64,2},a)
@time f=eigfact(a) #takes 0.062753 seconds

I then tried to move the computation to the GPU, which does not seem to work.

using CUDAnative
using CuArrays
b=cu(a)
@time f=eigfact(b) #Throws following error message

#=
julia> @time f=eigfact(b)
ERROR: InexactError()
Stacktrace:
[1] trunc(::Type{Int64}, ::Float64) at ./float.jl:672
[2] poolidx(::Int64) at /home/malay/.julia/v0.6/CuArrays/src/memory.jl:31
[3] alloc(::Int64) at /home/malay/.julia/v0.6/CuArrays/src/memory.jl:203
[4] Type at /home/malay/.julia/v0.6/CuArrays/src/array.jl:34 [inlined]
[5] similar at /home/malay/.julia/v0.6/CuArrays/src/array.jl:44 [inlined]
[6] similar at ./abstractarray.jl:524 [inlined]
[7] geevx!(::Char, ::Char, ::Char, ::Char, ::CuArray{Complex{Float32},2}) at ./linalg/lapack.jl:2108
[8] #eigfact!#38(::Bool, ::Bool, ::Function, ::CuArray{Complex{Float32},2}) at ./linalg/eigen.jl:65
[9] (::Base.LinAlg.#kw##eigfact!)(::Array{Any,1}, ::Base.LinAlg.#eigfact!, ::CuArray{Complex{Float32},2}) at ./:0
[10] #eigfact#39(::Bool, ::Bool, ::Function, ::CuArray{Complex{Float16},2}) at ./linalg/eigen.jl:103
[11] eigfact(::CuArray{Complex{Float16},2}) at ./linalg/eigen.jl:102

=#

I even tried a sine function, which errored as well:

CUDAnative.sin(a)

ERROR: MethodError: no method matching sin(::Array{Complex{Float32},2})
You may have intended to import Base.sin
Closest candidates are:
sin(::Float32) at /home/malay/.julia/v0.6/CUDAnative/src/device/libdevice.jl:13
sin(::Float64) at /home/malay/.julia/v0.6/CUDAnative/src/device/libdevice.jl:12
sin(::ForwardDiff.Dual{T,V,N} where N where V<:Real) where T at /home/malay/.julia/v0.6/ForwardDiff/src/dual.jl:166

I am not sure whether my approach is incorrect, my installation is broken, or the functionality is simply not implemented yet. Please advise.
I am on Linux Mint, CUDA 9.1, Julia 0.6.4, GTX 1060.

My goal is to find the eigenvalues of complex matrices, diagonalize them, and then perform further calculations. I was thinking of moving these calculations to the GPU to speed them up. I am not sure whether GPU acceleration is possible for all complex-number operations; I did see another thread here which states that eigendecomposition is not possible.
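
To be concrete, here is a minimal sketch of the kind of CPU workflow I would like to move to the GPU (the matrix and the exp step are just illustrative):

A = Complex64[0 1; -1 0]
f = eigfact(A)                        # eigendecomposition on the CPU
λ, V = f[:values], f[:vectors]
# A ≈ V * Diagonal(λ) * inv(V); the "further calculations" then use the
# eigenbasis, e.g. a matrix function such as exp(A):
g = V * Diagonal(exp.(λ)) * inv(V)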

Further, if someone has a link to good documentation for CuArrays and GPUArrays, that would be much appreciated.
Thanks in advance.

There is documentation for GPUArrays. For both GPUArrays and CuArrays, you should look at their examples. In your case, CUDAnative.sin(a) should be replaced by CuArrays.sin(a).

Finally, the function you are looking for may exist in CUSOLVER, which is wrapped inside CuArrays. Have a look at its tests and sources.
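
For instance, if your matrices happen to be Hermitian, something along these lines might work. The heevd! name and the return convention below are assumptions based on the usual LAPACK-style wrappers, so check the CUSOLVER tests in CuArrays for the actual API:

using CuArrays

A   = Complex64[2 1im; -1im 3]     # small Hermitian test matrix
d_A = CuArray(A)                   # move it to the GPU
# 'V' = also compute eigenvectors, 'U' = use the upper triangle
# (assumed LAPACK-style wrapper; verify the name and return values)
vals, vecs = CuArrays.CUSOLVER.heevd!('V', 'U', d_A)

For general non-Hermitian matrices there is, as far as I know, no geev-style routine in CUSOLVER of this generation, so that case would still have to go through the CPU.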


That’s not accurate. CUDAnative.sin is meant to be used in device code and works on scalars, which is why OP got the MethodError when applying it to an Array. CuArrays.sin does not exist as a separate function; it just refers to Base.sin, which can be called on an Array, but that usage is deprecated. You should be broadcasting sin, and when dealing with a CuArray you have to use CUDAnative.sin (because of Rewrite calls to eg. `Base.sin` · Issue #112 · JuliaGPU/GPUCompiler.jl · GitHub).

julia> CuArrays.sin === Base.sin
true

# not meant to be used on arrays
julia> CUDAnative.sin(a)
ERROR: MethodError: no method matching sin(::CuArray{Float32,1})

# deprecated, and calls the wrong sin function
julia> Base.sin(a)
WARNING: sin(x::AbstractArray{T}) where T <: Number is deprecated, use sin.(x) instead.
ERROR: CUDA error: an illegal memory access was encountered (code #700, ERROR_ILLEGAL_ADDRESS)

# correct invocation
julia> CUDAnative.sin.(a)
1-element CuArray{Float32,1}:
 0.841471
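
For the complex matrices in the original post, note that CUDAnative.sin only has Float32/Float64 methods (as the MethodError’s candidate list shows), so broadcasting it over a complex CuArray will still fail. One option is to broadcast a small helper built from the real intrinsics via sin(x + iy) = sin(x)cosh(y) + i·cos(x)sinh(y). A rough sketch, assuming CUDAnative also exposes cos/sinh/cosh the way it exposes sin, and that complex element types broadcast cleanly in this CuArrays version:

using CUDAnative, CuArrays

# complex sine built from real device intrinsics:
# sin(x + iy) = sin(x)cosh(y) + i*cos(x)sinh(y)
cusin(z) = Complex(CUDAnative.sin(real(z)) * CUDAnative.cosh(imag(z)),
                   CUDAnative.cos(real(z)) * CUDAnative.sinh(imag(z)))

b = CuArray(Complex64[0 1; -1 0])
cusin.(b)                          # compiles and runs as a GPU kernel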

The InexactError from eigfact, on the other hand, is a bug in CuArrays. I don’t have time to look into it, but I’ve filed an issue: InexactError in pool allocator · Issue #101 · JuliaGPU/CuArrays.jl · GitHub
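
Until that’s fixed, a practical workaround is to do the (LAPACK-bound) eigendecomposition on the CPU and move the results back to the GPU for the rest of the computation. A sketch, not benchmarked:

A_h  = Array(b)                    # copy the CuArray back to the host
f    = eigfact(A_h)                # eigendecomposition on the CPU
V_d  = CuArray(f[:vectors])        # move the eigenvectors back to the GPU
λ    = f[:values]                  # small vector; can stay on the host
# ...continue the heavy linear algebra on the GPU with V_d...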

Thanks a lot to both of you for your responses. I appreciate it; it helped me.
I will track the issue on github.