# Exponents with CuArrays

I’m trying out what I thought were some basic array operations with CuArrays, and have discovered that I can’t do exponents (a.^b) with CuArrays. Here’s a minimal example:

```julia
julia> m = CuArrays.cu([1])
1-element CuArrays.CuArray{Float32,1}:
1.0

julia> m .+ m
1-element CuArrays.CuArray{Float32,1}:
2.0

julia> m .* m
1-element CuArrays.CuArray{Float32,1}:
1.0

julia> m .^ m
Reason: unsupported call through a literal pointer (call to jl_alloc_string)
```

Can anyone help me understand and work around this error? What I’m actually trying to do is more like the following.

```julia
julia> m = [.4 .6]
1×2 Array{Float64,2}:
0.4  0.6

julia> p = [3 0; 2 1; 1 2; 0 3]
4×2 Array{Int64,2}:
3  0
2  1
1  2
0  3

julia> d = [1; 3; 3; 1]
4-element Array{Int64,1}:
1
3
3
1

julia> prod(m.^p, dims=2).*d
4×1 Array{Float64,2}:
0.064
0.288
0.432
0.216
```
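For reference, a direct GPU translation of the computation above (a sketch, assuming `CuArrays` is loaded and a CUDA device is available) runs into the same broadcasted `.^` problem:

```julia
using CuArrays

mg = cu([0.4 0.6])             # 1×2 CuArray{Float32,2}
pg = cu([3 0; 2 1; 1 2; 0 3])  # exponent matrix on the GPU
dg = cu([1; 3; 3; 1])          # scaling vector on the GPU

# The broadcasted .^ here triggers the same "unsupported call through a
# literal pointer" error shown in the minimal example:
prod(mg .^ pg, dims=2) .* dg
```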

Oh interesting, I hadn’t seen that error before: looks like it comes from the string failure message that is constructed in the case of a domain error. I’ve filed an issue: https://github.com/JuliaGPU/CUDAnative.jl/issues/367

Note that exponentiation by a floating-point number is problematic anyhow due to an LLVM issue: https://github.com/JuliaGPU/CUDAnative.jl/issues/113

Constant integer exponentiation works, though:

```julia
julia> m = CuArrays.cu([2])
1-element CuArray{Float32,1}:
2.0

julia> m .^ 2
1-element CuArray{Float32,1}:
4.0
```

As a workaround, which currently applies to many Base mathematical operations: have a look at CUDAnative and check whether it provides a GPU-compatible alternative. In the case of exponentiation, that's `CUDAnative.pow`:

```julia
help?> CUDAnative.pow
No documentation found.

CUDAnative.pow is a Function.

# 4 methods for generic function "pow":
[1] pow(x::Float32, y::Int32) in CUDAnative at /home/tbesard/Julia/CUDAnative/src/device/cuda/libdevice.jl:197
[2] pow(x::Float64, y::Int32) in CUDAnative at /home/tbesard/Julia/CUDAnative/src/device/cuda/libdevice.jl:196
[3] pow(x::Float32, y::Float32) in CUDAnative at /home/tbesard/Julia/CUDAnative/src/device/cuda/libdevice.jl:194
[4] pow(x::Float64, y::Float64) in CUDAnative at /home/tbesard/Julia/CUDAnative/src/device/cuda/libdevice.jl:193

julia> m = CuArrays.cu([2])
1-element CuArray{Float32,1}:
2.0

julia> CUDAnative.pow.(m, m)
1-element CuArray{Float32,1}:
4.0
```
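Putting that together with the earlier example, here is a minimal sketch of the full computation on the GPU. It assumes the integer exponent matrix can be converted to `Float32` (via the `CuArray{Float32}` constructor) so that the `pow(::Float32, ::Float32)` method listed above applies:

```julia
using CuArrays, CUDAnative

mg = cu([0.4 0.6])                           # 1×2 CuArray{Float32,2}
pg = CuArray{Float32}([3 0; 2 1; 1 2; 0 3])  # exponents converted to Float32
dg = cu([1.0; 3.0; 3.0; 1.0])                # scaling vector on the GPU

# Broadcast CUDAnative.pow in place of .^, then reduce as before:
prod(CUDAnative.pow.(mg, pg), dims=2) .* dg
```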

In the future, once we get Cassette working properly, we’ll be dispatching to these compatible methods automatically.
