I’m trying to run some Julia code on my new NVIDIA Jetson Nano, and I ran into this error:

```
[ Info: Building the CUDAnative run-time library for your sm_53 device, this might take a while...
ERROR: LoadError: MethodError: no method matching pow(::Complex{Float32}, ::Int64)
Closest candidates are:
  pow(::Union{Float32, Float64}, ::Int64) at /home/doug/.julia/packages/CUDAnative/hfulr/src/device/cuda/math.jl:209
```

Indeed, when I checked math.jl, not all of its functions support complex numbers. The function I am broadcasting over a CuArray of Complex{Float32} values is:

```julia
function f(z, A, N, M)
    z̅ = conj(z)
    sum(A .* (z .^ N .* z̅ .^ M + z .^ M .* z̅ .^ N))
end
```

where N and M contain integers (Int64).

Am I missing something, or is this capability actually missing and (hopefully) on a development roadmap?

Thanks all!
-doug

Many Base math functions are GPU-incompatible because they call into the CPU math library, which is why we prefer NVIDIA’s math library instead. The functions from that library, however, support only a limited set of types, as you can see here. For now, you can just add (or open a PR with) the necessary methods that reuse available functionality, as in https://github.com/JuliaGPU/CUDAnative.jl/pull/466, often copying similar functionality from the Julia standard library. In the future, we hope to reuse those Base implementations by means of contextual dispatch.
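For the `pow(::Complex{Float32}, ::Int64)` case above, such a method could reuse Base’s generic integer-power machinery, the same way Base itself defines `z^n` for complex numbers. A minimal sketch of the idea (`mypow` is an illustrative name, not an actual CUDAnative API; the device-side wiring is omitted):

```julia
# Sketch: delegate complex integer powers to Base's generic
# power_by_squaring, which needs only *, one, and inv -- all of
# which already work for Complex{Float32} on the device.
# `mypow` is a placeholder name for illustration.
mypow(z::Complex, n::Integer) =
    n >= 0 ? Base.power_by_squaring(z, n) :
             Base.power_by_squaring(inv(z), -n)
```

This mirrors how Base dispatches `^(z::Complex, n::Integer)`, so results agree bit-for-bit with `z^n` on the CPU.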


Thank you for the info. This is very new to me, this multiple-layer CUDA cake. In looking at my problem of pow(z, n), the Base method that performs this is `power_by_squaring`, which builds the result by repeatedly squaring z and multiplying the factors together. I copied over the code from Base and slightly tweaked it. It’s probably not inlinable, but it works! I’m sorta amazed, actually. I’ll submit a PR so you can have a look.
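For anyone curious what the copied algorithm does: power-by-squaring computes z^n in O(log n) multiplications by squaring the base and halving the exponent, rather than multiplying n times. A standalone sketch of the idea (simplified, not the exact Base code):

```julia
# Simplified power-by-squaring (illustrative, not the exact Base
# implementation): square the base and halve the exponent each step,
# multiplying into the accumulator whenever the low bit of n is set.
function pow_by_squaring(z, n::Integer)
    n < 0 && return pow_by_squaring(inv(z), -n)
    acc = one(z)
    while n > 0
        isodd(n) && (acc *= z)
        z *= z
        n >>= 1
    end
    return acc
end
```

Because it only needs `*`, `one`, and `inv`, it works for any type with those operations, including Complex{Float32} in device code.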

But now I have another problem, with GPU compilation failing because of non-isbit types. I’ll post a new question about that.
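(For anyone hitting the same thing: `isbitstype` is a quick way to check whether an element type is plain bits and therefore GPU-friendly.)

```julia
# Check whether a type is isbits, i.e. stored inline with no
# heap references -- a requirement for element types in GPU kernels.
isbitstype(Complex{Float32})  # true: plain bits, fine on the GPU
isbitstype(Vector{Float32})   # false: heap-allocated, not isbits
```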

Thanks again!