Flux GPU gradient failing

Hi, I have been using Flux for about two months now, and my NN has gotten to the point where GPU usage would really help speed things up. However, I am running into some weird issues:
I get the error
CuArray only supports bits types
when I define
loss(x, y) = Flux.mse(model(x), y) + weight_reg*sum(norm, params(model))
but it runs fine without the regularisation term! (I attached the file with the full error stack trace.)
I can evaluate the loss with regularisation, which is why I think the problem stems from gradient().
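
For reference, a rough sketch of the failing setup (the model, sizes, and weight_reg value here are placeholders, not my actual network):

using Flux, CUDA
using LinearAlgebra: norm

model = Chain(Dense(10, 5, relu), Dense(5, 5)) |> gpu
weight_reg = 0.1
loss(x, y) = Flux.mse(model(x), y) + weight_reg*sum(norm, params(model))

x, y = rand(Float32, 10, 3) |> gpu, rand(Float32, 5, 3) |> gpu
loss(x, y)                                   # evaluating the loss with regularisation works
gradient(() -> loss(x, y), params(model))    # this is where the error comes from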

My environment is:
Flux v0.11.1
Zygote v0.5.4 #master
CUDA v1.2.1

On another note, I ran a couple of epochs without weight regularisation to see if there would be other problems, and training seems slower than I expected. It does appear to be equal to or faster than the CPU, but not by much. Any ideas why?
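
For what it's worth, I was timing it roughly like this (CUDA.@sync makes sure the GPU work actually finishes before the timer stops; the sizes here are made up):

using Flux, CUDA, BenchmarkTools

m_cpu = Chain(Dense(10, 5, relu), Dense(5, 5))
m_gpu = m_cpu |> gpu
x_cpu = rand(Float32, 10, 1024)
x_gpu = x_cpu |> gpu

@btime $m_cpu($x_cpu)             # CPU forward pass
@btime CUDA.@sync $m_gpu($x_gpu)  # GPU forward pass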

julia> train!(loss, ps, data[1:3], opt)
ERROR: CuArray only supports bits types
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] CuArray{AbstractFloat,2}(::UndefInitializer, ::Tuple{Int64,Int64}) at /home/victorialena/.julia/packages/CUDA/7vLVC/src/array.jl:115
 [3] similar(::CuArray{Float64,2}, ::Type{AbstractFloat}, ::Tuple{Int64,Int64}) at /home/victorialena/.julia/packages/CUDA/7vLVC/src/array.jl:147
 [4] similar(::CuArray{Float64,2}, ::Type{AbstractFloat}) at ./abstractarray.jl:625
 [5] copyto_nonleaf!(::CuArray{Float64,2}, ::Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{2},Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(Zygote.drelu),Tuple{Base.Broadcast.Extruded{CuArray{Float32,2},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.Broadcast.Extruded{CuArray{Float64,2},Tuple{Bool,Bool},Tuple{Int64,Int64}}}}, ::CartesianIndices{2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}}, ::CartesianIndex{2}, ::Int64) at ./broadcast.jl:1010
 [6] copy at ./broadcast.jl:858 [inlined]
 [7] materialize at ./broadcast.jl:820 [inlined]
 [8] (::Zygote.var"#1201#1202"{CuArray{Float32,2}})(::CuArray{Float64,2}) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/lib/nnlib.jl:7
 [9] #4004#back at /home/victorialena/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:49 [inlined]
 [10] Dense at /home/victorialena/.julia/packages/Flux/05b38/src/layers/basic.jl:123 [inlined]
 [11] (::typeof(∂(invoke)))(::CuArray{Float64,2}) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface2.jl:0
 [12] Dense at /home/victorialena/.julia/packages/Flux/05b38/src/layers/basic.jl:134 [inlined]
 [13] (::typeof(∂(λ)))(::CuArray{Float64,2}) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface2.jl:0
 [14] applychain at /home/victorialena/.julia/packages/Flux/05b38/src/layers/basic.jl:36 [inlined]
 [15] (::typeof(∂(applychain)))(::CuArray{Float64,2}) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface2.jl:0
 [16] applychain at /home/victorialena/.julia/packages/Flux/05b38/src/layers/basic.jl:36 [inlined]
 [17] (::typeof(∂(applychain)))(::CuArray{Float64,2}) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface2.jl:0
 [18] applychain at /home/victorialena/.julia/packages/Flux/05b38/src/layers/basic.jl:36 [inlined]
 [19] (::typeof(∂(applychain)))(::CuArray{Float64,2}) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface2.jl:0
 [20] applychain at /home/victorialena/.julia/packages/Flux/05b38/src/layers/basic.jl:36 [inlined]
 [21] (::typeof(∂(applychain)))(::CuArray{Float64,2}) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface2.jl:0
 [22] applychain at /home/victorialena/.julia/packages/Flux/05b38/src/layers/basic.jl:36 [inlined]
 [23] (::typeof(∂(applychain)))(::CuArray{Float64,2}) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface2.jl:0
 [24] applychain at /home/victorialena/.julia/packages/Flux/05b38/src/layers/basic.jl:36 [inlined]
 [25] (::typeof(∂(applychain)))(::CuArray{Float64,2}) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface2.jl:0
 [26] applychain at /home/victorialena/.julia/packages/Flux/05b38/src/layers/basic.jl:36 [inlined]
 [27] (::typeof(∂(applychain)))(::CuArray{Float64,2}) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface2.jl:0
 [28] Chain at /home/victorialena/.julia/packages/Flux/05b38/src/layers/basic.jl:38 [inlined]
 [29] (::typeof(∂(λ)))(::CuArray{Float64,2}) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface2.jl:0
 [30] loss at ./REPL[42]:1 [inlined]
 [31] (::typeof(∂(loss)))(::Float64) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface2.jl:0
 [32] #177 at /home/victorialena/.julia/packages/Zygote/i0EHi/src/lib/lib.jl:178 [inlined]
 [33] #1717#back at /home/victorialena/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:49 [inlined]
 [34] #15 at /home/victorialena/.julia/packages/Flux/05b38/src/optimise/train.jl:83 [inlined]
 [35] (::typeof(∂(λ)))(::Float64) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface2.jl:0
 [36] (::Zygote.var"#54#55"{Params,Zygote.Context,typeof(∂(λ))})(::Float64) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface.jl:177
 [37] gradient(::Function, ::Params) at /home/victorialena/.julia/packages/Zygote/i0EHi/src/compiler/interface.jl:54
 [38] macro expansion at /home/victorialena/.julia/packages/Flux/05b38/src/optimise/train.jl:82 [inlined]
 [39] macro expansion at /home/victorialena/.julia/packages/Juno/hEPx8/src/progress.jl:134 [inlined]
 [40] train!(::Function, ::Params, ::Array{Tuple{CuArray{Float32,2},CuArray{Float32,2}},1}, ::ADAM; cb::Flux.Optimise.var"#16#22") at /home/victorialena/.julia/packages/Flux/05b38/src/optimise/train.jl:80
 [41] train!(::Function, ::Params, ::Array{Tuple{CuArray{Float32,2},CuArray{Float32,2}},1}, ::ADAM) at /home/victorialena/.julia/packages/Flux/05b38/src/optimise/train.jl:78
 [42] top-level scope at REPL[43]:100

I have seen this error before. Can you post enough code for a MWE? See Please read: make it easier to help you - #81

The error suggests to me that a CuArray(v) is being constructed where isbits(eltype(v)) is false, but I don’t know where, or whether the message is accurate to a tee.
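
Just as an illustration of what the message means (not necessarily the actual call site), the constructor in frame [2] of your trace errors exactly when the element type is not a bits type:

using CUDA

isbitstype(Float32)        # true  -> fine as a CuArray element type
isbitstype(AbstractFloat)  # false -> not allowed

CuArray{AbstractFloat,2}(undef, 2, 2)   # ERROR: CuArray only supports bits types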

using Flux, CUDA
using LinearAlgebra: norm

m = Chain(Dense(10, 5, relu), Dropout(0.0), Dense(5, 5)) |> gpu
opt = Flux.Optimise.ADAM()
loss(x, y) = Flux.Losses.mse(m(x), y) + 0.1*sum(norm, Flux.params(m))
Flux.train!(loss, Flux.params(m), [(rand(10, 3), rand(5, 3)) |> gpu], opt)

gets me an ERROR: CuArray only supports bits types message.


I suspect it’s a bug worth reporting to Zygote.jl.

This works for me, but I don’t really know why, except that 0.1 .* something seems to get compiled to some index-accessing code, while something ./ 10 is fine. I don’t know why beyond it being a bug in Zygote.jl; something ./ 10.0 also doesn’t work. I am completely confused.

I suggest you set CUDA.allowscalar(false), so any scalar indexing on the GPU throws an error instead of silently slowing things down.

CUDA.allowscalar(false)   # make any scalar GPU indexing an error
p = Flux.params(m)

fn(p1) = sum(abs.(p1))    # hand-rolled regulariser in place of norm

loss(x, y) = begin
    Flux.Losses.mse(m(x), y) + sum(fn, p) / 10
end

xy = (rand(10, 3), rand(5, 3)) |> gpu
a = loss(xy...)
typeof(a)

Flux.train!(loss, p, [(rand(10, 3), rand(5, 3)) |> gpu], opt)
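
For completeness, swapping just the regularisation term in the loss above, this is the pattern I see (same m, p, fn, and data; I can't explain why the literal makes a difference):

loss_bad1(x, y) = Flux.Losses.mse(m(x), y) + 0.1*sum(fn, p)       # errors in the gradient
loss_bad2(x, y) = Flux.Losses.mse(m(x), y) + sum(fn, p) / 10.0    # also errors
loss_ok(x, y)   = Flux.Losses.mse(m(x), y) + sum(fn, p) / 10      # goes through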

Bug report here: https://github.com/FluxML/Zygote.jl/issues/770

Thanks for your help on that. I had to redefine your fn so it actually evaluates to the norm, but I ended up running:

ps = Flux.params(m)
norm(x) = sqrt(sum(abs2, x))   # L2 norm of one parameter array
loss(x, y) = Flux.Losses.mse(m(x), y) + sum(norm, ps) / Int(round(1/weight_reg))
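
With that in place, the gradient goes through on my end (same toy data as in the MWE above, weight_reg = 0.1):

weight_reg = 0.1
x, y = (rand(10, 3), rand(5, 3)) |> gpu
gs = Flux.gradient(() -> loss(x, y), ps)   # no more bits-types error
Flux.train!(loss, ps, [(x, y)], opt)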

It’s strange that norm as defined in the LinearAlgebra package does not differentiate through Zygote here.
Anyway, I hope other people with this issue find our thread. :slight_smile: