Weird error in Flux model

I hate to blindly dump code, but I’m getting a weird error with the snippet below. I’ve searched for similar issues, spent some time debugging, and even asked GPT-4, but I’m stumped.

using CUDA, cuDNN, Flux, ProgressMeter
h = Chain(
    Dense(32 => 64, relu),   # activation function inside layer
    Dense(64 => 32, relu),   # activation function inside layer
    Dense(32 => 32, relu),   # activation function inside layer
    Dense(32 => 64, relu),   # activation function inside layer
    identity)

opt = Flux.setup(Adam(0.001), h)

X = Flux.DataLoader((x=𝒟, y=𝒯); batchsize=128, shuffle=true)
ε = [] # loss
@showprogress for t in 1:1000
    for (x, y) in X
        εₜ, ∇ = Flux.withgradient(h) do m
            # Evaluate model and loss inside gradient context:
            yₜ = m(x)
            Flux.mae(yₜ, y)
        end
        Flux.update!(h, opt, ∇)
        push!(ε, εₜ)  # logging, outside gradient context
    end
end

where x and y are matrices of size n×32. The error is as follows:

┌ Error: Exception while generating log record in module Main at /home/art/Work/NMR/Neural Networks in/NN_2/CPP_ver/WithLogging.jl:7
│   exception =
│    LoadError: MethodError: no method matching (::var"#31#32")(::Chain{NTuple{4, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}}}, ::@NamedTuple{data::Matrix{Float32}, label::Matrix{Float32}})
│    
│    Closest candidates are:
│      (::var"#31#32")(::Any, ::Any, !Matched::Any)
│       @ Main ~/Work/NMR/Neural Networks in/NN_2/CPP_ver/testrun.jl:45
│    
│    Stacktrace:
│      [1] macro expansion
│        @ ~/.julia/packages/Zygote/WOy6z/src/compiler/interface2.jl:0 [inlined]
│      [2] _pullback(::Zygote.Context{false}, ::var"#31#32", ::Chain{NTuple{4, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}}}, ::@NamedTuple{data::Matrix{Float32}, label::Matrix{Float32}})
│        @ Zygote ~/.julia/packages/Zygote/WOy6z/src/compiler/interface2.jl:81
│      [3] _apply(::Function, ::Vararg{Any})
│        @ Core ./boot.jl:838
│      [4] adjoint
│        @ ~/.julia/packages/Zygote/WOy6z/src/lib/lib.jl:203 [inlined]
│      [5] _pullback
│        @ ~/.julia/packages/ZygoteRules/4nXuu/src/adjoint.jl:66 [inlined]
│      [6] #4
│        @ ~/.julia/packages/Flux/UsEXa/src/train.jl:107 [inlined]
│      [7] _pullback(ctx::Zygote.Context{false}, f::Flux.Train.var"#4#5"{var"#31#32", Tuple{@NamedTuple{data::Matrix{Float32}, label::Matrix{Float32}}}}, args::Chain{NTuple{4, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}}})
│        @ Zygote ~/.julia/packages/Zygote/WOy6z/src/compiler/interface2.jl:0
│      [8] pullback(f::Function, cx::Zygote.Context{false}, args::Chain{NTuple{4, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}}})
│        @ Zygote ~/.julia/packages/Zygote/WOy6z/src/compiler/interface.jl:44
│      [9] pullback(f::Function, cx::Zygote.Context{false}, args::Chain{NTuple{4, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}}})
│        @ Zygote ~/.julia/packages/Zygote/WOy6z/src/compiler/interface.jl:42 [inlined]
│     [10] withgradient(f::Function, args::Chain{NTuple{4, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}}})
│        @ Zygote ~/.julia/packages/Zygote/WOy6z/src/compiler/interface.jl:154
│     [11] macro expansion
│        @ ~/.julia/packages/Flux/UsEXa/src/train.jl:107 [inlined]
│     [12] macro expansion
│        @ ~/.julia/packages/ProgressLogging/6KXlp/src/ProgressLogging.jl:328 [inlined]
│     [13] train!(loss::Function, model::Chain{NTuple{4, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}}}, data::MLUtils.DataLoader{@NamedTuple{data::Matrix{Float32}, label::Matrix{Float32}}, Random._GLOBAL_RNG, Val{nothing}}, opt::@NamedTuple{layers::NTuple{4, @NamedTuple{weight::Optimisers.Leaf{Optimisers.Adam, Tuple{Matrix{Float32}, Matrix{Float32}, Tuple{Float32, Float32}}}, bias::Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float32}, Vector{Float32}, Tuple{Float32, Float32}}}, σ::Tuple{}}}}; cb::Nothing)
│        @ Flux.Train ~/.julia/packages/Flux/UsEXa/src/train.jl:105
│     [14] train!(loss::Function, model::Chain{NTuple{4, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}}}, data::MLUtils.DataLoader{@NamedTuple{data::Matrix{Float32}, label::Matrix{Float32}}, Random._GLOBAL_RNG, Val{nothing}}, opt::@NamedTuple{layers::NTuple{4, @NamedTuple{weight::Optimisers.Leaf{Optimisers.Adam, Tuple{Matrix{Float32}, Matrix{Float32}, Tuple{Float32, Float32}}}, bias::Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float32}, Vector{Float32}, Tuple{Float32, Float32}}}, σ::Tuple{}}}})
│        @ Flux.Train ~/.julia/packages/Flux/UsEXa/src/train.jl:102
│     [15] top-level scope

I don’t understand why this throws a MethodError of this type.
Should I be using a different training function for non-classification problems? Everything in the documentation seems to be about image classification of some kind.

Thank you in advance!

All packages were up to date at the time of posting.

Can you paste a reproducible example?


Full MWE:

using CUDA, cuDNN, Flux, ProgressMeter

x = randn(Float32,(32,1000))
y = randn(Float32,(64,1000))

h = Chain(
    Dense(32 => 64, relu),   # activation function inside layer
    Dense(64 => 32, relu),   # activation function inside layer
    Dense(32 => 32, relu),   # activation function inside layer
    Dense(32 => 64),   # no activation on the output layer
    identity)

opt = Flux.setup(Adam(0.001), h)
h = gpu(h)

X = Flux.DataLoader((x=x, y=y); batchsize=128, shuffle=true)
ε = [] # loss
@showprogress for t in 1:1000
    for (x, y) in X
        x_gpu = gpu(x)
        y_gpu = gpu(y)
        εₜ, ∇ = Flux.withgradient(h) do m
            # Evaluate model and loss inside gradient context:
            yₜ = m(x)
            Flux.mae(yₜ, y)
        end
        Flux.update!(h, opt, ∇)
        push!(ε, εₜ)  # logging, outside gradient context
    end
end

Solved: it was a basic argument-order mistake in Flux.update!; it should have been passed (opt, h, ∇[1]).

Very weird error format for this kind of mistake, though.
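For anyone hitting the same thing later, here is roughly what the corrected script looks like. This is only a sketch built from the MWE above: besides the Flux.update! fix, it sets up the optimiser after moving the model to the GPU and actually feeds the GPU batches to the model (the MWE created x_gpu/y_gpu but never used them).

using CUDA, cuDNN, Flux, ProgressMeter

x = randn(Float32, 32, 1000)
y = randn(Float32, 64, 1000)

h = Chain(
    Dense(32 => 64, relu),
    Dense(64 => 32, relu),
    Dense(32 => 32, relu),
    Dense(32 => 64))                 # no activation on the output layer

h = gpu(h)                           # move the model to the GPU first...
opt = Flux.setup(Adam(0.001), h)     # ...so the optimiser state matches the GPU model

X = Flux.DataLoader((x = x, y = y); batchsize = 128, shuffle = true)

ε = Float32[]                        # loss history
@showprogress for t in 1:1000
    for (xb, yb) in X
        xb, yb = gpu(xb), gpu(yb)    # move this batch to the GPU and use it below
        εₜ, ∇ = Flux.withgradient(h) do m
            Flux.mae(m(xb), yb)      # evaluate model and loss inside the gradient context
        end
        Flux.update!(opt, h, ∇[1])   # optimiser state, then model, then its gradient
        push!(ε, εₜ)                 # logging, outside the gradient context
    end
end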
