Error in Fitting a Straight Line Tutorial

I apologize if this has been covered before. I did some searching but didn’t find another mention of it.

I’ve been trying to work through the Fitting a Straight Line tutorial on the fluxml.ai site and keep running into an error.

When I get to the part that calls for the line:

train!(loss, predict, data, opt)

I get the following error:

ERROR: Mutating arrays is not supported -- called copyto!(Matrix{Float32}, ...)
This error occurs when you ask Zygote to differentiate operations that change
the elements of arrays in place (e.g. setting values with x .= …)

Possible fixes:

  • avoid mutating operations (preferred)
  • or read the documentation and solutions for this error
    Limitations · Zygote

Stacktrace:
[1] error(s::String)
@ Base ./error.jl:35
[2] _throw_mutation_error(f::Function, args::Matrix{Float32})
@ Zygote ~/.julia/packages/Zygote/g2w9o/src/lib/array.jl:86
[3] (::Zygote.var"#395#396"{Matrix{Float32}})(#unused#::Matrix{Float32})
@ Zygote ~/.julia/packages/Zygote/g2w9o/src/lib/array.jl:101
[4] (::Zygote.var"#2498#back#397"{Zygote.var"#395#396"{Matrix{Float32}}})(Δ::Matrix{Float32})
@ Zygote ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67
[5] Pullback
@ ./broadcast.jl:871 [inlined]
[6] Pullback
@ ./broadcast.jl:868 [inlined]
[7] Pullback
@ ./broadcast.jl:864 [inlined]
[8] Pullback
@ ~/projects/julia/LearnFlux/src/LearnFlux.jl:17 [inlined]
[9] (::typeof(∂(loss)))(Δ::Float32)
@ Zygote ~/.julia/packages/Zygote/g2w9o/src/compiler/interface2.jl:0
[10] (::Zygote.var"#208#209"{Tuple{Tuple{Nothing}, Tuple{Nothing, Nothing}}, typeof(∂(loss))})(Δ::Float32)
@ Zygote ~/.julia/packages/Zygote/g2w9o/src/lib/lib.jl:206
[11] (::Zygote.var"#2066#back#210"{Zygote.var"#208#209"{Tuple{Tuple{Nothing}, Tuple{Nothing, Nothing}}, typeof(∂(loss))}})(Δ::Float32)
@ Zygote ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67
[12] Pullback
@ ~/.julia/packages/Flux/OxB4x/src/train.jl:107 [inlined]
[13] (::typeof(∂(λ)))(Δ::Float32)
@ Zygote ~/.julia/packages/Zygote/g2w9o/src/compiler/interface2.jl:0
[14] (::Zygote.var"#60#61"{typeof(∂(λ))})(Δ::Float32)
@ Zygote ~/.julia/packages/Zygote/g2w9o/src/compiler/interface.jl:45
[15] withgradient(f::Function, args::Flux.Dense{typeof(identity), Matrix{Float32}, Vector{Float32}})
@ Zygote ~/.julia/packages/Zygote/g2w9o/src/compiler/interface.jl:133
[16] macro expansion
@ ~/.julia/packages/Flux/OxB4x/src/train.jl:107 [inlined]
[17] macro expansion
@ ~/.julia/packages/ProgressLogging/6KXlp/src/ProgressLogging.jl:328 [inlined]
[18] train!(loss::Function, model::Flux.Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, data::Vector{Tuple{Matrix{Int64}, Matrix{Int64}}}, opt::NamedTuple{(:weight, :bias, :σ), Tuple{Optimisers.Leaf{Optimisers.Descent{Float64}, Nothing}, Optimisers.Leaf{Optimisers.Descent{Float64}, Nothing}, Tuple{}}}; cb::Nothing)
@ Flux.Train ~/.julia/packages/Flux/OxB4x/src/train.jl:105
[19] train!(loss::Function, model::Flux.Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, data::Vector{Tuple{Matrix{Int64}, Matrix{Int64}}}, rule::Optimisers.Descent{Float64}; cb::Nothing)
@ Flux.Train ~/.julia/packages/Flux/OxB4x/src/train.jl:118
[20] train!(loss::Function, model::Flux.Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, data::Vector{Tuple{Matrix{Int64}, Matrix{Int64}}}, opt::Flux.Optimise.Descent; cb::Nothing)
@ Flux ~/.julia/packages/Flux/OxB4x/src/deprecations.jl:124
[21] train!(loss::Function, model::Flux.Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, data::Vector{Tuple{Matrix{Int64}, Matrix{Int64}}}, opt::Flux.Optimise.Descent)
@ Flux ~/.julia/packages/Flux/OxB4x/src/deprecations.jl:124
[22] top-level scope
@ REPL[11]:1

Is this just a case where the tutorial hasn’t been updated to reflect some new change?

The first time through I typed the code in by hand, but just to be sure I copied and pasted it on the second attempt and got the same result.

Here is enough code to get to the point where the error occurs:

using Flux
using Flux: train!
using Statistics

actual(x) = 4x + 2
x_train, x_test = hcat(0:5...), hcat(6:10...)
y_train, y_test = actual.(x_train), actual.(x_test)

model = Dense(1 => 1)

loss(model, x, y) = mean(abs2.(model(x) .= y))
predict = Dense(1 => 1)
data = [(x_train, y_train)]
opt = Descent()

train!(loss, predict, data, opt)

Any insight would be much appreciated.

Well that’s embarrassing. :smiling_face_with_tear:

Upon further examination I found that I had a typo, and that I somehow hadn’t copied and pasted that particular line when I re-did the code I’d originally typed by hand.

The line

loss(model, x, y) = mean(abs2.(model(x) .= y))

should have read

loss(model, x, y) = mean(abs2.(model(x) .- y))

Then it works just fine. Decades in, and I can still get caught by a punctuation error once in a while. :sweat_smile:
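For anyone else who trips over this: `.-` and `.=` look almost identical but do very different things. `.-` is elementwise subtraction and allocates a new array, while `.=` is in-place broadcast assignment, which mutates its left-hand side. That mutation is exactly what Zygote refuses to differentiate, hence the error. A minimal sketch on plain arrays (no Flux or Zygote needed) shows the difference:

```julia
a = [1.0, 2.0, 3.0]
b = [0.5, 0.5, 0.5]

# Elementwise subtraction: allocates and returns a NEW array, a is untouched.
c = a .- b
println(c)    # [0.5, 1.5, 2.5]

# In-place broadcast assignment: overwrites the elements of a with those of b.
# Inside a loss function this mutates the model output, which Zygote rejects.
a .= b
println(a)    # [0.5, 0.5, 0.5]
```

So in the broken loss, `model(x) .= y` was silently overwriting the model’s predictions with the targets instead of subtracting them, and Zygote raised the mutation error before the bogus loss could even be computed.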
