Type Float32 has no field data

Hi all,

When I ran the following code snippet from JuliaAcademy,

using Printf
train_loss = Float64[]
test_loss = Float64[]
function update_loss!()
    push!(train_loss, L(trainbatch...).data)
    push!(test_loss, L(testbatch...).data)
    @printf("train loss = %.2f, test loss = %.2f\n", train_loss[end], test_loss[end])
end
Flux.train!(L, params(model), Iterators.repeated(trainbatch, 1000), opt; cb = Flux.throttle(update_loss!, 1))

I got the following error:

type Float32 has no field data

Stacktrace:
 [1] getproperty(::Float32, ::Symbol) at ./Base.jl:20
 [2] update_loss!() at ./In[38]:5
 [3] (::Flux.var"#throttled#16#21"{Bool,Bool,typeof(update_loss!),Int64})(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Flux.var"#throttled#20"{Flux.var"#throttled#16#21"{Bool,Bool,typeof(update_loss!),Int64}}) at /home/g2-test/.julia/packages/Flux/2i5P1/src/utils.jl:167
 [4] throttled at /home/g2-test/.julia/packages/Flux/2i5P1/src/utils.jl:163 [inlined]
 [5] macro expansion at /home/g2-test/.julia/packages/Flux/2i5P1/src/optimise/train.jl:72 [inlined]
 [6] macro expansion at /home/g2-test/.julia/packages/Juno/oLB1d/src/progress.jl:134 [inlined]
 [7] #train!#12(::Flux.var"#throttled#20"{Flux.var"#throttled#16#21"{Bool,Bool,typeof(update_loss!),Int64}}, ::typeof(Flux.Optimise.train!), ::Function, ::Zygote.Params, ::Base.Iterators.Take{Base.Iterators.Repeated{Tuple{Array{Float64,2},Flux.OneHotMatrix{Array{Flux.OneHotVector,1}}}}}, ::Descent) at /home/g2-test/.julia/packages/Flux/2i5P1/src/optimise/train.jl:66
 [8] (::Flux.Optimise.var"#kw##train!")(::NamedTuple{(:cb,),Tuple{Flux.var"#throttled#20"{Flux.var"#throttled#16#21"{Bool,Bool,typeof(update_loss!),Int64}}}}, ::typeof(Flux.Optimise.train!), ::Function, ::Zygote.Params, ::Base.Iterators.Take{Base.Iterators.Repeated{Tuple{Array{Float64,2},Flux.OneHotMatrix{Array{Flux.OneHotVector,1}}}}}, ::Descent) at ./none:0
 [9] top-level scope at In[39]:1

Can someone help me solve this?


I haven’t used Flux, but my guess is that L(trainbatch...) and L(testbatch...) are returning a plain Float32, not a structure with a data field. So change:

train_loss = Float64[]
test_loss = Float64[]

push!(train_loss, L(trainbatch...).data)
push!(test_loss, L(testbatch...).data)

to:

train_loss = Float32[]
test_loss = Float32[]

push!(train_loss, L(trainbatch...))
push!(test_loss, L(testbatch...))

Does that work?

I got the same error.

The example is outdated. Try removing .data.

Before version 0.10, Flux used a special array type to track gradients, and you had to go through its data field to get the actual values. Now that Flux no longer uses this special type, the loss is just a plain float, so trying to access that field raises this error.
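For reference, here is a minimal sketch of the updated callback on Flux 0.10 or later, assuming L, trainbatch, testbatch, model, and opt are already defined as in the original notebook; the only real change is dropping .data:

using Flux, Printf

train_loss = Float32[]   # L(...) now returns a plain Float32
test_loss  = Float32[]

function update_loss!()
    # No .data: the loss is already an ordinary number in Flux >= 0.10
    push!(train_loss, L(trainbatch...))
    push!(test_loss, L(testbatch...))
    @printf("train loss = %.2f, test loss = %.2f\n", train_loss[end], test_loss[end])
end

Flux.train!(L, params(model), Iterators.repeated(trainbatch, 1000), opt;
            cb = Flux.throttle(update_loss!, 1))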

@DrChainsaw
Thanks, it works!