Flux conv! warning

Hi,

I’m trying to build a simple model which works with the MNIST dataset. It works fine, but I get a warning and was wondering if this is something I can improve.
Here is the model:

Chain(
        Conv((5, 5), 1=>8, pad=2, stride=2, relu),
        Conv((3, 3), 8=>16, pad=1, stride=2, relu),
        Conv((3, 3), 16=>32, pad=1, stride=2, relu),
        Conv((3, 3), 32=>32, pad=1, stride=2, relu),
        GlobalMeanPool(),
        flatten,
        Dense(32, 10),
        softmax
    )

And here is the warning:

Warning: Slow fallback implementation invoked for conv!  You probably don't want this; check your datatypes.
│   yT = Float32
│   T1 = FixedPointNumbers.Normed{UInt8,8}
│   T2 = Float32
└ @ NNlib ~/.julia/packages/NNlib/haems/src/conv.jl:206

As I said, the model works OK even with this warning, but I’m curious what I have done wrong here.

Thanks in advance for any tips.

Not tested, but I suspect you want to convert your input into Float32 before you start, since it looks like the MNIST data is stored as FixedPointNumbers.Normed{UInt8,8}.
You could do that as a preprocessing step when you load it.
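Something like this (untested sketch; x_train here stands for whatever array your MNIST loader returned):

x_train = Float32.(x_train)  # broadcast-convert once, up front, to match the model's Float32 weights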

I see. Makes sense.
But when I try to convert the train data to float with the command

x_test = float(x_test) ./ 256

I get an error:

 MethodError: no method matching ∇meanpool(::Array{Float64,4}, ::Array{Float32,4}, ::Array{Float32,4}, ::PoolDims{2,(2, 2),(2, 2),(0, 0, 0, 0),(1, 1)})

Sorry if this is something naive, but I’m quite new to Julia.

This is a bit of a sharp corner. The initial complaint was about mixing Float32 and Normed{UInt8,8}, which meant you got a slow fallback version of conv. The new complaint is about mixing Float64 and Float32, for which it seems nobody wrote such a fallback version of ∇meanpool at all.

The solution is probably to convert more explicitly:

julia> Float32.(rand(UInt8, 2,2,1,1)) ./ 256 
2×2×1×1 Array{Float32,4}:
...

julia> float(rand(UInt8, 2,2,1,1)) # turns out to give Float64
2×2×1×1 Array{Float64,4}:
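
The reason Float32 is the right target, by the way: Flux initializes its layer weights as Float32 by default, so keeping the input in Float32 keeps everything on the fast, homogeneous path. A quick check you can run (sketch; model is the Chain from the first post):

julia> eltype(first(Flux.params(model)))  # first parameter array of the first Conv layer
Float32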

It worked, thank you!
BTW, it looks like the MNIST dataset has values in the range 0-1 even though it is not float.
So just converting to float is enough. Interesting type (FixedPointNumbers.Normed) :slight_smile:
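
You can see the fixed-point trick directly with the FixedPointNumbers package (sketch): N0f8 is just a UInt8 reinterpreted as raw/255, so converting to Float32 already lands in 0-1 without dividing by 256.

julia> using FixedPointNumbers

julia> Float32.(reinterpret(N0f8, UInt8[0x00, 0x80, 0xff]))  # raw bytes 0, 128, 255
3-element Array{Float32,1}:
 0.0
 0.5019608
 1.0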

And it is also faster this way (as expected).
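
If you want to measure the difference, something like this works (sketch, assuming BenchmarkTools is installed; model is the Chain above, x_f32 and x_raw are hypothetical names for the Float32-converted and original N0f8 arrays):

julia> using BenchmarkTools

julia> @btime model($x_f32);  # fast path: input eltype matches the Float32 weights

julia> @btime model($x_raw);  # mixed eltypes, hits the slow conv fallback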