Conv_mnist model-zoo example

I loaded the MNIST test data from Kaggle and wanted to test my pretrained model from conv_mnist. How do I use the pretrained model to predict an image?
It says:

LeNet5 “constructor”.

The model can be adapted to any image size and any number of output classes.
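
For reference, the constructor in the model-zoo conv_mnist script looks roughly like the sketch below (reproduced from memory, so keyword names and details may differ between zoo versions); imgsize and nclasses are the knobs the docstring refers to:

using Flux

function LeNet5(; imgsize = (28, 28, 1), nclasses = 10)
    # After two 5×5 convolutions and two 2×2 max-pools, a 28×28 input shrinks to 4×4×16.
    out_conv_size = (imgsize[1] ÷ 4 - 3, imgsize[2] ÷ 4 - 3, 16)
    return Chain(
        Conv((5, 5), imgsize[end] => 6, relu),
        MaxPool((2, 2)),
        Conv((5, 5), 6 => 16, relu),
        MaxPool((2, 2)),
        Flux.flatten,
        Dense(prod(out_conv_size), 120, relu),
        Dense(120, 84, relu),
        Dense(84, nclasses),
    )
end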

But when I run the following, it fails:

img = test[1011,:] |> Array
img = vec(img)
img = img/255
img = reshape(img,28,28)
img = transpose(img)
Gray.(img)  # shows an image of a 2

@load "runs/mnist.bson" model
y_pred = model(img)
r = findmax(y_pred) .- (0,1)
r[2]
ERROR: DimensionMismatch("Rank of x and w must match! (2 vs. 4)")

I think the model expects batches of single-channel images; try changing your reshape from reshape(img, 28, 28) to reshape(img, 28, 28, 1, 1).
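
Here is a minimal sketch of the layout Flux's conv layers expect, namely 4-D arrays of (width, height, channels, batch), using random data instead of the Kaggle rows:

using Flux

layer = Conv((5, 5), 1 => 6, relu)

x2d = rand(Float32, 28, 28)        # rank-2 input: triggers "Rank of x and w must match! (2 vs. 4)"
x4d = rand(Float32, 28, 28, 1, 1)  # rank-4 input: one 28×28 single-channel image, batch size one

size(layer(x4d))                   # (24, 24, 6, 1)
# layer(x2d)                       # would raise the DimensionMismatch above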

Now I get this error:

MethodError: no method matching -(::CartesianIndex{2}, ::Int64)
Closest candidates are:
-(::T, ::T) where T<:Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8} at int.jl:86
-(::ForwardDiff.Dual{Tx, V, N} where {V, N}, ::Integer) where Tx at C:\Users\andre\.julia\packages\ForwardDiff\5gUap\src\dual.jl:139
-(::ForwardDiff.Dual{Tx, V, N} where {V, N}, ::Real) where Tx at C:\Users\andre\.julia\packages\ForwardDiff\5gUap\src\dual.jl:139

Stacktrace:
[1] _broadcast_getindex_evalf
@ .\broadcast.jl:648 [inlined]
[2] _broadcast_getindex
@ .\broadcast.jl:621 [inlined]
[3] (::Base.Broadcast.var"#19#20"{Base.Broadcast.Broadcasted{Base.Broadcast.Style{Tuple}, Nothing, typeof(-), Tuple{Tuple{Float32, CartesianIndex{2}}, Tuple{Int64, Int64}}}})(k::Int64)
@ Base.Broadcast .\broadcast.jl:1098
[4] ntuple
@ .\ntuple.jl:49 [inlined]
[5] copy
@ .\broadcast.jl:1098 [inlined]
[6] materialize(bc::Base.Broadcast.Broadcasted{Base.Broadcast.Style{Tuple}, Nothing, typeof(-), Tuple{Tuple{Float32, CartesianIndex{2}}, Tuple{Int64, Int64}}})
@ Base.Broadcast .\broadcast.jl:883
[7] top-level scope
@ In[17]:11
[8] eval
@ .\boot.jl:360 [inlined]
[9] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
@ Base .\loading.jl:1116
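
For what it's worth, this second error comes from findmax(y_pred) .- (0, 1) rather than from the model: on a 10×1 output matrix, findmax returns a (value, CartesianIndex) tuple, and an Int cannot be subtracted from a CartesianIndex. A rough sketch of one way to get the 0-9 digit, assuming y_pred is the 10×1 model output:

val, idx = findmax(y_pred)   # idx is a CartesianIndex(row, col)
digit = idx[1] - 1           # the row is the 1-based class index, so subtract 1 to get the digit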

This works:

img = test[5011, :] |> Array
img = vec(img)
img = img / 255
img = reshape(img, 28, 28)
img = transpose(img)
img = reshape(img, 28, 28, 1, :)
y_pred = model(img)
print(findmax(y_pred))

I thought the Kaggle MNIST images were single channel?

That’s true, but the model still expects a channel dimension and a batch dimension, so you need to add those length-one axes.
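
The same idea extends to classifying several test rows in one call; a rough sketch, assuming test is the Kaggle table of 784-pixel rows and model is the loaded network:

N = 16
batch = Array{Float32}(undef, 28, 28, 1, N)
for i in 1:N
    row = Float32.(Array(test[i, :])) ./ 255
    # Kaggle stores the pixels row-major, so build a 28×28 image and transpose it to match the training orientation.
    batch[:, :, 1, i] = permutedims(reshape(row, 28, 28))
end
y_pred = model(batch)   # 10×N matrix of scores, one column per image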

Please see the “Please read: make it easier to help you” post. In particular, we would need a complete MWE with proper code block formatting on Discourse (you can edit your posts; there is no need to create new ones).

More specifically, is there a reason you’re trying to use the version of MNIST hosted on Kaggle? MLDatasets.jl (which Flux uses for all of its demos) already has MNIST, and getting the data as an array is as simple as calling MLDatasets.MNIST.traintensor (https://juliaml.github.io/MLDatasets.jl/stable/datasets/MNIST/#MLDatasets.MNIST.traintensor).
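
Roughly, that looks like the lines below (function names as in the linked docs; newer MLDatasets releases expose the same data through MNIST(split=:train) instead):

using MLDatasets

xtrain = MLDatasets.MNIST.traintensor(Float32)   # 28×28×60000, pixel values already scaled to [0, 1]
ytrain = MLDatasets.MNIST.trainlabels()          # 60000 labels, 0–9

xtest = MLDatasets.MNIST.testtensor(Float32)     # 28×28×10000
ytest = MLDatasets.MNIST.testlabels()

xtest = reshape(xtest, 28, 28, 1, :)             # add the channel axis the conv layers expect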

Ah okay, I want to use FluxML for a Kaggle competition. For training I’m using the model with data from MLDatasets; just for testing purposes I wanted to evaluate the Kaggle dataset:

img = test[1, :] |> Array
img = img/255 # 0 to 1
img = reshape(img,28,28)
img = transpose(img)
Gray.(img)  # shows an image of a 2
@load "runs/mnist.bson" model
img = test[1,:] |> Array
# img = vec(img)
img = img/255

# img = reshape(img, 28, 28)
img = transpose(img)
Gray.(img)

img = reshape(img,28,28,1,:)
y_pred = model(img)
print(y_pred)
# r = findmax(y_pred) .- (0,1)
# r[2]
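
As a side note, Flux.onecold with an explicit label range does the index-to-digit conversion the commented-out findmax(y_pred) .- (0,1) line was aiming for; a small sketch, assuming y_pred is the 10×1 (or 10×N) model output above:

preds = Flux.onecold(y_pred, 0:9)   # picks the largest entry in each column and maps it to 0–9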

Seems like you already figured out how to do that?

Yes, thanks for the help!