Scalar indexing error when loading MNIST

The following code in a fresh session produces a scalar indexing error:

using Flux
using MLDatasets
categories = 0:9
xtrain, ytrain = MNIST.traindata(Float32)
xtest,  ytest  = MNIST.testdata(Float32)
ytrain, ytest = Flux.onehotbatch(ytrain, categories) |> gpu, Flux.onehotbatch(ytest, categories) |> gpu

What am I doing wrong, and how could I fix it?

I have found the culprit. Changing the original code to the following results in no error:

using Flux
using MLDatasets
categories = 0:9
xtrain, ytrain = MNIST.traindata(Float32)
xtest,  ytest  = MNIST.testdata(Float32)
ytrain = Flux.onehotbatch(ytrain, categories) |> gpu
ytest = Flux.onehotbatch(ytest, categories) |> gpu

I am not sure whether this change actually makes a difference; it is possible that the scalar indexing only occurs in the combined assignment, but the split version does not trigger the error.
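
One way to check this (a minimal sketch, assuming the same fresh session as above) is to repeat the combined assignment with a trailing semicolon, so that the REPL never tries to display the resulting tuple:

# same combined assignment; the trailing semicolon suppresses REPL display
ytrain, ytest = Flux.onehotbatch(ytrain, categories) |> gpu, Flux.onehotbatch(ytest, categories) |> gpu;
# no error is raised, which suggests the assignment itself succeeds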

The assignment should be succeeding fine; it appears the scalar indexing happens in the REPL when it tries to show the tuple of ytrain, ytest from the first example:

julia> Flux.onehotbatch(1:2, 1:2) |> cpu, Flux.onehotbatch(1:2, 1:2) |> cpu
(Bool[1 0; 0 1], Bool[1 0; 0 1])

julia> Flux.onehotbatch(1:2, 1:2) |> gpu, Flux.onehotbatch(1:2, 1:2) |> gpu
(Bool[Error showing value of type Tuple{Flux.OneHotArray{UInt32, 2, 1, 2, CuArray{UInt32, 1, CUDA.Mem.DeviceBuffer}}, Flux.OneHotArray{UInt32, 2, 1, 2, CuArray{UInt32, 1, CUDA.Mem.DeviceBuffer}}}:
ERROR: Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore are only permitted from the REPL for prototyping purposes.
If you did intend to index this array, annotate the caller with @allowscalar.
...
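
As the message itself suggests, if you do want to inspect a GPU OneHotArray from the REPL while prototyping, you can opt in explicitly with CUDA.jl's @allowscalar (a quick sketch, assuming CUDA.jl is installed alongside Flux):

using Flux, CUDA

y = Flux.onehotbatch(1:2, 1:2) |> gpu
# explicit opt-in: scalar reads are permitted inside the annotated expression
CUDA.@allowscalar y[1, 1]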

I’ve opened https://github.com/FluxML/Flux.jl/issues/1905 to track this.