I don’t understand the error
The error is "Mutating arrays is not supported", and I couldn't make sense of the previous posts on this topic either.
I'm trying to add a very standard weight-decay regulariser R(model) to my loss, built from
sqnorm(x) = sum(abs2, x)
just as the tutorial webpages define it.
But when I call
gradient(() -> R(model), Flux.params(model))
I get the annoying message above.
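To make this concrete, here is roughly the setup I'm running; a minimal sketch, with the penalty defined as in the Flux regularisation tutorial and the model layer sizes being my own (the data is omitted since the error appears already at this point):

```julia
using Flux

# Penalty as in the Flux regularisation tutorial: sum of squared norms of all parameters
sqnorm(x) = sum(abs2, x)
R(m) = sum(sqnorm, Flux.params(m))

# My architecture (28x28 MNIST inputs -> 600 -> 200 -> 10)
model = Chain(Dense(28^2, 600, relu), Dense(600, 200, relu), Dense(200, 10))

# This is the call that errors for me
gradient(() -> R(model), Flux.params(model))
```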
But it doesn’t end there!
But it doesn't end there! I also created a custom layer as a struct with separate positive and negative weights and biases. Calling
gradient(() -> Flux.Losses.logitcrossentropy(mapslices(model, x_train, dims = 1), y_train), Flux.params(model))
gives the same error. In fact this also happens with a vanilla chain of Dense layers, so while my custom layer may have issues of its own, the problem seems to be more basic.
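In case it helps, here is a rough reproducer for the vanilla case; a sketch only, with random stand-in arrays where I would normally use the MNIST data:

```julia
using Flux

# Plain chain of Dense layers -- no custom layer involved
model = Chain(Dense(28^2, 600, relu), Dense(600, 200, relu), Dense(200, 10))

# Stand-in data in place of MNIST (100 flattened 28x28 "images", 10 classes)
x_train = rand(Float32, 28^2, 100)
y_train = Flux.onehotbatch(rand(0:9, 100), 0:9)

loss() = Flux.Losses.logitcrossentropy(mapslices(model, x_train, dims = 1), y_train)

# This call produces the same mutation error for me
gradient(loss, Flux.params(model))
```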
I would very much appreciate it if somebody could explain what simple, dumb thing I'm doing here. (This probably also affects performance: is it normal that a full-batch loss evaluation on MNIST takes about 5 seconds for a network with layer sizes 28×28, 600, 200, 10?)