# Problem defining a Neural Network with gradient as Loss function

Hi, I’m trying to set up a simple network whose loss function is the Laplacian of the NN output with respect to its input. The loss function is:

```julia
function 𝕃1(x)
    f(j) = NNsolver(j)                           # output of the NN
    ∇²(n) = Zygote.diaghessian(n -> f(n)[1], n)  # diagonal of the Hessian (1D, so a single entry)
    ∇²(x)[1][1]                                  # return a scalar
end
```
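
For context, since the snippets here don’t show it, `NNsolver` is just a small Flux model. This is roughly how the rest of the setup looks; the architecture, optimiser, and training points below are only placeholders:

```julia
using Flux, Zygote
using Flux: params
using Flux.Optimise: Descent, update!

# Placeholder model: 1 input -> 1 output; the exact layers are not important here
NNsolver = Chain(Dense(1 => 16, tanh), Dense(16 => 1))

opt = Descent(0.01)                 # optimiser used by trainer() below
Rx  = range(0.0, 1.0; length = 20)  # 1D training points

𝕃1([0.5])                           # evaluating the loss at a single point
```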

The training process is also pretty straightforward:

```julia
function trainer()
    for point in Rx
        p = [point]
        gs = gradient(params(NNsolver)) do  # NN model
            𝕃1(p)
        end
        update!(opt, params(NNsolver), gs)
    end
end
```
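
In case it helps, the body of that loop for a single point boils down to one gradient call followed by an optimiser step (same placeholder names as in the sketch above):

```julia
p  = [first(Rx)]                    # wrap the scalar training point in a 1-element vector
gs = gradient(params(NNsolver)) do  # gradient of the loss w.r.t. the model parameters
    𝕃1(p)
end
update!(opt, params(NNsolver), gs)  # apply one optimiser step
```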

However, the function call produces the following error, which I don’t understand since I am not reshaping any arrays myself, for example:

```
Mutating arrays is not supported -- called setindex!(::Vector{Float64}, _...)
```

I would really appreciate any help, and thanks in advance!

Without an MWE that includes enough of `NNsolver` to trigger the error, we won’t have enough information to help you troubleshoot this. A full stack trace would be very much appreciated as well.
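
For example, something along these lines (assuming `trainer()` is the call that fails) would capture the full backtrace so it can be pasted here:

```julia
try
    trainer()
catch err
    # print the error together with the full stack trace
    showerror(stdout, err, catch_backtrace())
end
```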