Flux: train all parameters except those equal to 0

I have a neural network model trained on the MNIST dataset.
If I add an L1 regularization term to the loss function, I get many parameters near 0.
But if I manually set those parameters to 0, the model performs worse, and I have to train it a little more to recover the same performance.
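
To be concrete, here is a minimal sketch of the setup I mean (the model architecture, λ, and the 1f-3 pruning threshold are just example values):

    using Flux

    model = Chain(Dense(784, 32, relu), Dense(32, 10), softmax)
    ps = Flux.params(model)

    λ = 1f-4                                   # example regularization strength
    l1(ps) = sum(p -> sum(abs, p), ps)         # Σ |w| over all parameters
    loss(x, y) = Flux.crossentropy(model(x), y) + λ * l1(ps)

    # Prune: set every parameter with magnitude below the threshold exactly to 0.
    for p in ps
        p[abs.(p) .< 1f-3] .= 0f0
    end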

How can I train all parameters except those equal to 0?
Thank you.

This works, but maybe there is a better way to do it:

opt = ADAM()
ps = Flux.params(model)
for epoch in 1:EPOCHS
    for (x, y) in data
        ∇ = gradient(() -> loss(x, y), ps)  # gradient with respect to every parameter
        for p in ps
            # Logical indexing (p[p .≠ 0f0]) makes a copy, so updating it would
            # not touch the model; instead, zero the gradient where p is 0 and
            # update the array in place, so only the non-zero entries move.
            ∇[p][p .== 0f0] .= 0f0
            Flux.update!(opt, p, ∇[p])
        end
    end
end
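
An alternative sketch of the same idea (assuming the set of zeroed parameters is fixed after pruning): precompute the masks once, let the optimizer update everything, then force the pruned entries back to 0. This also guards against ADAM's momentum slowly moving the zeroed weights.

    masks = [p .== 0f0 for p in ps]   # fixed masks of the pruned entries
    for epoch in 1:EPOCHS
        for (x, y) in data
            ∇ = gradient(() -> loss(x, y), ps)
            Flux.update!(opt, ps, ∇)          # ordinary update of all parameters
            for (p, m) in zip(ps, masks)
                p[m] .= 0f0                   # force pruned entries back to 0
            end
        end
    end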