Flux: train all parameters except those equal to 0

I have a neural network model trained on the MNIST dataset.
If I add an L1 regularization term to the loss function, many parameters end up near 0.
But if I manually set those parameters to exactly 0, the model performs worse, and I have to train it a little more to recover the same performance.
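For reference, the L1 term is just λ times the sum of the absolute values of the parameters. A minimal plain-Julia sketch (the weight vector `w` and strength `λ` here are made-up illustration values, not from the model above):

```julia
# Hypothetical weight vector and regularization strength,
# just to show what the L1 penalty computes.
w = Float32[0.5, -0.25, 0.0]
λ = 0.01f0

# L1 penalty: λ * Σᵢ |wᵢ|  (here λ * 0.75)
l1 = λ * sum(abs, w)
```

Because the penalty's gradient has constant magnitude λ away from zero, it pushes small weights all the way toward 0 rather than merely shrinking them proportionally, which is why L1 produces many near-zero parameters.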

How can I train all parameters except those equal to 0?
Thank you.

You can compute the gradients as usual, then zero out the gradient entries wherever the corresponding parameter is 0, so that `Flux.update!` leaves those entries untouched:

using Flux

p = params(model)
opt = ADAM()
for epoch in 1:epochs
    for (x, y) in data
        gr = gradient(p) do
            loss(x, y)
        end
        # zero the gradient wherever the parameter is exactly zero,
        # so the update step leaves the pruned weights frozen
        for w in p
            gr[w][w .== 0f0] .= 0f0
        end
        Flux.update!(opt, p, gr)
    end
end
This solution is inspired by this one: How to freeze a single weight of a layer? - Usage - JuliaLang
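To see concretely why zeroing the masked gradient entries freezes the pruned weights, here is a self-contained plain-Julia sketch of one gradient-descent step (no Flux needed; the weights and gradients are made-up values):

```julia
# Weight vector with two entries already pruned to zero,
# and a hypothetical gradient for them.
w = Float32[0.5, 0.0, -1.2, 0.0]
g = Float32[0.1, 0.3,  0.2, -0.4]

# Mask the gradient: zero it wherever the weight is exactly zero.
g[w .== 0f0] .= 0f0

# Plain SGD step with learning rate η.
η = 0.1f0
w .-= η .* g

# The pruned entries (indices 2 and 4) stay exactly zero,
# while the others move as usual.
```

The same idea carries over to the Flux loop above: since the masked entries of the gradient are 0, any first-order optimizer update adds nothing to them.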
