Constrain weights and biases to be positive

Is it possible to use a custom constrained optimizer or to modify the gradients to only allow for positive weights and biases?

I want to say this has been discussed before on Discourse, but since I can't find the thread, I'd suggest looking at Feature request: Modifying Dense Layer to accommodate kernel/bias constraints and kernel/bias regularisation · Issue #1389 · FluxML/Flux.jl · GitHub. Note that we are moving away from this "implicit" Params model towards storing both model weights and gradients in proper structures. If you'd like something more future-proof, have a look at Home · Optimisers.jl.
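For the explicit Optimisers.jl route, here is a minimal sketch of projected gradient descent: take an ordinary optimiser step, then clamp every parameter array back onto the non-negative orthant. The model, data, step count and learning rate are made up for illustration, and it assumes a recent Flux with the `Dense(in => out, σ)` constructor.

using Flux, Optimisers

# Projected gradient descent sketch: ordinary optimiser step, then clamp the
# parameters back into the non-negative orthant. Everything here is a toy
# example, not a recommendation of specific hyper-parameters.
function train_nonneg!(model, x, y; steps = 100, η = 0.01f0)
    loss(m, x, y) = Flux.Losses.mse(m(x), y)
    state = Optimisers.setup(Optimisers.Descent(η), model)
    for _ in 1:steps
        grads = Flux.gradient(m -> loss(m, x, y), model)[1]
        state, model = Optimisers.update!(state, model, grads)  # explicit-style step
        for p in Flux.params(model)                             # projection: clamp weights and biases
            p .= max.(p, 0f0)
        end
    end
    return model
end

model = Dense(3 => 2, relu)                        # toy model and data, purely illustrative
x, y = rand(Float32, 3, 16), rand(Float32, 2, 16)
train_nonneg!(model, x, y)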


Thanks. I tried the regularization path but I couldn't make it work; I am looking into the custom optimizer now :slight_smile:
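In case it helps others reading the thread, one hedged sketch of the penalty ("regularisation") idea is below; the penalty function, the λ value, and the toy data are made up, and note that a penalty only discourages negative parameters rather than enforcing the constraint.

using Flux

# Hypothetical soft-constraint version: λ * Σ min(p, 0)^2 pushes negative
# weights and biases towards zero but does not guarantee positivity.
negpenalty(a::Dense) = sum(abs2, min.(a.weight, 0f0)) + sum(abs2, min.(a.bias, 0f0))

m = Dense(3, 2, relu)                              # toy layer, just for illustration
x, y = rand(Float32, 3, 5), rand(Float32, 2, 5)
λ = 0.1f0
loss(m, x, y) = Flux.Losses.mse(m(x), y) + λ * negpenalty(m)

grads = Flux.gradient(m -> loss(m, x, y), m)[1]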


An alternative approach is to introduce an activation function for the weights. That way the weights "as applied" are non-negative, but they possess a real-valued, learnable "latent state".

using Flux

m = Dense(3, 2, relu)     # ordinary Dense layer; its weights and bias are unconstrained
x = rand(Float32, 3, 5)

# extra method: pass the parameters through g before using them, so the
# parameters "as applied" are g(latent parameter)
(a::Dense)(x::AbstractVecOrMat, g) = a.σ.(g.(a.weight)*x .+ g.(a.bias))

y1 = m(x)       # normal forward pass
y2 = m(x, relu) # forward pass with non-negative weights and bias
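A quick note on training with this trick: gradients reach the real-valued latent weights through g, so any ordinary optimiser can be used unchanged. A hypothetical loss/gradient call (the targets y are made up) might look like this:

y = rand(Float32, 2, 5)   # made-up targets for illustration
# the constrained forward pass m(x, relu) is used inside the loss, so Zygote
# back-propagates through relu into the latent (unconstrained) weights
grads = Flux.gradient(m -> Flux.Losses.mse(m(x, relu), y), m)[1]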

Oooh, that is a good idea. Now, maybe a stupid question…

I don't see any problem using normal gradient descent, but do you think it would create problems with a method that requires a Hessian (stochastic L-BFGS) for training, since that depends on information from previous gradients? (That was the main reason I was looking for a constrained optimizer. :slight_smile:)