Is there a way to constrain the weight matrices trained by Flux? I’m interested in using Flux to train neural networks where I’d like to e.g. specify the proportion of negative and positive weights, which would simulate units that are consistently either excitatory or inhibitory. I would also like to explore low-rank weight matrices, i.e. where the matrices are combinations of a few linearly independent components. Is this achievable with Flux?

I think it’s much easier to parameterize than to constrain. For example, let’s say you want to make a `LowRankLinear` layer. Then you would do:

```
julia> struct LowRankLinear{S, T}
           w1::S
           w2::T
       end

julia> function LowRankLinear(m::Integer, n::Integer; rank, init=rand)
           w1 = init(rank, m)   # rank × m factor
           w2 = init(n, rank)   # n × rank factor
           LowRankLinear(w1, w2)
       end
LowRankLinear

julia> (l::LowRankLinear)(x) = l.w2 * l.w1 * x

julia> LowRankLinear(10, 20, rank=4)(rand(10, 32))
20×32 Array{Float64,2}:
...
```
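As a quick sanity check (a sketch using only the standard library's `LinearAlgebra`), the effective weight `w2 * w1` really does have rank at most `rank`:

```julia
using LinearAlgebra

struct LowRankLinear{S, T}
    w1::S
    w2::T
end

LowRankLinear(m::Integer, n::Integer; rank, init=rand) =
    LowRankLinear(init(rank, m), init(n, rank))

(l::LowRankLinear)(x) = l.w2 * l.w1 * x

# To actually train this with Flux you would also register the struct,
# e.g. `Flux.@functor LowRankLinear` (not shown here), so that w1 and w2
# are collected as trainable parameters.
l = LowRankLinear(10, 20, rank=4)
W = l.w2 * l.w1        # 20×10 effective weight matrix
rank(W)                # at most 4: W is a product of a 20×4 and a 4×10 matrix
```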

Note that instead of computing things directly, if your original model is more complex, you could build it on the fly in the forward pass using `l.w2 * l.w1` (i.e., your low-rank weight matrix).

(Unrelated: for “copy-pastability” I used `rand` as the default initializer; Flux generally uses `glorot_uniform`.)
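The same "parameterize rather than constrain" idea works for your excitatory/inhibitory case. A minimal sketch (names and the `frac_excitatory` parameter are illustrative, not Flux API): fix a ±1 sign per presynaptic unit in a non-trained mask, and force the trained magnitudes to be nonnegative by passing them through `abs` (or `softplus`), so each column of the effective weight keeps a consistent sign throughout training:

```julia
struct SignedLinear{M, W}
    mask::M   # fixed ±1 sign per input unit (column); not trained
    v::W      # unconstrained parameters; trained
end

function SignedLinear(m::Integer, n::Integer; frac_excitatory=0.8, init=rand)
    # First frac_excitatory*m input units are excitatory (+1), the rest inhibitory (-1).
    signs = [i <= round(Int, frac_excitatory * m) ? 1.0 : -1.0 for i in 1:m]
    mask = repeat(signs', n, 1)   # n × m, constant sign down each column
    SignedLinear(mask, init(n, m))
end

# Effective weight: fixed sign pattern times a nonnegative magnitude.
(l::SignedLinear)(x) = (l.mask .* abs.(l.v)) * x
```

Since the gradient flows through `v` rather than the effective weight, nothing the optimizer does can flip a unit's sign.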