Not sure if such a constraint exists out of the box, but you can always enforce it yourself. Here I keep the parameters as an unconstrained real vector ps and apply exp.(ps) before restructuring them into the model, so all the weights end up strictly positive.
using Flux

x, y = randn(2, 100), randn(1, 100)                    # dummy data: 2 features, 1 target, 100 samples
model_shape = Chain(Dense(2, 10, tanh), Dense(10, 1))

# destructure returns the flat parameter vector ps and a function g that rebuilds the model from it
ps, g = Flux.destructure(model_shape)
get_model(ps) = g(exp.(ps))                             # exp. makes sure every weight is positive
loss(x, y, model) = sum(abs2, model(x) .- y)

# differentiate w.r.t. the unconstrained vector ps and take one ADAM step on it
gs = Flux.gradient(ps -> loss(x, y, get_model(ps)), ps)
opt = ADAM()
Flux.update!(opt, ps, gs[1])
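
That is one gradient step; you can of course repeat it, and whenever you need the constrained model (e.g. for evaluation) you rebuild it through get_model. A rough sketch of that, where the loop count and the Flux.params sanity check are just illustrative additions of mine:

for _ in 1:100                                          # a few more steps, arbitrary count
    gs = Flux.gradient(ps -> loss(x, y, get_model(ps)), ps)
    Flux.update!(opt, ps, gs[1])
end

trained_model = get_model(ps)                           # weights are exp.(ps), hence positive
@assert all(p -> all(>(0), p), Flux.params(trained_model))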