Lasso and Ridge regularization

Hello to all,

How can I include Lasso and Ridge regularization in SciML?

I am using the Lotka-Volterra case as an example.

Original loss

function loss(θ)
    X̂ = predict(θ)
    mean(abs2, Xₙ .- X̂)
end

Are the following correct?

Ridge

function loss(θ)
    X̂ = predict(θ)
    mean(abs2, Xₙ .- X̂) + lambda * sum(abs2, θ)
end

Lasso

function loss(θ)
    X̂ = predict(θ)
    mean(abs2, Xₙ .- X̂) + lambda * sum(abs, θ)
end

Here lambda is an empirically chosen penalty factor.
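As a sanity check, here is a minimal standalone sketch (plain Julia, with a dummy parameter vector and a made-up lambda, not the actual Lotka-Volterra parameters) showing that both penalty terms reduce to scalars, so they can be added to the mean-squared-error term:

```julia
# Hypothetical values purely for illustration; in the Lotka-Volterra
# example θ would be the parameter vector passed to loss(θ).
θ = [1.0, -2.0, 0.5]
lambda = 0.1

# Ridge (L2) penalty: lambda times the sum of squared parameters.
# sum(abs2, θ) is equivalent to sum(θ .* θ) but avoids the temporary array.
ridge_penalty = lambda * sum(abs2, θ)   # 0.1 * (1.0 + 4.0 + 0.25) = 0.525

# Lasso (L1) penalty: lambda times the sum of absolute values.
# Note sum(abs, θ) is needed: lambda * abs.(θ) alone is a vector,
# which cannot be added to the scalar mean-squared error.
lasso_penalty = lambda * sum(abs, θ)    # 0.1 * (1.0 + 2.0 + 0.5) = 0.35
```

Both penalties are scalars, so either can be added directly to `mean(abs2, Xₙ .- X̂)` inside the loss function.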

Best Regards

I don’t know much about SciML, but this notebook does it from scratch, using the optimization package OSQP.