Love everyone’s contributions… Hate to say it, but I had a hard time finding something around 7 LOC to submit :D. But I did think of something I did a while back.
```julia
mutable struct βLASSO
    η::Float32  # learning rate
    λ::Float32  # L1 penalty strength
    β::Float32  # threshold multiplier: updates are masked below β*λ
end
βLASSO(η = 0.01f0, λ = 0.009f0, β = 50.0f0) = βLASSO(η, λ, β)

function apply!(o::βLASSO, x, Δ)
    Δ = o.η .* (Δ .+ (o.λ .* sign.(Δ)))
    Δ = Δ .* Float32.(abs.(x.data) .> (o.β * o.λ))  # was `opt.β * opt.λ`; `opt` is undefined here
    return Δ
end
```
(full, sloppy gist here: https://gist.github.com/caseykneale/b21c4c6cf5119c58d4f933baac16136b)
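For anyone skimming, here's the update rule sketched standalone, outside Flux's `apply!` interface (`BetaLASSO`, `step`, and the numbers are mine, not Flux's API): the gradient gets an L1 penalty, then updates for weights whose magnitude is below the `β*λ` threshold are masked to zero.

```julia
struct BetaLASSO
    η::Float32   # learning rate
    λ::Float32   # L1 penalty strength
    β::Float32   # threshold multiplier
end

function step(o::BetaLASSO, x, Δ)
    Δ = o.η .* (Δ .+ o.λ .* sign.(Δ))        # penalized gradient
    Δ .* Float32.(abs.(x) .> (o.β * o.λ))    # hard-threshold mask
end

o = BetaLASSO(0.01f0, 0.009f0, 50.0f0)       # threshold β*λ = 0.45
x = Float32[1.0, 0.1]   # one weight above the threshold, one below
Δ = Float32[0.5, 0.5]
step(o, x, Δ)           # second entry is masked to zero
```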
I was reading the following paper around the time it was first published: https://arxiv.org/pdf/2007.13657.pdf. It describes a projected-gradient-descent method for a β-penalized LASSO regularization. In under 30 minutes I was able to write my own optimizer, implement the majority of the paper, and play with the loss function using Flux.jl. Below is a synthetic example of their loss function correctly selecting a single variable of interest while minimizing the contributions of the others.
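A minimal synthetic sketch of that selection behavior, using my reading of the method (an L1-penalized gradient step followed by hard-pruning of weights whose magnitude falls below β·λ) with plain gradient descent and no Flux; the data, dimensions, and hyperparameters here are made up:

```julia
using Random
Random.seed!(42)

n, p = 500, 10
X = randn(Float32, n, p)
w_true = zeros(Float32, p)
w_true[3] = 2.0f0                       # only one informative variable
y = X * w_true .+ 0.01f0 .* randn(Float32, n)

η, λ, β = 0.05f0, 0.001f0, 50.0f0       # pruning threshold β*λ = 0.05
w = zeros(Float32, p)
for _ in 1:300
    ∇ = (2.0f0 / n) .* (X' * (X * w .- y))   # squared-error gradient
    w .-= η .* (∇ .+ λ .* sign.(w))          # L1-penalized step
    w .*= Float32.(abs.(w) .> (β * λ))       # prune small weights
end
count(!iszero, w), w[3]   # a single surviving coefficient, near 2
```

The noise variables never accumulate enough magnitude to clear the β·λ threshold, so they are pruned back to exactly zero on every iteration, while the informative coefficient survives and converges.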
The tools in the ecosystem and the language itself make it easy to implement cutting-edge work quickly. Plus, Unicode support for readability!