SVMs are awesome, I don’t care what anyone says :).
I’ve never seen LassoELM, but in reservoir computing it’s very common (say, in echo state networks) to introduce sparsity in the weights. In general, sparsity is good anyway :). LS-SVM is basically a shortcut to SVMs, and it usually offers less sparsity than true SVMs do. That said, fitting an LS-SVM only involves solving a linear system, so it’s much faster. Also, undergraduates can write the code themselves! Maybe not understand the theory, but hey, maybe nowadays they could.
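To show what I mean by “it’s just a linear solve”: here’s a minimal LS-SVM sketch in Python/NumPy (function names and the RBF kernel choice are mine, just for illustration). Training is nothing more than solving the standard bordered system [[0, 1ᵀ], [1, K + I/C]] [b; α] = [0; y].

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=10.0, gamma=1.0):
    # LS-SVM training reduces to one linear solve:
    #   [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma=1.0):
    # f(x) = sum_i alpha_i * K(x, x_i) + b
    return rbf_kernel(X_new, X_train, gamma) @ alpha + b
```

Note the trade-off mentioned above: every training point gets a nonzero alpha here, which is exactly why LS-SVMs are less sparse than true SVMs.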
I wrote a projected gradient descent LASSO in a gist somewhere… But I think there are probably dozens of LASSO implementations in the Julia ecosystem nowadays :). Here’s a nice demo of the classic proximal way of doing LASSO: https://github.com/kul-forbes/ProximalOperators.jl/blob/master/demos/lasso.jl
Anyways, yeah, let us know if you run into any trouble.