# L1 regularization with box constraint in JuMP?

Does JuMP have a utility for L1-regularized least squares with a box constraint, in the following form?

```
minimize    ‖Ax − b‖₂
subject to  ‖x‖₁ ≤ λ        (λ some parameter)
            0 ≤ x[i] ≤ κ    (κ some parameter)
```
where A is a tall matrix and b is a tall vector.

(If not, and you know of any other package that supports this kind of optimization, please let me know.)
Someone suggested I use the NormOneCone utility, but I am not familiar with the syntax.

You should use `@constraint(model, [λ; x] in MOI.NormOneCone(length(x) + 1))`
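For context, here is a minimal end-to-end sketch of how that constraint could sit inside a full JuMP model. The random `A`, `b` and the values of `λ`, `κ` are illustrative assumptions, and SCS is just one conic solver that accepts these cones:

```
using JuMP, SCS, LinearAlgebra

m, n = 100, 5
A, b = randn(m, n), randn(m)   # made-up tall problem data
λ, κ = 1.0, 0.5                # illustrative parameter values

model = Model(SCS.Optimizer)
set_silent(model)
@variable(model, 0 <= x[1:n] <= κ)                    # box constraint
@variable(model, t)                                   # epigraph variable for ‖Ax − b‖₂
@constraint(model, [t; A * x - b] in SecondOrderCone())
@constraint(model, [λ; x] in MOI.NormOneCone(n + 1))  # enforces ‖x‖₁ ≤ λ
@objective(model, Min, t)
optimize!(model)
```

The `NormOneCone` set is `{(t, y) : t ≥ ‖y‖₁}`, so putting the constant `λ` in the first slot turns membership into the constraint `‖x‖₁ ≤ λ`.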


Why not build a function to do it based on Proximal Gradient Descent / ADMM?

It takes ~10-15 lines and probably will be much faster than JuMP.

See, for instance, Solving LASSO (Basis Pursuit Denoising Form) with LARS and Constrained LASSO Problem - L1 Regularized Least Squares with Linear Equality Constraints (MATLAB code is linked, but it is easy to convert to Julia).
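To make the proximal-gradient suggestion concrete, here is a small self-contained sketch in plain Julia. Note that it solves the *penalized* variant `min ½‖Ax − b‖² + μ‖x‖₁` over the box `0 ≤ x ≤ κ`; the penalty weight `μ`, the step rule, and the iteration count are illustrative assumptions, and enforcing the hard constraint `‖x‖₁ ≤ λ` instead would require an extra projection step:

```
using LinearAlgebra

function prox_grad(A, b, μ, κ; iters = 1000)
    x = zeros(size(A, 2))
    t = 1 / opnorm(A)^2                  # step size 1/L, L = Lipschitz constant of ∇f
    for _ in 1:iters
        g = A' * (A * x - b)             # gradient of the smooth part ½‖Ax − b‖²
        x = clamp.(x .- t .* g .- t * μ, 0, κ)   # prox step: shift by tμ, clamp to box
    end
    return x
end
```

Because the box already forces `x ≥ 0`, the proximal operator of `μ‖x‖₁` plus the box indicator reduces to a shift-and-clamp, which is what keeps the loop this short.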


> It takes ~10-15 lines and probably will be much faster than JuMP

You could also use an ADMM solver specialized for this problem through JuMP. For example, https://github.com/blegat/MCPSD.jl implements an interior-point solver specialized for max-cut (a special case of SDP) that can be used through JuMP.


Thank you everyone for your references; I am going through them.
Meanwhile, I found the following way of approaching this kind of problem using Convex.jl.
Here A is a matrix of size `m x n` and b is a vector of size `m x 1`, with `m >= n`:

```
using Convex, SCS

m = 100; n = 5
x = Convex.Variable(n)
problem = minimize(norm(A * x - b, 2), [norm(x, 1) <= λ, x >= 0, x <= κ])
Convex.solve!(problem, () -> SCS.Optimizer(max_iters = 100000, verbose = false))
```