How to express an objective with L1-norm in JuMP.jl?

I learned about the NormOneCone, which can be used to add L1-norm constraints to a model. How do I minimize the L1 norm of a vector of residuals? I tried @objective(model, Min, sum(abs.(x))), but it is not recognized by JuMP.jl.


I’m literally adding this to the documentation right now: [docs] add more tips and tricks for linear programs by odow · Pull Request #3144 · jump-dev/JuMP.jl · GitHub :smile:

# Epigraph trick: t is an upper bound on ||x||_1; minimizing t makes the bound tight.
@variable(model, t)
@constraint(model, [t; x] in MOI.NormOneCone(1 + length(x)))
@objective(model, Min, t)
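
For a vector of residuals, a minimal self-contained sketch could look like the following; the data, dimensions, and the choice of HiGHS as the solver are illustrative assumptions, and JuMP's bridges reformulate the cone into linear constraints for an LP solver:

using JuMP, HiGHS  # solver choice is illustrative; any LP solver works via bridges

A, b = rand(10, 3), rand(10)  # toy data, for illustration only
model = Model(HiGHS.Optimizer)
@variable(model, x[1:3])
@variable(model, t)
@constraint(model, [t; A * x - b] in MOI.NormOneCone(1 + length(b)))
@objective(model, Min, t)  # minimizes ||A * x - b||_1
optimize!(model)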

I recall JuMP understanding this automatically a long time ago; is there a reason it doesn’t nowadays? It feels like a mechanical transformation that a mathematical modeling language could do for the user.


JuMP used to recognize norm(x) as a second-order cone, but we don’t do that anymore. I don’t think we ever supported abs.

Recognizing and reformulating arbitrary nonlinear expressions is a tricky business. Currently, our main problem is that we don’t have a data structure to store sum(abs.(x)), but there is work in progress: WIP: [Nonlinear] begin experiments with NonlinearExpr by odow · Pull Request #3106 · jump-dev/JuMP.jl · GitHub.
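
For context, the “mechanical transformation” in question is the standard linear reformulation you would write by hand, introducing one slack variable per component; a sketch, assuming model and x exist as in the earlier snippet:

# min sum(abs.(x))  =>  introduce s[i] >= |x[i]| and minimize sum(s)
@variable(model, s[1:length(x)])
@constraint(model, s .>= x)   # s[i] >=  x[i]
@constraint(model, s .>= -x)  # s[i] >= -x[i]
@objective(model, Min, sum(s))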


@odow can you please confirm the syntax for minimizing f(x) + \lambda ||x||_1? The snippet of code you shared above does not treat the L1-norm as a penalty term with weight \lambda > 0.

Any lessons to learn from disciplined convex programming, and Convex.jl in particular? It would be super nice if JuMP.jl had more syntactic sugar for all kinds of terms (when that is possible).

From Tips and tricks · JuMP, t is an upper bound on the L1-norm that is tight at optimality, so you’d do something like:

@objective(model, Min, f(x) + lambda * t)
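
Extending the earlier sketch with a second cone for the penalty term gives the full L1-penalized problem; the data, \lambda, and solver below are illustrative assumptions, with f(x) taken to be the fit term ||A x - b||_1:

using JuMP, HiGHS  # illustrative solver choice

A, b, lambda = rand(10, 3), rand(10), 0.5
model = Model(HiGHS.Optimizer)
@variable(model, x[1:3])
@variable(model, t)  # t >= ||A * x - b||_1, the data-fit term f(x)
@variable(model, u)  # u >= ||x||_1, the penalty term
@constraint(model, [t; A * x - b] in MOI.NormOneCone(1 + length(b)))
@constraint(model, [u; x] in MOI.NormOneCone(1 + length(x)))
@objective(model, Min, t + lambda * u)
optimize!(model)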

Any lessons to learn from disciplined convex programming, and Convex.jl in particular? It would be super nice if JuMP.jl had more syntactic sugar for all kinds of terms (when that is possible).

Convex builds an expression tree of the entire problem, with each node annotated with metadata like bounds, convexity, and monotonicity, and then it uses that information to deduce the convexity of the entire problem. Because of this, you’re only allowed to use the fixed set of “atoms” for which Convex.jl has defined that metadata.
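
For comparison, a sketch of the same penalized problem in Convex.jl, where norm is one of those predefined atoms; the data, \lambda, and the choice of SCS as the solver are illustrative assumptions:

using Convex, SCS  # Convex.jl builds the annotated expression tree and checks DCP rules

A, b, lambda = rand(10, 3), rand(10), 0.5
x = Variable(3)
problem = minimize(norm(A * x - b, 1) + lambda * norm(x, 1))
solve!(problem, SCS.Optimizer)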

It’s more of a tricky engineering challenge than a conceptual one. At the moment, the design of Convex.jl is orthogonal to that of JuMP.
