I learned about `NormOneCone`, which can be used to add L1 constraints to a model. How do I minimize the L1 norm of a vector of residuals? I tried `@objective(model, Min, sum(abs.(x)))`, but it is not recognized by JuMP.jl.

I’m literally adding this to the documentation right now: [docs] add more tips and tricks for linear programs by odow · Pull Request #3144 · jump-dev/JuMP.jl · GitHub

```julia
@variable(model, t)
@constraint(model, [t; x] in MOI.NormOneCone(1 + length(x)))
@objective(model, Min, t)
```
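A complete, runnable sketch of this pattern, applied to a vector of residuals `A * x - b` (the data and the choice of HiGHS as the LP solver are my own placeholders, not from the thread):

```julia
using JuMP, HiGHS

# Hypothetical data: minimize the L1 norm of the residual A * x - b.
A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
b = [7.0, 8.0, 9.0]
m, n = size(A)

model = Model(HiGHS.Optimizer)
@variable(model, x[1:n])
# t is an epigraph variable bounding sum(abs.(A * x - b)) from above.
@variable(model, t)
@constraint(model, [t; A * x - b] in MOI.NormOneCone(1 + m))
@objective(model, Min, t)
optimize!(model)
```

JuMP's bridges reformulate the `NormOneCone` constraint into linear constraints, so a plain LP solver like HiGHS can handle it.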

I recall JuMP understanding this automatically a long time ago; is there a reason it doesn’t nowadays? It feels like a mechanical transformation that a mathematical modeling language could do for the user.

JuMP used to recognize `norm{x}` as a second-order cone, but we don’t do that anymore. I don’t think we ever supported `abs`.
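The transformation is indeed mechanical: introduce one nonnegative auxiliary variable per term and bound it from both sides. A hand-rolled sketch, equivalent to the `NormOneCone` formulation above (variable names are illustrative):

```julia
using JuMP

model = Model()
n = 3
@variable(model, x[1:n])
# One auxiliary variable per component, so that t[i] >= |x[i]|.
@variable(model, t[1:n] >= 0)
@constraint(model, t .>= x)
@constraint(model, t .>= -x)
# At an optimum, t[i] == |x[i]|, so this minimizes sum(abs.(x)).
@objective(model, Min, sum(t))
```

This is essentially what JuMP's bridge layer does when you pass a `NormOneCone` constraint to an LP solver.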

Recognizing and reformulating arbitrary nonlinear expressions is a tricky business. Currently, our main problem is that we don’t have a data structure to store `sum(abs.(x))`. But [Nonlinear] begin experiments with NonlinearExpr by odow · Pull Request #3106 · jump-dev/JuMP.jl · GitHub is a work in progress.

@odow, can you please confirm the syntax for minimizing f(x) + \lambda ||x||_1? The snippet of code you shared above does not include the L1 norm as a penalty term with penalty \lambda > 0.

Any lessons to learn from disciplined convex programming and Convex.jl in particular? It would be super nice if JuMP.jl had more sugar syntax for all kinds of terms (when that is possible).

From Tips and tricks · JuMP, `t` is the L1 norm, so you’d do something like:

```julia
@objective(model, Min, f(x) + lambda * t)
```
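Putting the two snippets together for a linear f(x) = c' * x with an L1 penalty (the data, the penalty value, and the variable bounds are placeholder assumptions to keep the LP bounded):

```julia
using JuMP, HiGHS

c, lambda, n = [1.0, -2.0, 3.0], 0.5, 3

model = Model(HiGHS.Optimizer)
@variable(model, -10 <= x[1:n] <= 10)
@variable(model, t)
# t bounds ||x||_1 from above via the epigraph reformulation.
@constraint(model, [t; x] in MOI.NormOneCone(1 + n))
# Minimize f(x) + lambda * ||x||_1 with f(x) = c' * x.
@objective(model, Min, c' * x + lambda * t)
optimize!(model)
```

Because the objective is being minimized and `lambda > 0`, the solver pushes `t` down to exactly `||x||_1` at the optimum, so the epigraph relaxation is tight.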

> Any lessons to learn from disciplined convex programming and Convex.jl in particular? It would be super nice if JuMP.jl had more sugar syntax for all kinds of terms (when that is possible).

Convex builds an expression tree of the entire problem, with each node in the tree annotated with metadata like bounds, convexity, and monotonicity, and then it uses that information to deduce convexity of the entire problem. Because of the metadata, you’re only allowed to use a fixed number of “atoms” that Convex.jl has defined the metadata for.
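For comparison, in Convex.jl the `norm(x, 1)` atom carries that convexity metadata, so the modeler can write the penalized problem directly. A sketch of a lasso-style problem (data and the choice of the SCS solver are my assumptions):

```julia
using Convex, SCS

A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
b = [7.0, 8.0, 9.0]
x = Variable(2)

# sumsquares and norm are DCP atoms, so Convex.jl can verify
# convexity of the whole objective before reformulating it.
problem = minimize(sumsquares(A * x - b) + 0.5 * norm(x, 1))
solve!(problem, SCS.Optimizer)
```

Convex.jl then reformulates the verified problem into conic form and hands it to the solver, which is exactly the step JuMP asks the user to write out by hand with `NormOneCone`.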

It’s more of a tricky engineering challenge than a tricky conceptual challenge. At the moment, the design of Convex.jl is orthogonal to that of JuMP.