JuMP: Piecewise linear cost function

I am trying to solve an optimization problem with a piecewise linear cost function, which can be written as:
\ell^{max}(z) := \max_{\omega\in\Omega} \ell(z+\omega)
where the cost function is piecewise linear, e.g. \ell(z) = \max(2z, -10z).

My minimal working example looks as follows:

using Plots
using JuMP
using Ipopt

function lmax(z)
    m = Model(with_optimizer(Ipopt.Optimizer, print_level=0))
    @variable(m, -1.0 <= ω <= 1.0)
    l(x) = max.(2*x, -10*x)
    JuMP.register(m, :l, 1, l, autodiff=true)
    @NLobjective(m, Max, l(z+ω))
    optimize!(m)
    return objective_value(m)
end

x = -5:0.1:5
y = max.(2*x, -10*x)
ymax = lmax.(x)
plot(x, y, label="l")
scatter!(x, ymax, label="l_max optimized")

The result for this optimization differs from the expected result, as shown in this plot.

Should this work in general? If so, what did I do wrong? I have the feeling that I messed up some basic optimization principle, but I cannot figure out what.

How could I implement a piecewise linear cost function in Julia or JuMP otherwise?

Your problem is non-convex, so Ipopt isn’t guaranteed to find the global optimum.
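To see the non-convexity concretely: you are maximizing a convex piecewise-linear function over the interval z + [-1, 1], so the global optimum always sits at one of the two endpoints, while a local solver like Ipopt can converge to an interior stationary point instead. A sketch of the endpoint check for your specific \ell (the name lmax_exact is my own, not part of any API):

```julia
l(z) = max(2z, -10z)

# A convex function attains its maximum over an interval at an endpoint,
# so for ω ∈ [-1, 1] it suffices to compare the two endpoints of z + ω.
lmax_exact(z) = max(l(z - 1), l(z + 1))

lmax_exact(0.0)  # 10.0, from the -10z piece at z + ω = -1
```

Comparing this against your scatter plot should show exactly where Ipopt gets stuck.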

Generally, these kinds of problems are better solved using mixed-integer solvers. Take a look at

https://github.com/joehuchette/PiecewiseLinearOpt.jl
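As a rough sketch of the kind of formulation such packages build under the hood, here is a big-M epigraph model of your inner problem, with a binary variable selecting the active piece. This assumes the HiGHS solver is installed; the bound M = 100 is a hypothetical choice that is valid for z ∈ [-5, 5]:

```julia
using JuMP, HiGHS

# Maximize l(x) = max(2x, -10x) over x ∈ [z-1, z+1].
# Maximizing a convex piecewise-linear function is non-convex,
# so a binary variable b selects which piece bounds the epigraph
# variable t; the big-M term relaxes the other piece.
function lmax_milp(z; M = 100.0)
    m = Model(HiGHS.Optimizer)
    set_silent(m)
    @variable(m, z - 1 <= x <= z + 1)
    @variable(m, t)                          # epigraph variable: t = l(x) at the optimum
    @variable(m, b, Bin)                     # b == 0 selects the 2x piece, b == 1 the -10x piece
    @constraint(m, t <= 2x + M * b)          # binding when b == 0
    @constraint(m, t <= -10x + M * (1 - b))  # binding when b == 1
    @objective(m, Max, t)
    optimize!(m)
    return objective_value(m)
end

lmax_milp(0.0)  # 10.0: the -10x piece at x = -1
```

PiecewiseLinearOpt.jl automates this kind of construction (and offers tighter formulations than naive big-M), so in practice you would hand it the breakpoints rather than write the constraints yourself.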


In addition to being non-convex, it’s also non-smooth. Ipopt assumes that the objective and constraints are twice continuously differentiable: https://www.coin-or.org/Ipopt/documentation/node3.html#eq:obj. Ipopt may work in the presence of non-smoothness, but I’m not aware of any algorithmic guarantees.


Thanks for the clarification and the references. I will try a mixed-integer solver.