Best way to model a piecewise linearization for `f(x) = x^0.8` on `[0, 1e5]` with PiecewiseLinearOpt.jl?

Hi all,

I have an optimization model where I need to replace a concave cost term
f(x) = x^{0.8} for x \in [0, 10^5] with a piecewise linear (PWL) form in JuMP. The model is a minimization, and I’m considering using PiecewiseLinearOpt.jl.

My questions are:

  1. Among the package’s formulations (:SOS2, :Logarithmic, …), which would you recommend for large instances? My current guess is :DisaggLogarithmic or :Logarithmic, since they give strong relaxations with O(\log K) binaries for K segments; does that match your experience?

  2. For a concave f in a minimization, should I feed PiecewiseLinearOpt an upper envelope (a tangent-based over-estimator) as the (d, f_d) breakpoint data? Are there pitfalls near x = 0, where the slope of x^{0.8} blows up? Should I start the breakpoints from a small \varepsilon > 0? (A sketch of what I have in mind appears right after this list.)

  3. I also have a binary decision y \in \{0,1\} with the logic “if y = 0, then x = 0; if y = 1, then x \in [x_{\min}, x_{\max}] and the value of f follows the PWL”. Should I model this off/on disjunction myself with standard JuMP constraints?

  4. In the literature, I’ve seen a hull reformulation: split the domain into intervals [{\rm LO}_t, {\rm UP}_t], introduce segment binaries z_t and disaggregated variables x_t, c_t, and enforce

    \sum_t z_t = y,\quad \sum_t x_t = x,\quad {\rm LO}_t\, z_t \le x_t \le {\rm UP}_t\, z_t,\quad c_t = a_t z_t + b_t x_t,\quad \text{objective contribution} = \sum_t c_t.

    (So you “choose one segment” via z_t, and within that segment the objective function is linear.)
    Does PiecewiseLinearOpt provide a similar method? (A plain-JuMP sketch of what I mean follows below.)
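For concreteness, here is the kind of call I have in mind, as a minimal sketch: HiGHS, the breakpoint count, and \varepsilon = 10^{-2} are placeholders, and `d`, `fd` are the (d, f_d) data from question 2.

```julia
using JuMP, HiGHS, PiecewiseLinearOpt

model = Model(HiGHS.Optimizer)
@variable(model, 0 <= x <= 1e5)
# Geometric spacing puts more breakpoints near 0, where x^0.8 curves most.
ε = 1e-2                       # placeholder; tune to your accuracy needs
d = [0.0; exp10.(range(log10(ε), 5; length = 30))]
fd = d .^ 0.8                  # interpolation values (a chord-based under-estimator)
# method can be :Logarithmic, :DisaggLogarithmic, :SOS2, :Incremental, ...
fx = piecewiselinear(model, x, d, fd; method = :Logarithmic)
@objective(model, Min, fx)     # plus the rest of the model
```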

Any pointers, examples, or best-practice snippets would be much appreciated. Thanks!
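To make questions 3 and 4 concrete, here is a minimal plain-JuMP sketch of the hull reformulation I have in mind; it also encodes the on/off logic from question 3. All names are hypothetical, and I assume the segment data `LO`, `UP`, `a`, `b` has already been computed from the breakpoints:

```julia
using JuMP

# Adds the hull PWL cost of x over segments [LO[t], UP[t]] with affine
# pieces a[t] + b[t] * x, gated by the binary y; returns the cost expression.
function hull_pwl_cost(model, x, y, LO, UP, a, b)
    T = length(LO)
    z = @variable(model, [1:T], Bin)               # z[t] = 1 iff segment t is active
    xt = @variable(model, [1:T], lower_bound = 0)  # disaggregated copies of x
    @constraint(model, sum(z) == y)    # y = 0 forces every z[t] = 0, hence x = 0
    @constraint(model, sum(xt) == x)
    @constraint(model, [t in 1:T], LO[t] * z[t] <= xt[t])
    @constraint(model, [t in 1:T], xt[t] <= UP[t] * z[t])
    # The intercept a[t] is multiplied by z[t], so inactive segments contribute 0.
    return @expression(model, sum(a[t] * z[t] + b[t] * xt[t] for t in 1:T))
end
```

Here LO[1] would play the role of x_{\min} from question 3.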

```julia
using JuMP, HiGHS

function piecewise_linear_sin(x̄)
    f(x) = sin(x)
    # Tip: try changing the number of points in x̂.
    x̂ = range(; start = 0, stop = 2π, length = 7)
    ŷ = f.(x̂)
    n = length(x̂)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, 0 <= x <= 2π)
    @variable(model, y)
    @variable(model, 0 <= λ[1:n] <= 1)
    @constraints(model, begin
        x == sum(λ[i] * x̂[i] for i in 1:n)  # x is a convex combination of the breakpoints
        y == sum(λ[i] * ŷ[i] for i in 1:n)  # y interpolates the function values
        sum(λ) == 1
        λ in SOS2()  # at most two λ[i] are nonzero, and they must be adjacent
    end)
    @constraint(model, x == x̄)  # a trivial constraint, just for testing
    optimize!(model)
    assert_is_solved_and_feasible(model)
    return value(y)
end
```
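For example, x̄ = 1.0 falls between the breakpoints 0 and π/3, so the function returns the chord value there; sin is concave on [0, π], so the interpolation under-estimates it, which is the same effect question 2 asks about for x^{0.8}:

```julia
piecewise_linear_sin(1.0)  # ≈ 0.82699, the chord value between x̂ = 0 and x̂ = π/3
sin(1.0)                   # ≈ 0.84147, the true value lies above the chord
```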

Did you find answers to any of your questions?

Yes. I’ve tried all of the methods mentioned above, and my conclusion is that they all work; the only real difference is solve time.

It’s hard to say which method is actually “better” (the hull reformulation, the PiecewiseLinearOpt formulations, …), because none of them consistently wins across different problems. Even on the same problem, changing the parameters can flip which one performs best. The only thing I’ve reliably observed so far is that when the breakpoints in the piecewise linearization are strictly increasing, the :Incremental formulation can be a bit faster than the other PiecewiseLinearOpt options.
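For anyone repeating this kind of benchmark: switching formulations in PiecewiseLinearOpt is a one-keyword change, so trying all of them is cheap. A sketch, reusing the hypothetical `(d, fd)` breakpoint data from the question:

```julia
# Same model and data; only the `method` keyword changes between runs.
fx = piecewiselinear(model, x, d, fd; method = :Incremental)
```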

I’d also like to thank @ohmsweetohm1 for the earlier reply.
