Hi all,
I have an optimization model where I need to replace a concave cost term
f(x) = x^{0.8} for x \in [0, 10^5] with a piecewise linear (PWL) form in JuMP. The model is a minimization, and I’m considering using PiecewiseLinearOpt.jl.
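For context, here is the basic pattern I have so far (a minimal sketch following the package README; the uniform 33-point grid is a placeholder, and a MIP solver still needs to be attached):

```julia
using JuMP, PiecewiseLinearOpt

model = Model()                               # attach a MIP solver before optimizing
@variable(model, 0 <= x <= 1e5)

d  = collect(range(0.0, 1e5; length = 33))    # placeholder uniform breakpoint grid
fd = d .^ 0.8                                 # f evaluated at the breakpoints

# z is a new variable constrained to equal the PWL interpolant of f at x
z = piecewiselinear(model, x, d, fd; method = :Logarithmic)
@objective(model, Min, z)
```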
My questions are:
- Among the package’s formulations (`:SOS2`, `:Logarithmic`, …), which would you recommend on large instances? My current guess is `:DisaggLogarithmic` or `:Logarithmic`, since they give strong relaxations with O(\log K) binaries; does that match your experience?
- For a concave f in a minimization, should I feed PiecewiseLinearOpt an upper envelope (a tangent-based over-estimator) as the (d, f_d) breakpoints? Are there pitfalls near x = 0, where the slope f'(x) = 0.8 x^{-0.2} blows up? Should I start the grid from a small \varepsilon > 0? (My current breakpoint construction is the first sketch below.)
- I also have a binary decision y \in \{0,1\} with the logic “if y = 0, then x = 0; if y = 1, then x \in [x_{\min}, x_{\max}] and the value of f follows the PWL”. Should I model this on/off disjunction myself with standard JuMP constraints (e.g. big-M, as in the second sketch below)?
- In the literature, I’ve seen a hull reformulation: split the domain into intervals [{\rm LO}_t, {\rm UP}_t], introduce segment binaries z_t and disaggregated variables x_t, c_t, and enforce
  \sum_t z_t = y, \quad \sum_t x_t = x, \quad {\rm LO}_t z_t \le x_t \le {\rm UP}_t z_t, \quad c_t = a_t z_t + b_t x_t, \quad \text{objective value} = \sum_t c_t.
  (So you “choose one segment” via z_t, and within that segment the objective is linear; the intercept a_t is multiplied by z_t so that c_t = 0 when the segment is off.) Does PiecewiseLinearOpt provide a similar method? My manual JuMP attempt is the last sketch below.
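For concreteness, here are the sketches referenced above. First, the breakpoint construction for the \varepsilon question; the helper, \varepsilon = 10^{-3}, K = 32, and the geometric spacing are all my own choices, not anything from the package:

```julia
# Hypothetical helper (mine, not from PiecewiseLinearOpt): geometric spacing
# from a small ε upward, since f'(x) = 0.8 x^{-0.2} is unbounded as x → 0⁺
# and a uniform grid wastes points where f is nearly linear.
f(x) = x^0.8

function concave_breakpoints(ε, xmax, K)
    d = [0.0; exp.(range(log(ε), log(xmax); length = K))]  # keep x = 0 itself
    return d, f.(d)
end

d, fd = concave_breakpoints(1e-3, 1e5, 32)   # ε and K are guesses to tune
# Chords through these points under-estimate a concave f; tangents at the same
# points would give the over-estimator I ask about above.
```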
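Second, the hand-rolled big-M version of the on/off logic I would write if the answer to the third question is “do it yourself” (the bounds and grid are placeholders):

```julia
using JuMP, PiecewiseLinearOpt

model = Model()                               # attach a MIP solver before optimizing
xmin, xmax = 10.0, 1e5                        # placeholder bounds
@variable(model, y, Bin)
@variable(model, 0 <= x <= xmax)

@constraint(model, x <= xmax * y)             # y = 0  ⇒  x = 0
@constraint(model, x >= xmin * y)             # y = 1  ⇒  x ≥ x_min

# With 0 as the first breakpoint, the interpolant is 0 at x = 0, so the cost
# switches off together with y.
d = [0.0; exp.(range(log(1e-3), log(xmax); length = 32))]
z = piecewiselinear(model, x, d, d .^ 0.8; method = :Logarithmic)
@objective(model, Min, z)
```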
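Finally, my manual attempt at the hull reformulation from the last bullet, in case it clarifies what I mean (the segment bounds are placeholders; a_t, b_t are chord coefficients I compute myself):

```julia
using JuMP

f(x) = x^0.8
LO = [0.0, 1e2, 1e4]                          # placeholder segment lower bounds
UP = [1e2, 1e4, 1e5]                          # placeholder segment upper bounds
T  = length(LO)
b  = (f.(UP) .- f.(LO)) ./ (UP .- LO)         # chord slopes per segment
a  = f.(LO) .- b .* LO                        # chord intercepts per segment

model = Model()
@variable(model, y, Bin)
@variable(model, z[1:T], Bin)                 # segment selectors
@variable(model, xt[1:T] >= 0)                # disaggregated copies of x
@variable(model, 0 <= x <= UP[end])

@constraint(model, sum(z) == y)               # one active segment iff y = 1
@constraint(model, sum(xt) == x)
@constraint(model, [t = 1:T], LO[t] * z[t] <= xt[t])
@constraint(model, [t = 1:T], xt[t] <= UP[t] * z[t])

# c_t = a_t z_t + b_t x_t: off segments contribute zero cost
@objective(model, Min, sum(a[t] * z[t] + b[t] * xt[t] for t in 1:T))
```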
Any pointers, examples, or best-practice snippets would be much appreciated. Thanks!