We can also employ a global optimizer for the same task; the code should read as follows:
import JuMP, MosekTools
CP = JuMP.Model(MosekTools.Optimizer)
JuMP.@variable(CP, t)
JuMP.@variable(CP, x >= 1e-6) # keep x bounded away from 0, since log(0) is undefined
# t >= x * log(x)  ⟺  [-t, x, 1] ∈ MOI.ExponentialCone
JuMP.@constraint(CP, [-t, x, 1] in JuMP.MOI.ExponentialCone())
JuMP.@objective(CP, Min, t) # minimize `x -> x * log(x)` over positive reals
JuMP.optimize!(CP)
JuMP.assert_is_solved_and_feasible(CP; allow_local = false)
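The result can be sanity-checked by hand, without the solver: setting the derivative of `x * log(x)` to zero gives x = 1/e, with minimum value -1/e ≈ -0.3679. A minimal solver-free check in plain Julia:

```julia
# d/dx (x * log(x)) = log(x) + 1 = 0  ⟹  x = 1/e
x_star = exp(-1.0)
f_star = x_star * log(x_star) # ≈ -1/e ≈ -0.3679
```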
A direct extension of this is to solve a (nonconvex) problem that has a “linear times convex” objective, e.g. the following nonconvex problem (NC):
\min_{x_1 > 0, x_2} \{x_1 x_2 \log x_1 \; | \; 0 \le x_2 \le 2 \}
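The relaxation used below rests on lifting the bilinear product: introducing X_{12} to play the role of x_1 x_2, for x_2 > 0 we have

x_1 x_2 \log x_1 \;=\; X_{12} \log \frac{X_{12}}{x_2},

so the epigraph t \ge X_{12} \log (X_{12}/x_2) is exactly the conic membership

(-t,\; X_{12},\; x_2) \in K_{\exp} := \{(a, b, c) \;|\; b\, e^{a/b} \le c,\; b > 0\}.

The relaxation then drops the nonconvex equation X_{12} = x_1 x_2 and retains only the valid inequality X_{12} \le 2 x_1, implied by x_2 \le 2.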
A convex relaxation (CR) that solves (NC) to global optimality can be coded as follows:
import JuMP, MosekTools
CR = JuMP.Model(MosekTools.Optimizer); JuMP.set_silent(CR);
JuMP.@variable(CR, t);
JuMP.@variable(CR, x[1:2]);
JuMP.set_lower_bound(x[1], 1e-6);
JuMP.set_lower_bound(x[2], 0); JuMP.set_upper_bound(x[2], 2);
JuMP.@variable(CR, X[eachindex(x), eachindex(x)], Symmetric);
# JuMP.set_lower_bound(X[1, 2], 0)
# t >= X12 * log(X12 / x2), where X12 = X[1, 2] plays the role of x1 * x2:
JuMP.@constraint(CR, [-t, X[1, 2], x[2]] in JuMP.MOI.ExponentialCone());
JuMP.@constraint(CR, X[1, 2] <= 2 * x[1]); # valid inequality, since x2 <= 2
JuMP.@objective(CR, Min, t);
JuMP.optimize!(CR)
JuMP.assert_is_solved_and_feasible(CR; allow_local = false)
x = JuMP.value.(x) # [0.3678829664527772, 2.0]
ub = x[2] * x[1] * log(x[1]) # -0.735758882309103
lb = JuMP.objective_bound(CR) # -0.735758881911808
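These numbers agree with the analytic optimum: x_1 \log x_1 attains -1/e at x_1 = 1/e, and since that value is negative, the largest admissible x_2 = 2 is optimal, giving -2/e ≈ -0.73576. A solver-free check:

```julia
# Analytic optimum of (NC): x1*log(x1) is minimized at x1 = 1/e (value -1/e);
# the factor x2 multiplies a negative number, so x2 = 2 is optimal.
x1_star = exp(-1.0)
x2_star = 2.0
obj_star = x2_star * x1_star * log(x1_star) # ≈ -2/e ≈ -0.73576
```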
In this example, we again observe that although the lower bound may become tighter after adding the SDP cut, the primal feasible solution can nonetheless deteriorate. It is therefore advantageous to collect primal candidate solutions along the way.
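A minimal sketch of such bookkeeping (the candidate points here are hypothetical; in practice they would come from the solutions of successive relaxations): evaluate the true objective of (NC) at every candidate and keep the incumbent with the smallest value.

```julia
f(x) = x[2] * x[1] * log(x[1]) # the original (nonconvex) objective of (NC)

# Keep the best (lowest-objective) primal candidate seen so far.
function best_candidate(candidates)
    best_x, best_ub = nothing, Inf
    for x in candidates
        ub = f(x)
        if ub < best_ub
            best_x, best_ub = x, ub
        end
    end
    return best_x, best_ub
end

# hypothetical candidates collected along the way
cands = ([0.5, 2.0], [0.3678829664527772, 2.0], [1.0, 1.0])
best_x, best_ub = best_candidate(cands)
```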
A related caveat about the function `x -> x * log(x)` is here.