The nonconvex problem (NC) is
\min_x \{\, L\, e^{x_2} \mid L := x_1 + x_2 + 1 \ge 0,\ x_1 x_2 \ge -1 \,\}
The objective is of the form "linear times convex", which makes it amenable to perspectification, while the feasible region is nonconvex. Although the quadratic constraint x_1 x_2 \ge -1 resembles a (rotated) second-order-cone constraint, it is not one: the product is indefinite and carries no sign restrictions on x_1, x_2, so the constraint set is nonconvex.
The key feasible point is (-1, 0). After perspectification, the term 0\, e^{\frac{0}{0}} appears there; by the standard closure convention for perspective functions it equals 0, which matches the objective value in (NC): L e^{x_2} = 0 \cdot e^0 = 0. We have thus obtained a primal feasible value of 0.
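To spell out the perspectification (a standard perspective-function argument, matching the cone constraint in the code below): for L > 0,
L e^{x_2} = L \exp\left( \frac{L x_2}{L} \right), \qquad L x_2 = (x_1 + x_2 + 1)\, x_2 = x_1 x_2 + x_2^2 + x_2 .
The map (u, L) \mapsto L e^{u/L} is the perspective of t \mapsto e^t and hence jointly convex for L > 0; replacing the bilinear terms x_1 x_2 and x_2^2 by entries X_{12} and X_{22} of a matrix variable X (relaxing X = x x^\top) makes u := X_{12} + X_{22} + x_2 linear. The closure at L = 0 is supplied by
\lim_{L \to 0^+} L e^{u/L} = 0 \quad (u \le 0),
which assigns the value 0 at (x_1, x_2) = (-1, 0), where u = 0.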
Next we give code for the convex relaxation (CR), which yields a valid dual bound that is also 0, thereby certifying the optimality of (-1, 0).
import JuMP, MosekTools
CR = JuMP.Model(MosekTools.Optimizer)
JuMP.set_silent(CR)
JuMP.@variable(CR, x[1:2])
JuMP.@variable(CR, X[eachindex(x), eachindex(x)], Symmetric);
JuMP.@variable(CR, r)
JuMP.@expression(CR, L1, sum(x) + 1); JuMP.@constraint(CR, L1 >= 0) # linear constraint L >= 0 from (NC)
JuMP.set_lower_bound(X[1, 2], -1) # X[1, 2] stands for x[1] * x[2]; relaxes the primal constraint x[1] * x[2] >= -1
JuMP.@constraint(CR, [X[1, 2] + X[2, 2] + x[2], L1, r] in JuMP.MOI.ExponentialCone()) # r >= L1 * exp((X[1,2] + X[2,2] + x[2]) / L1): perspectified objective, since L * x[2] = x[1]x[2] + x[2]^2 + x[2]
JuMP.@objective(CR, Min, r)
JuMP.optimize!(CR); JuMP.assert_is_solved_and_feasible(CR; allow_local = false)
JuMP.objective_bound(CR) # 0.
x = JuMP.value.(x) # [-1., 0.]
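As a solver-free sanity check (a sketch added here, not part of the original script; plain Julia, no JuMP needed), one can confirm that (-1, 0) is feasible for (NC) with objective value 0, and that the perspective term vanishes as L decreases to 0:

```julia
# Feasibility of (x1, x2) = (-1, 0) in (NC)
x1, x2 = -1.0, 0.0
L = x1 + x2 + 1            # L = 0 at this point
@assert L >= 0             # linear constraint holds
@assert x1 * x2 >= -1      # nonconvex quadratic constraint holds
@assert L * exp(x2) == 0.0 # objective value is exactly 0

# The perspective term t * exp(u / t) with u = t * x2 = 0 equals t, so it -> 0 as t -> 0+
for t in (1e-2, 1e-4, 1e-8)
    @assert t * exp(0.0 / t) == t
end
println("(-1, 0) is feasible for (NC) with objective value 0")
```

This only checks primal feasibility and the limit behavior; the dual bound of 0 still comes from solving (CR) above.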