SLOW_PROGRESS or NUMERICAL_ERROR: a simple but nontrivial SDP example, solved with LP relaxation

SDP solvers perform poorly on this simple test case, terminating with SLOW_PROGRESS or NUMERICAL_ERROR, so I propose to solve it via an LP relaxation.
The crux is to introduce a proper artificial bound.
After adding about 23 linear cuts, we converge to the global optimal value 0.0.
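
For reference, here is the conic formulation being relaxed. This is a minimal sketch, assuming Clarabel as the SDP solver (the solver choice is my assumption; any conic solver that reports SLOW_PROGRESS or NUMERICAL_ERROR fits the title):

import JuMP, Clarabel, LinearAlgebra
sdp = JuMP.Model(Clarabel.Optimizer)
JuMP.@variable(sdp, y); JuMP.@variable(sdp, x)
JuMP.@expression(sdp, X, LinearAlgebra.Symmetric([0 y; y x]))
JuMP.@constraint(sdp, X in JuMP.PSDCone()) # the constraint relaxed below
JuMP.@objective(sdp, Min, 2 * y)
JuMP.optimize!(sdp) # may stall with SLOW_PROGRESS or NUMERICAL_ERROR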

import JuMP, Gurobi, LinearAlgebra
function my_eigmin(A)
    valmin, vecmin = LinearAlgebra.eigen(A, 1:1)
    return first(valmin), vecmin[:, 1]
end
function optimise(model)
    JuMP.optimize!(model)
    JuMP.assert_is_solved_and_feasible(model; allow_local = false, dual = true) # throw unless globally optimal with a dual solution
end
ϵ = 1e-13 # tolerance on the smallest eigenvalue of a Symmetric matrix
ib = 9. # 🍏 pick an artificial initial bound
model = JuMP.Model(Gurobi.Optimizer) 
JuMP.@variable(model, -ib <= y <= ib); JuMP.@variable(model, -ib <= x <= ib)
JuMP.@expression(model, X, LinearAlgebra.Symmetric([0 y; y x]))
# JuMP.@constraint(model, X in JuMP.PSDCone()) # we replace this PSD constraint with linear cuts
JuMP.@objective(model, Min, 2 * y)
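# Note: X ⪰ 0 forces y = 0 (det(X) = -y^2 must be nonnegative) and x >= 0,
# so the global optimum of Min 2y is 0. Also X[1, 1] == 0 means X can never be
# positive definite: there is no strictly feasible (Slater) point, which
# presumably explains the SLOW_PROGRESS/NUMERICAL_ERROR of interior-point SDP solvers.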
for i in 0:typemax(Int) # iteratively tighten the LP relaxation
    optimise(model)
    lb = JuMP.objective_bound(model); @info "i = $i, lb = $lb"
    Xt = LinearAlgebra.Symmetric(JuMP.value.(X))
    valmin, t = my_eigmin(Xt)
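    # The cut t' * X * t >= 0 is valid, since X ⪰ 0 implies t' * X * t >= 0 for any
    # fixed t; the eigenvector t of the most negative eigenvalue of Xt yields the
    # most violated such cut at the current point (the violation equals valmin).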
    valmin > -ϵ && break # Xt is (numerically) PSD: stop cutting
    JuMP.@constraint(model, t' * X * t >= 0) # add the separating linear cut
end
# During the above for-loop, Gurobi reports only one (mild) warning:
# julia>  [ Info: i = 22, lb = -4.365443091908161e-6
# julia>  Warning for adding constraints: zero or small (< 1e-13) coefficients, ignored
# (i.e. some coefficients of that cut fall below Gurobi's 1e-13 threshold and are dropped)
optimise(model)
S = [
    -JuMP.dual(JuMP.UpperBoundRef(x)) -JuMP.dual(JuMP.UpperBoundRef(y)); 
    JuMP.dual(JuMP.LowerBoundRef(x))  JuMP.dual(JuMP.LowerBoundRef(y))
] # we observe that all four sensitivities are zero
# hence the artificial bounds are inactive and can be safely deleted
JuMP.delete_upper_bound(x); JuMP.delete_upper_bound(y) 
JuMP.delete_lower_bound(x); JuMP.delete_lower_bound(y) 
optimise(model) # then we reoptimize
lb = JuMP.objective_bound(model); @info "lb = $lb"
Xt = LinearAlgebra.Symmetric(JuMP.value.(X))
valmin, t = my_eigmin(Xt)
@assert valmin >= -ϵ # double check the primal feasibility
ub = 2 * Xt[1, 2]
# ✅ here lb = ub = 0.0
# ✅ The LP relaxation method converges to global optimality

Remark: tuning ib is nontrivial. For example, ib = 7.8 would be an improper choice, with which the cutting-plane loop cannot terminate quickly.
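
To probe this sensitivity concretely, one can wrap the cutting loop in a helper and count the cuts needed for different bounds. count_cuts below is a hypothetical helper of mine (reusing my_eigmin from above), sketched only to illustrate the experiment:

function count_cuts(ib; maxiter = 1000, ϵ = 1e-13) # hypothetical helper
    model = JuMP.Model(Gurobi.Optimizer)
    JuMP.set_silent(model)
    JuMP.@variable(model, -ib <= y <= ib); JuMP.@variable(model, -ib <= x <= ib)
    JuMP.@expression(model, X, LinearAlgebra.Symmetric([0 y; y x]))
    JuMP.@objective(model, Min, 2 * y)
    for i in 0:maxiter
        JuMP.optimize!(model)
        Xt = LinearAlgebra.Symmetric(JuMP.value.(X))
        valmin, t = my_eigmin(Xt)
        valmin > -ϵ && return i # converged after i cuts
        JuMP.@constraint(model, t' * X * t >= 0)
    end
    return -1 # no convergence within maxiter cuts: an improper ib
end
# e.g. count_cuts(9.0) converges in about 23 cuts, whereas count_cuts(7.8)
# is expected to hit the iteration limit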

There remain two issues to improve: the computation of the smallest eigenvalue (and its tolerance ϵ), and the need to re-wrap JuMP.value.(X) in LinearAlgebra.Symmetric at every call site.
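
On those two points, one possible direction is a variant of my_eigmin that symmetrizes its input internally, so callers need not wrap the value matrix themselves; this is a sketch of my own suggestion, not a tested fix:

function my_eigmin_robust(A::AbstractMatrix)
    S = LinearAlgebra.Symmetric((A + A') / 2) # symmetrize defensively
    F = LinearAlgebra.eigen(S, 1:1)           # compute only the smallest eigenpair
    return first(F.values), F.vectors[:, 1]
end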