SLOW_PROGRESS or NUMERICAL_ERROR: a simple but nontrivial SDP example, solved with LP relaxation

The post above applies an LP relaxation to the SDP cut. If we reformulate the SDP constraint as `y^2 <= 0`, we can also approach the problem as follows:
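(A minimal sketch of why such a scalar reformulation can be exact; the 2×2 matrix below is an illustrative assumption, not necessarily the matrix from the original post. Its diagonal entries are nonnegative, so positive semidefiniteness reduces to a nonnegative determinant.)

$$\begin{bmatrix} 1 & y \\ y & 0 \end{bmatrix} \succeq 0 \iff \det = -y^2 \ge 0 \iff y^2 \le 0 \iff y = 0$$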

import JuMP, Gurobi
function optimise(model)
    JuMP.optimize!(model)
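    # require a proven (not merely local) OPTIMAL solution with dual values available; error otherwise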
    JuMP.assert_is_solved_and_feasible(model; allow_local = false, dual = true)
end

# The primal problem is
# Min   2y
# s.t.  y^2 <= 0
# whose theoretical OPTIMAL value is 0 (attained only at y = 0).
# But if we tackle it with an LP relaxation (cutting planes),
# we only converge to an approximately feasible point `y = -0.000213623046875`,
# whose practical OPTIMAL value is about `-0.0004`.

# 🍏 These two are hyperparameters
ib = 7 # initial artificial bound
COT = 1e-7 # cut-off tolerance: a moderate value, given Gurobi's numerical behavior

model = JuMP.Model(Gurobi.Optimizer) # we work with this LP surrogate of the primal problem
JuMP.@variable(model, -ib <= y <= ib)
JuMP.@objective(model, Min, 2 * y)
while true
    optimise(model)
    yt = JuMP.value(y) # a trial point that may be infeasible for the primal problem
    if yt^2 >= 0 + COT # the cut to be added can strictly cut off the current trial point `yt`
        JuMP.@constraint(model, yt^2 + 2 * yt * (y - yt) <= 0) # a gradient cut, valid by the first-order characterization of the differentiable convex function y^2
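        # Derivation of this cut (an explanatory note): for f(y) = y^2, convexity gives
        # f(y) >= f(yt) + f'(yt) * (y - yt) = yt^2 + 2 * yt * (y - yt), so the linear
        # inequality above is implied by y^2 <= 0 and is therefore a valid
        # outer-approximation cut; for yt < 0 it simplifies to y >= yt / 2.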
    else
        break
    end
end # the loop terminates cleanly (no errors or warnings)
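# Why the loop stops where it does (an explanatory note): the first trial point is
# y = -ib = -7, and each cut `y >= yt / 2` halves the trial point, so after k cuts we
# have yt = -7 / 2^k in exact arithmetic; the loop breaks once yt^2 < COT, which first
# happens at yt = -7 / 2^15 = -0.000213623046875, matching the value quoted above.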
optimise(model)
yt = JuMP.value(y)
@assert yt^2 <= COT # the constraint y^2 <= 0 holds up to the tolerance `COT`
@assert JuMP.dual(JuMP.UpperBoundRef(y)) == 0.0
@assert JuMP.dual(JuMP.LowerBoundRef(y)) == 0.0
# Since both duals are zero, the artificial bounds are not active at the optimum, so we can safely delete them:
JuMP.delete_upper_bound(y)
JuMP.delete_lower_bound(y)
optimise(model) # now `model` is a genuine LP relaxation of the primal problem (no artificial bounds remain)
yt = JuMP.value(y)
@assert yt^2 <= COT
lb = JuMP.objective_bound(model) # -0.00042724609375, a valid lower bound from the LP relaxation
ub = 2 * yt # -0.00042724609375, the objective value at the COT-feasible point `yt`
# lb == ub, so global optimality is attained up to the feasibility tolerance `COT`
# (recall that the exact optimal value of the primal problem is 0)
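# A final sanity check one might add (my own sketch, based on the values reported above):
@assert lb <= 0.0      # the relaxation bound cannot exceed the true optimal value 0
@assert ub - lb <= COT # the bound gap is closed up to the tolerance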