Resolving an ALMOST_OPTIMAL solution in Clarabel (Tulip does well)

I have a (for the moment) linear minimisation problem, but there is a related non-linear problem I wish to solve.
It's a branch-and-bound problem, and the range of values is quite plausibly going to cause numerical problems, so I am working in BigFloat.
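
For context, the model is built along these lines (a minimal sketch with a placeholder variable and objective, assuming Tulip's parametric Optimizer type as its docs describe for arbitrary-precision use):

    using JuMP, Tulip

    # Minimal sketch of a BigFloat LP via JuMP's GenericModel.
    setprecision(BigFloat, 256)   # Julia's default BigFloat precision
    model = GenericModel{BigFloat}(Tulip.Optimizer{BigFloat})
    @variable(model, t >= 0)      # placeholder variable
    @objective(model, Min, t)     # placeholder objective
    optimize!(model)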

Using Clarabel I obtained (values rounded to Float32 for posting):
status: ALMOST_OPTIMAL
objective: t = 3.428384f-24
Empirically, the largest errors are -6.9373f-18 and 4.34527f-18.

Since this is a linear problem, I also tried Tulip, obtaining:
status: OPTIMAL
objective: t = 4.954849f-18
Empirically, the largest positive and negative errors are not identical, but both match the objective to 4 significant figures.

Any good suggestions on how to investigate or improve this?

thanks,

Aside: I noticed that Clarabel and Tulip use different Julia implementations of the same paper (by Timothy A. Davis):
oxfordcontrol/QDLDL.jl (a free LDL factorisation routine) vs JuliaSmoothOptimizers/LDLFactorizations.jl (factorization of symmetric matrices).
Is changing the decomposition library likely to make a difference?

Clarabel supports using direct_solve_method = :cholmod as well, so you could try that instead. I doubt that using a different factorisation method is going to make a difference in any consistent way, though.
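
For reference, selecting it through JuMP looks something like this (a minimal sketch assuming a plain Float64 model; as it turns out below, CHOLMOD does not apply to BigFloat):

    using JuMP, Clarabel

    # Sketch: select CHOLMOD as Clarabel's direct KKT solver.
    # CHOLMOD only supports Float64 (or ComplexF64) matrices.
    model = Model(Clarabel.Optimizer)
    set_attribute(model, "direct_solve_method", :cholmod)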

It’s possible that the two solvers report different termination statuses because of different tolerances, or slightly different stopping criteria. It’s notable in particular that your optimal objective value is so close to zero – perhaps there is a difference in the way that absolute and relative stopping tests are implemented.
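
To illustrate the point (schematic only – not the exact criterion either solver uses): a relative gap test often normalises by something like max(1, |objective|), so with an objective around 1e-24 it degenerates into an absolute test, while a test that normalises by |objective| alone behaves completely differently:

    # Schematic gap tests, not either solver's actual stopping criteria.
    gap, obj, tol = 1e-20, 3.4e-24, 1e-10
    gap / max(1.0, abs(obj)) <= tol   # true:  behaves like an absolute test here
    gap / abs(obj) <= tol             # false: the gap is huge relative to |obj|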


I’m working in BigFloat (because some values get so small), and if I try :cholmod it gives an error message about requiring Float64 (or ComplexF64).
With Tulip, I was just using the default tolerances – according to the docs this is sqrt(eps).
For Clarabel, I noticed it was hitting the absolute tolerances with such a small objective, so I set the absolute tolerances to zero and the relative tolerances to a small multiple of eps. I’m quite happy to admit my approach was brute force & ignorance, and I suspect the ignorance is a problem.

    using JuMP, Clarabel

    MT = BigFloat
    model = GenericModel{MT}(Clarabel.Optimizer{MT})
    tolerance = eps(MT) * 2                                        # default tolerance is very sloppy
    set_attribute(model, "tol_infeas_rel", tolerance)              # Clarabel relative infeasibility tolerance
    set_attribute(model, "tol_infeas_abs", zero(MT))               # zero the absolute version
    set_attribute(model, "tol_feas", tolerance)
    set_attribute(model, "tol_gap_abs", zero(MT))                  # zero the absolute gap tolerance
    set_attribute(model, "tol_gap_rel", tolerance)
    set_attribute(model, "iterative_refinement_abstol", zero(MT))  # zero the absolute refinement tolerance
    set_attribute(model, "iterative_refinement_reltol", tolerance)

Sorry – I misunderstood. Yes, if you are using BigFloat then the default linear solver is the only supported option that will work.