Do you have a reproducible example? This is probably a bug in Clarabel. What happens if you use Ipopt?
Note that your objective contains very small coefficients. Clarabel probably encounters numerical issues and ignores the small values. What happens if you re-scale all terms in the objective by +1e8?
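A quick way to see why rescaling is harmless: multiplying the whole objective by a positive constant leaves the minimizer unchanged, while coefficients around 1e-8 sit near the absolute tolerances many solvers use by default and can even be absorbed outright in double-precision arithmetic. A minimal NumPy sketch (the tiny quadratic here is invented for illustration, not your actual model):

```python
import numpy as np

# Hypothetical tiny-coefficient quadratic: f(x) = 0.5 * x' P x + q' x
P = np.array([[2e-8, 0.0], [0.0, 4e-8]])
q = np.array([-2e-8, -8e-8])

# Unconstrained minimizer: solve P x = -q
x_orig = np.linalg.solve(P, -q)

# Re-scale every term in the objective by +1e8
scale = 1e8
x_scaled = np.linalg.solve(scale * P, -(scale * q))

# The minimizer is identical; only the objective value is scaled.
assert np.allclose(x_orig, x_scaled)

# Why tiny terms are risky: in float64, 1e-9 vanishes next to 1e8.
assert 1e8 + 1e-9 == 1e8
```

If the rescaled problem solves cleanly where the original did not, that points to scaling rather than a solver bug.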
Ipopt behaved the same as Gurobi, but I would like to know what Clarabel’s problem is, because I’m testing different solvers.
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************

This is Ipopt version 3.14.4, running with linear solver MUMPS 5.5.1.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:      721
Number of nonzeros in Lagrangian Hessian.............:   222912

Total number of variables............................:      721
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:      721
        inequality constraints with only lower bounds:      721
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  2.5200000e+03 0.00e+00 5.13e-01  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  2.5199054e+03 0.00e+00 3.54e-02  -1.7 3.03e-02    -  9.65e-01 1.00e+00f  1
  47  1.1950969e+02 0.00e+00 2.51e-14  -8.6 7.98e-01    -  1.00e+00 1.00e+00f  1
  48  1.1950969e+02 0.00e+00 9.09e-15  -9.0 3.75e-01    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 48

                                   (scaled)                 (unscaled)
Objective...............:   1.1950968677914057e+02    1.1950968677914057e+02
Dual infeasibility......:   9.0910352162909547e-15    9.0910352162909547e-15
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   3.2345284045149366e-09    3.2345284045149366e-09
Overall NLP error.......:   3.2345284045149366e-09    3.2345284045149366e-09

Number of objective function evaluations             = 49
Number of objective gradient evaluations             = 49
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 49
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 1
Number of Lagrangian Hessian evaluations             = 1
Total seconds in IPOPT                               = 2.797
When optimizing with Clarabel, it does not show the value of the objective function, which other solvers generally report at the last iteration. It was necessary to use a
Is the issue here that you want the Clarabel solver to report the final objective value in the output, or that the solver is not reporting the same optimal value as some other solver?
For the first case: you are right that Clarabel does not print the final objective. If the problem solves, though, the objective will be the final entry in the "pcost" column (i.e. last row, first entry after the iteration number). We could add it to the output as well if you file a feature request on GitHub.
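If all you have is the printed log, the final objective can be recovered from the last row of the "pcost" column. A throwaway parser sketch (the log snippet below is invented for illustration and is not real Clarabel output, but pcost is the first entry after the iteration number, as described above):

```python
# Invented Clarabel-style iteration log; real output may have more columns
# and different spacing, but the data rows start with an integer iteration
# number followed by pcost.
log = """\
iter    pcost        dcost       gap       pres      dres
  0   2.5200e+03   1.1000e+03  1.42e+03  0.00e+00  5.13e-01
  1   8.4000e+02   7.9000e+02  5.00e+01  0.00e+00  3.54e-02
  2   1.1951e+02   1.1951e+02  1.00e-06  0.00e+00  9.09e-15
"""

def last_pcost(log_text: str) -> float:
    """Return the pcost entry from the last iteration row of the log."""
    rows = [line.split() for line in log_text.splitlines()
            if line.split() and line.split()[0].isdigit()]
    return float(rows[-1][1])  # pcost = first entry after the iter number

print(last_pcost(log))  # 119.51
```

Programmatically, of course, you would query the objective from the solver's solution object rather than scraping the log; this is only for reading a pasted log like the one above.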
For the second case: I don't know of any other examples where the solver reports a high-accuracy solution but arrives at a totally different optimal value than some other solver. The solver does not ignore or zero out small entries in the objective.
If you have a reproducible example then we would appreciate you filing it as a bug report.