Arbitrary/High-precision optimisation of NLP

I have solved a non-linear program with JuMP and Ipopt. I want to refine the solution, but am limited by Float64 precision.

Therefore, I tried MadNLP and ExaModels, as they say they support AbstractFloat (see here), which would let me use BigFloat and arbitrary precision. However, when I implemented this, I got an error. The error persists when I use Float128.

The error I get is

ERROR: AssertionError: is_supported(ipm_opt.linear_solver, T)
Stacktrace:
 [1] MadNLPSolver(nlp::ExaModel{…}; kwargs::@Kwargs{})
   @ MadNLP ~/.julia/packages/MadNLP/RwL3t/src/IPM/IPM.jl:121
 [2] MadNLPSolver
   @ ~/.julia/packages/MadNLP/RwL3t/src/IPM/IPM.jl:115 [inlined]
 [3] madnlp(model::ExaModel{…}; kwargs::@Kwargs{})
   @ MadNLP ~/.julia/packages/MadNLP/RwL3t/src/IPM/solver.jl:10
 [4] madnlp(model::ExaModel{Float128, Vector{…}, Nothing, ExaModels.Objective{…}, ExaModels.Constraint{…}})
   @ MadNLP ~/.julia/packages/MadNLP/RwL3t/src/IPM/solver.jl:9
 [5] top-level scope
   @ ~/TEST/NLPsimplified.jl:51
Some type information was truncated. Use `show(err)` to see complete types.

and the code is

using ExaModels, MadNLP, LinearAlgebra
using Quadmath

T = Float128

function delta_optimization_model(T; tuples::Vector{Tuple{T,T}})
    n = 12

    x0_vals = first.(tuples[1:n])
    y0_vals = last.(tuples[1:n])
    core = ExaCore(T)

    x = variable(core, 1:n; start=x0_vals)
    y = variable(core, 1:n; start=y0_vals)

    # all unordered pairs (i, j) with i < j
    IJ = [(i, j) for i in 1:n for j in i+1:n]

    # pairwise constraints on squared distances, with upper bound ucon = 0
    constraint(core,
        (x[i] - x[j])^2 + (y[i] - y[j])^2 - T(4.0) for (i, j) in IJ;
        ucon=zero(T)
    )

    objective(core,
        log((x[i] - x[j])^2 + (y[i] - y[j])^2) for (i, j) in IJ
    )

    return ExaModels.ExaModel(core)
end

raw_tuples = [
    (0.0, 0.0),
    (2.0, 0.0),
    (1.9318729445411233, 0.5175586209793144),
    (1.9318729445411233, -0.5175586209793144),
    (0.199822139399187, -0.4824413832242706),
    (0.199822139399187, 0.4824413832242706),
    (1.6139775239283088, 0.9318303545995787),
    (1.6139775239283088, -0.9318303545995787),
    (0.6139775198744277, -0.8002204506287891),
    (0.6139775198744277, 0.8002204506287891),
    (1.0962599551249452, 1.0),
    (1.0962599551249452, -1.0)
]

# T = BigFloat
# setprecision(T, 665)  
nlp = delta_optimization_model(T; tuples=[Float128.(y) for y in raw_tuples])
println(nlp)

results = madnlp(nlp)
println(results)

(Line 51 is results = madnlp(nlp).)

Why does this happen?

Note that for Float64 the solver does run, but it exits with "Problem has too few degrees of freedom", despite there being plenty of freedom and the initial point already being feasible.

You probably have more constraints than variables. I can’t answer about the rest :slight_smile:

I can explain the MadNLP issue simply: in order to use non-Float64/Float32 floating-point representations, the linear solver that MadNLP uses to solve the KKT system at each iteration needs to support the given floating-point representation. I know of no supported solvers (or any at all, for that matter) which support BigFloat, and I believe only the latest release of the ma27 solver in the HSL suite supports 128-bit floating-point arithmetic (@amontoison will know better).
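The assertion in the stack trace is exactly that check. As a quick, hedged way to see which of MadNLP's built-in solvers accept a given element type, you can call the same predicate yourself; this assumes MadNLP.is_supported and the solver type names below are callable as they appear in the trace and docs:

using MadNLP, Quadmath

# Query the same predicate that the failing assertion uses (assumption: it is
# callable directly with a solver type and an element type).
MadNLP.is_supported(MadNLP.UmfpackSolver, Float128)    # presumably false
MadNLP.is_supported(MadNLP.LapackCPUSolver, Float64)   # presumably true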

In general though, I suspect you will struggle to solve even moderately sized problems with Float128, because no consumer-grade hardware supports non-emulated 128-bit floating-point arithmetic, so every operation has to be emulated in software. As you might imagine, this is quite slow; my guess would be at least 10x slower per floating-point operation, but I have not benchmarked this on a modern CPU.
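If you want a rough feel for that overhead on your own machine, a minimal (unscientific) sketch with BenchmarkTools:

using BenchmarkTools, Quadmath

a64, b64 = rand(10_000), rand(10_000)
a128, b128 = Float128.(a64), Float128.(b64)

@btime $a64 .* $b64     # hardware Float64
@btime $a128 .* $b128   # software-emulated Float128; expect a large slowdown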

In particular, more equality constraints than variables.

Thank you both, @cvanaret and @apozharski. There are indeed more inequality constraints than variables, but is this really a problem? Many problems have more inequality constraints than variables, e.g. here. There are no equality constraints.

I know that it might be slower to solve, but I want to refine a solution I already obtained in Float64, so I hope that starting from this approximate solution will make convergence faster.

I was also confused by this. However, there are equality constraints: the constraint function does not default lcon and ucon to ±∞ as you expect, but defaults both to zero. So right now you have n(n-1)/2 = 66 equality constraints and 2n = 24 variables :slight_smile:
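If one-sided inequalities were the intent, a minimal sketch of the fix, assuming constraint accepts an explicit lcon keyword the same way your call already passes ucon:

# Make the pairwise constraints one-sided, (x[i] - x[j])^2 + (y[i] - y[j])^2 - 4 <= 0,
# instead of the default lcon == ucon == 0 equalities.
constraint(core,
    (x[i] - x[j])^2 + (y[i] - y[j])^2 - T(4.0) for (i, j) in IJ;
    lcon=T(-Inf), ucon=zero(T)
)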

As for solving using Float128: these lines in HSL.jl suggest that you can indeed use Float128 with the newer releases of the HSL solvers, so using e.g. the linear_solver=Ma27Solver option in MadNLP would be your best bet (see this README for usage instructions).
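A minimal sketch of what that call might look like, assuming MadNLPHSL is installed and your HSL build includes the Float128-capable MA27:

using MadNLP, MadNLPHSL

# Ask MadNLP to factorize the KKT systems with MA27 instead of its default solver.
results = madnlp(nlp; linear_solver=Ma27Solver)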

@arnerob I suggest trying MadNCL with MA27 as the linear solver in Float128.
MadNCL can easily handle more constraints than variables.

@apozharski Yes, the latest release of libHSL added support for Float128 in a few linear solvers.
I don't know of any sparse solver that supports BigFloat.
We should check the Cholesky in CliqueTrees.jl.

I think we can still go large scale with MA27 in Float128; it will just take more time, but the higher accuracy of the descent direction can lead to fewer IPM iterations.

If it’s a small enough problem that BigFloat is practical, can’t you just use a dense solver? LinearAlgebra.jl supports arbitrary precision in its dense LU solver etcetera.
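For illustration, a minimal check (with a random placeholder matrix) that the generic dense LU runs in BigFloat:

using LinearAlgebra

setprecision(BigFloat, 256)
A = rand(BigFloat, 6, 6) + 6I    # random, diagonally shifted placeholder matrix
b = rand(BigFloat, 6)

x = lu(A) \ b                    # generic dense LU, carried out in BigFloat
norm(A * x - b)                  # residual at roughly 77-digit precision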


Tulip can handle arbitrary types, including BigFloat. I recommend using Double64 (from the package DoubleFloats) instead of Float128, though, as it has about the same precision and is much faster.
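For reference, a minimal sketch of swapping the element type in the model above to Double64 (roughly 32 significant digits); note that MadNLP's linear-solver support check applies to Double64 just as it does to Float128:

using DoubleFloats

T = Double64
nlp = delta_optimization_model(T; tuples=[T.(y) for y in raw_tuples])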


MultiFloats may be even faster if you don't need to handle infinite values.

MultiFloats right now is in limbo between 2.x and 3.x, which is why I don't recommend it. DoubleFloats is slower but boringly stable.


I totally forgot, but LDLFactorizations.jl can be used in MadNLP.jl / MadIPM.jl / Tulip.jl and supports any precision.
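As a standalone illustration (independent of the MadNLP integration), LDLFactorizations.jl on a small sparse symmetric system in BigFloat; the tridiagonal matrix is just a placeholder:

using LDLFactorizations, SparseArrays, LinearAlgebra

setprecision(BigFloat, 256)
n = 10
A = spdiagm(-1 => fill(BigFloat(-1), n - 1),   # placeholder SPD tridiagonal matrix
             0 => fill(BigFloat(4), n),
             1 => fill(BigFloat(-1), n - 1))
b = rand(BigFloat, n)

F = ldl(A)               # LDLᵀ factorization carried out in BigFloat
x = F \ b
norm(A * x - b)          # residual at BigFloat precision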