Arbitrary/high-precision optimisation of an NLP

I have solved a non-linear program with JuMP and Ipopt. I want to refine the solution further, but am limited by Float64 precision.

Therefore, I tried MadNLP and ExaModels, since they state that they support AbstractFloat (see here), which would let me use BigFloat and hence arbitrary precision. However, my implementation gives me an error, and the same error persists when I use Float128.
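For reference, this is the arbitrary-precision pattern I have in mind; the precision of BigFloat is set globally in Julia, and ExaCore takes the element type directly:

using ExaModels

setprecision(BigFloat, 665)  # global BigFloat mantissa precision, in bits
core = ExaCore(BigFloat)     # all model data is then stored as BigFloat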

The error I get is

ERROR: AssertionError: is_supported(ipm_opt.linear_solver, T)
Stacktrace:
 [1] MadNLPSolver(nlp::ExaModel{…}; kwargs::@Kwargs{})
   @ MadNLP ~/.julia/packages/MadNLP/RwL3t/src/IPM/IPM.jl:121
 [2] MadNLPSolver
   @ ~/.julia/packages/MadNLP/RwL3t/src/IPM/IPM.jl:115 [inlined]
 [3] madnlp(model::ExaModel{…}; kwargs::@Kwargs{})
   @ MadNLP ~/.julia/packages/MadNLP/RwL3t/src/IPM/solver.jl:10
 [4] madnlp(model::ExaModel{Float128, Vector{…}, Nothing, ExaModels.Objective{…}, ExaModels.Constraint{…}})
   @ MadNLP ~/.julia/packages/MadNLP/RwL3t/src/IPM/solver.jl:9
 [5] top-level scope
   @ ~/TEST/NLPsimplified.jl:51
Some type information was truncated. Use `show(err)` to see complete types.

and the code is

using ExaModels, MadNLP, LinearAlgebra
using Quadmath

T = Float128

function delta_optimization_model(T; tuples::Vector{Tuple{T,T}})
    n = 12  # number of points

    x0_vals = first.(tuples[1:n])  # initial x coordinates
    y0_vals = last.(tuples[1:n])   # initial y coordinates
    core = ExaCore(T)              # model backend with element type T

    x = variable(core, 1:n; start=x0_vals)
    y = variable(core, 1:n; start=y0_vals)

    IJ = [(i, j) for i in 1:n for j in i+1:n]  # all unordered pairs with i < j

    # squared pairwise distance minus 4, with an upper bound of zero
    constraint(core,
        (x[i] - x[j])^2 + (y[i] - y[j])^2 - T(4.0) for (i, j) in IJ;
        ucon=zero(T)
    )

    # sum of the logs of the squared pairwise distances
    objective(core,
        log((x[i] - x[j])^2 + (y[i] - y[j])^2) for (i, j) in IJ
    )

    return ExaModels.ExaModel(core)
end

raw_tuples = [
    (0.0, 0.0),
    (2.0, 0.0),
    (1.9318729445411233, 0.5175586209793144),
    (1.9318729445411233, -0.5175586209793144),
    (0.199822139399187, -0.4824413832242706),
    (0.199822139399187, 0.4824413832242706),
    (1.6139775239283088, 0.9318303545995787),
    (1.6139775239283088, -0.9318303545995787),
    (0.6139775198744277, -0.8002204506287891),
    (0.6139775198744277, 0.8002204506287891),
    (1.0962599551249452, 1.0),
    (1.0962599551249452, -1.0)
]

# T = BigFloat
# setprecision(T, 665)
nlp = delta_optimization_model(T; tuples=[Float128.(y) for y in raw_tuples])
println(nlp)

results = madnlp(nlp)
println(results)

(line 51 is results = madnlp(nlp))
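Checking the failing assertion by hand seems to confirm that the linear solver is the issue. Assuming the default linear solver here is MadNLP's UmfpackSolver (my reading of the stack trace, not something I have verified):

using MadNLP, Quadmath

# Assumption: UmfpackSolver is the default linear solver MadNLP picked;
# these calls mirror the assertion that fails in IPM.jl.
MadNLP.is_supported(MadNLP.UmfpackSolver, Float128)  # false here, hence the error
MadNLP.is_supported(MadNLP.UmfpackSolver, Float64)   # true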

Why does this happen?

Note that for Float64 the solver does run, but it exits with "Problem has too few degrees of freedom", despite the problem seemingly having plenty of freedom and the initial point already being feasible.

You probably have more constraints than variables. I can’t answer about the rest 🙂
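To spell that out: with n = 12 the model has 24 variables but binomial(12, 2) = 66 pairwise constraints, and since (as far as I know) ExaModels defaults lcon to zero as well, passing only ucon=zero(T) turns each of them into an equality constraint. A quick count:

n = 12
num_vars = 2n              # x and y coordinates
num_cons = binomial(n, 2)  # one constraint per pair (i, j) with i < j
println((num_vars, num_cons))  # prints (24, 66)

If the constraints were instead meant as one-sided upper bounds (squared distance at most 4), lcon would have to be widened explicitly, along these lines (untested sketch):

# Untested sketch: make the lower bound -Inf so the constraint is an
# inequality rather than the default lcon == ucon == 0 equality.
constraint(core,
    (x[i] - x[j])^2 + (y[i] - y[j])^2 - T(4.0) for (i, j) in IJ;
    lcon=T(-Inf), ucon=zero(T)
)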