Solver attributes and set_optimizer with ParametricOptInterface.jl and JuMP.jl

Hi,

I could really use ParametricOptInterface.jl (POI) for a library I’m developing (I’ll make an announcement at some point), but I’ve run into a few issues.

  1. I don’t know how to provide solver-specific attributes, and there isn’t any documentation on how to do so in either repository.
  2. It appears that a solver wrapped in the POI optimizer type cannot be attached with set_optimizer. My problems are quite varied and have many options users can tweak, so it makes sense to build the model first and attach the solver later, but this seems impossible with POI.
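
For reference, what I was hoping issue 1 would look like is something along these lines. This is only a sketch: I have not verified that `optimizer_with_attributes` plays nicely with POI, and the attribute name is HiGHS-specific.

```julia
using JuMP, HiGHS
import ParametricOptInterface as POI

# Sketch: bundle solver-specific attributes with a POI-wrapped optimizer via
# `optimizer_with_attributes`, so they can be attached later in one call.
# Note the zero-argument constructor rather than an optimizer instance.
poi_highs = optimizer_with_attributes(() -> POI.Optimizer(HiGHS.Optimizer()),
                                      "log_to_console" => false)
model = Model()
# Attach the solver (and its attributes) after the model is built.
set_optimizer(model, poi_highs)
```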

Is there a way to do either of these things? Here are the function and data structure that iterate over solvers. Perhaps there are changes I can make to support POI.

struct Solver{T1 <: Union{Symbol, <:AbstractString}, T2,
              T3 <: Union{Nothing, <:AbstractDict}, T4 <: NamedTuple, T5 <: Bool}
    name::T1
    solver::T2
    settings::T3
    check_sol::T4
    add_bridges::T5
end
function Solver(; name::Union{Symbol, <:AbstractString} = "", solver::Any = nothing,
                settings::Union{Nothing, <:AbstractDict} = nothing,
                check_sol::NamedTuple = (;), add_bridges::Bool = true)
    return Solver{typeof(name), typeof(solver), typeof(settings), typeof(check_sol),
                  typeof(add_bridges)}(name, solver, settings, check_sol, add_bridges)
end
Base.iterate(S::Solver, state = 1) = state > 1 ? nothing : (S, state + 1)
struct JuMPResult{T1 <: AbstractDict, T2 <: Bool}
    trials::T1
    success::T2
end
function JuMPResult(; trials::AbstractDict, success::Bool)
    if !success
        @warn("Model could not be solved satisfactorily.\n$trials")
    end
    return JuMPResult{typeof(trials), typeof(success)}(trials, success)
end
function optimise_JuMP_model!(model::JuMP.Model,
                              slv::Union{<:Solver, <:AbstractVector{<:Solver}})
    trials = Dict()
    success = false
    for solver ∈ slv
        name = solver.name
        solver_i = solver.solver
        settings = solver.settings
        add_bridges = solver.add_bridges
        check_sol = solver.check_sol
        set_optimizer(model, solver_i; add_bridges = add_bridges)
        if !isnothing(settings) && !isempty(settings)
            for (k, v) ∈ settings
                set_attribute(model, k, v)
            end
        end
        try
            JuMP.optimize!(model)
        catch jump_error
            push!(trials, name => Dict(:jump_error => jump_error))
            continue
        end
        try
            assert_is_solved_and_feasible(model; check_sol...)
            success = true
            break
        catch err
            push!(trials,
                  name => Dict(:objective_val => objective_value(model), :err => err,
                               :settings => settings))
        end
    end
    return JuMPResult(; trials = trials, success = success)
end

I’ve already worked around it, but it would still be nice to know. I resorted to deleting and unregistering constraints, variables and expressions. That works fine, but it would be nice to simplify things somewhat.

Hi @dcelisgarza welcome to the forum :smile:

Can you provide a reproducible example of your code that I can run? I don’t see where POI is involved in your current snippet.

Thank you thank you. Big fan of your work.

I can provide the code, but it’s not minimal at all :sweat_smile:, so I’ll write a trivial toy example in the style of the POI examples. For context, it’s for a portfolio optimisation library, and the problem within it that would benefit is the generation of generalised Pareto fronts (the risk-return frontier, but for more than one risk measure).

I’m about to make dinner so it’ll be in a bit.


I assume that you’ve seen this tutorial?

Yes, I have. I’ve not gone into multi-objective optimisation, though I may at a later point. Instead, this is more like skfolio or riskfolio-lib, but I have added the capability to add multiple risk measures to the objective and scalarise them in different ways à la cvxpy.

In my case the Pareto frontier is implemented as a range of lower bounds on returns or upper bounds on risk. Doing so on risk lets me create Pareto surfaces, volumes and hypervolumes. This is where it would be nice to use POI, so I don’t have to delete the risk bounds and objective expression on every iteration.
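
To make that concrete, the frontier loop I have in mind looks roughly like this (an illustrative sketch with a toy bound, not my actual code):

```julia
using JuMP, HiGHS
import ParametricOptInterface as POI

# Sweep an upper bound stored as a POI parameter instead of deleting and
# re-adding the bound constraint for every point on the frontier.
model = Model(() -> POI.Optimizer(HiGHS.Optimizer()))
set_silent(model)
@variable(model, 0 <= x <= 10)
@variable(model, ub in Parameter(1))
@constraint(model, x <= ub)
@objective(model, Max, x)
frontier = Float64[]
for b in 1:5
    set_parameter_value(ub, b)   # update the bound in place
    optimize!(model)
    push!(frontier, objective_value(model))
end
frontier # [1.0, 2.0, 3.0, 4.0, 5.0]
```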

As you can see, I’ve solved the problem, so this is more for purity of art.


Well, it appears this is not an issue with POI but with specific solvers. I got it to work with HiGHS, but not Clarabel. Unfortunately, Clarabel is my workhorse, since it supports the wide range of constraints I need. Sometimes I combine their powers using Pajarito (for MIP constraints), and that works great.

Right, so here’s some minimal working code. In this case POI is total overkill, but in my actual code the bounds are multiplied by either 1 or a decision variable (required by a transformation used when optimising the risk-return ratio), which makes the bound expression a QuadExpr.

using JuMP, ParametricOptInterface, Clarabel, HiGHS

# This struct stores the solver name (for logging purposes), solver,
# solver-specific settings/attributes, kwargs for deciding whether a
# solution is acceptable, and solver bridges for completeness.
struct Solver{T1 <: Union{Symbol, <:AbstractString}, T2,
              T3 <: Union{Nothing, <:AbstractDict}, T4 <: NamedTuple, T5 <: Bool}
    name::T1
    solver::T2
    settings::T3
    check_sol::T4
    add_bridges::T5
end
function Solver(; name::Union{Symbol, <:AbstractString} = "", solver::Any = nothing,
                settings::Union{Nothing, <:AbstractDict} = nothing,
                check_sol::NamedTuple = (;), add_bridges::Bool = true)
    if !isnothing(settings)
        @assert(!isempty(settings))
    end
    return Solver{typeof(name), typeof(solver), typeof(settings), typeof(check_sol),
                  typeof(add_bridges)}(name, solver, settings, check_sol, add_bridges)
end
# In case you provide a solver on its own rather than a vector.
Base.iterate(S::Solver, state = 1) = state > 1 ? nothing : (S, state + 1)
struct JuMPResult{T1 <: AbstractDict, T2 <: Bool}
    trials::T1
    success::T2
end
# Logging results.
function JuMPResult(; trials::AbstractDict, success::Bool)
    if !success
        @warn("Model could not be solved satisfactorily.\n$trials")
    end
    return JuMPResult{typeof(trials), typeof(success)}(trials, success)
end
# This function optimises the model. In the actual use case the models can be quite
# complex so it's better to add the optimisers after building and returning the first one
# to successfully solve the problem, or return a failure code if no solution is found. 
# It logs failures as well.
function optimise_JuMP_model!(model::JuMP.Model,
                              slv::Union{<:Solver, <:AbstractVector{<:Solver}})
    trials = Dict() # Stores optimisation failures.
    success = false # Success/failure flag.
    for solver ∈ slv
        name = solver.name
        solver_i = solver.solver
        settings = solver.settings
        add_bridges = solver.add_bridges
        check_sol = solver.check_sol
        # It appears this doesn't work with `POI`
        try
            set_optimizer(model, solver_i; add_bridges = add_bridges)
        catch set_optimiser_err
            push!(trials, name => Dict(:set_optimiser => set_optimiser_err))
            continue
        end
        # I don't know how to provide solver settings with `POI`.
        if !isnothing(settings)
            for (k, v) ∈ settings
                set_attribute(model, k, v)
            end
        end
        try
            JuMP.optimize!(model)
        catch jump_error
            push!(trials, name => Dict(:jump_error => jump_error))
            continue
        end
        try
            assert_is_solved_and_feasible(model; check_sol...)
            success = true
            break
        catch err
            push!(trials,
                  name => Dict(:objective_val => objective_value(model), :err => err,
                               :settings => settings))
        end
    end
    return JuMPResult(; trials = trials, success = success)
end
function trivial_model(slv)
    model = JuMP.Model()
    @variables(model, begin
                   x
                   ub in Parameter(5)
               end)
    @constraint(model, x <= ub)
    @objective(model, Max, x)
    result = optimise_JuMP_model!(model, slv)
    return result, model
end

I normally do something like this.

# Vector of solvers because sometimes problems need specific settings
# that are hard to know beforehand, so it's better to provide a variety
# of options that are iterated over.
slv = [Solver(; name = :clarabel1, solver = Clarabel.Optimizer,
              check_sol = (; allow_local = true, allow_almost = true),
              settings = Dict("verbose" => false, "max_step_fraction" => 0.75)),
       Solver(; name = :highs1, solver = HiGHS.Optimizer,
              check_sol = (; allow_local = true, allow_almost = true),
              settings = Dict("log_to_console" => false))]
result, model = trivial_model(slv)
value(model[:x]) # 5.0

What I was trying to do was something like the following, but I couldn’t get it to work until I tried it with HiGHS.

slv = [
       # The problem is apparently on the solver side.
       Solver(; name = :clarabel1,
              solver = ParametricOptInterface.Optimizer(Clarabel.Optimizer()),
              check_sol = (; allow_local = true, allow_almost = true),),
       Solver(; name = :clarabel2,
              solver = () -> ParametricOptInterface.Optimizer(Clarabel.Optimizer()),
              check_sol = (; allow_local = true, allow_almost = true),
              settings = Dict("verbose" => false, "max_step_fraction" => 0.75)),
       # Doesn't work because of an invalid constructor; it may work in the examples,
       # but not with `set_optimizer`. Maybe documenting this behaviour
       # would be helpful.
       Solver(; name = :highs1,
              solver = ParametricOptInterface.Optimizer(HiGHS.Optimizer()),
              check_sol = (; allow_local = true, allow_almost = true)),
       # Same story as above.
       Solver(; name = :highs2,
              solver = ParametricOptInterface.Optimizer(HiGHS.Optimizer()),
              check_sol = (; allow_local = true, allow_almost = true),
              settings = Dict("log_to_console" => false)),
       Solver(; name = :highs3,
              # This works.
              solver = () -> ParametricOptInterface.Optimizer(HiGHS.Optimizer()),
              check_sol = (; allow_local = true, allow_almost = true),
              settings = Dict("log_to_console" => false))]
result, model = trivial_model(slv)
result.success # true
# All the failures.
display(result.trials)

# Invalid constructor; doing what the error message suggests works.
println(result.trials[:highs1][:set_optimiser].msg)

# Looks like `optimize!(dest::AbstractOptimizer, src::ModelLike)` is what I need.
# I'm not sure how this would work, but I can experiment.
println(result.trials[:clarabel2][:jump_error].msg)
value(model[:x]) # 5.0

I’ll mark this as solved, but perhaps it’s worth adding this to the docs so others don’t stumble on the same obstacle.
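
For anyone who finds this later, here’s the pattern that worked for me, distilled from the examples above (with HiGHS; Clarabel still errored at `optimize!` for me):

```julia
using JuMP, HiGHS
import ParametricOptInterface as POI

model = Model()
@variable(model, x)
@variable(model, ub in Parameter(5))
@constraint(model, x <= ub)
@objective(model, Max, x)
# Pass a zero-argument constructor, not an optimizer instance, so
# `set_optimizer` can instantiate the POI-wrapped solver itself.
set_optimizer(model, () -> POI.Optimizer(HiGHS.Optimizer()))
# Solver-specific attributes can then be set on the model as usual.
set_attribute(model, "log_to_console", false)
optimize!(model)
value(x) # 5.0
```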