Arbitrary precision refinements with Convex.jl & COSMO.jl

Hi,

I’m solving a problem through Convex.jl & COSMO.jl. I want to build a BigFloat version of the same problem to refine the solution, and I’d like to warm start it with the low-precision solution. For the toy problem shown here this might not make much sense, but for some of the hard QPs and SOCPs I deal with, it does.

Here is what I have tried so far:

using Convex, COSMO


y = Variable(3)
x = [1.0,2,3]
p = minimize(Convex.norm(y - x,2) + Convex.norm(y,1))
solve!(p, () -> COSMO.Optimizer(verbose=true),warmstart=true)

# Now find a better value with BigFloats (or other multi-precision type)
BigType = BigFloat # Or DoubleFloats.Double64, or MultiFloats.Float64x4...
x = BigType.(x)
set_value!(y,BigType.(Convex.evaluate(y)))
opt = COSMO.Optimizer{BigType}(verbose=true)
p = minimize(Convex.norm(y - x,2) + Convex.norm(y,1))
solve!(p, () -> opt,warmstart=true) # Errors with the following stacktrace:
julia> solve!(p, () -> opt,warmstart=true)
ERROR: MethodError: no method matching warm_start_primal!(::COSMO.Workspace{BigFloat}, ::Float64, ::Int64)
Closest candidates are:
  warm_start_primal!(::COSMO.Workspace{T}, ::T, ::Integer) where T at C:\Users\u009192\.julia\packages\COSMO\Sb5eV\src\interface.jl:130
  warm_start_primal!(::COSMO.Workspace{T}, ::Vector{T}, ::Union{Nothing, UnitRange{Int64}}) where T<:AbstractFloat at C:\Users\u009192\.julia\packages\COSMO\Sb5eV\src\interface.jl:128
  warm_start_primal!(::COSMO.Workspace{T}, ::AbstractVector{T}) where T<:AbstractFloat at C:\Users\u009192\.julia\packages\COSMO\Sb5eV\src\interface.jl:133
Stacktrace:
  [1] load(optimizer::Optimizer{BigFloat}, a::MathOptInterface.VariablePrimalStart, vi::MathOptInterface.VariableIndex, value::Float64)
    @ COSMO ~\.julia\packages\COSMO\Sb5eV\src\MOI_wrapper.jl:720
  [2] load(model::Optimizer{BigFloat}, attr::MathOptInterface.VariablePrimalStart, indices::Vector{MathOptInterface.VariableIndex}, values::Vector{Union{Nothing, Float64}})
    @ MathOptInterface.Utilities ~\.julia\packages\MathOptInterface\1EYfq\src\Utilities\copy.jl:917
  [3] _pass_attributes(dest::Optimizer{BigFloat}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ModelFunctionConstraints{Float64}}}, idxmap::MathOptInterface.Utilities.IndexMap, attrs::Vector{MathOptInterface.AbstractVariableAttribute}, supports_args::Tuple{DataType}, get_args::Tuple{Vector{MathOptInterface.VariableIndex}}, set_args::Tuple{Vector{MathOptInterface.VariableIndex}}, pass_attr!::Function)
    @ MathOptInterface.Utilities ~\.julia\packages\MathOptInterface\1EYfq\src\Utilities\copy.jl:300
  [4] pass_attributes!(dest::Optimizer{BigFloat}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ModelFunctionConstraints{Float64}}}, idxmap::MathOptInterface.Utilities.IndexMap, pass_attr::Function)
    @ COSMO ~\.julia\packages\COSMO\Sb5eV\src\MOI_wrapper.jl:571
  [5] pass_attributes!
    @ ~\.julia\packages\COSMO\Sb5eV\src\MOI_wrapper.jl:559 [inlined]
  [6] copy_to(dest::Optimizer{BigFloat}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ModelFunctionConstraints{Float64}}}; copy_names::Bool)
    @ COSMO ~\.julia\packages\COSMO\Sb5eV\src\MOI_wrapper.jl:148
  [7] attach_optimizer(model::MathOptInterface.Utilities.CachingOptimizer{Optimizer{BigFloat}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ModelFunctionConstraints{Float64}}}})
    @ MathOptInterface.Utilities ~\.julia\packages\MathOptInterface\1EYfq\src\Utilities\cachingoptimizer.jl:185
  [8] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{Optimizer{BigFloat}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ModelFunctionConstraints{Float64}}}})
    @ MathOptInterface.Utilities ~\.julia\packages\MathOptInterface\1EYfq\src\Utilities\cachingoptimizer.jl:248
  [9] optimize!
    @ ~\.julia\packages\MathOptInterface\1EYfq\src\Bridges\bridge_optimizer.jl:293 [inlined]
 [10] solve!(problem::Problem{Float64}, optimizer::Optimizer{BigFloat}; check_vexity::Bool, verbose::Bool, warmstart::Bool, silent_solver::Bool)
    @ Convex ~\.julia\packages\Convex\O6nZN\src\solution.jl:243
 [11] solve!(problem::Problem{Float64}, optimizer_factory::Function; kwargs::Base.Iterators.Pairs{Symbol, Bool, Tuple{Symbol}, NamedTuple{(:warmstart,), Tuple{Bool}}})
    @ Convex ~\.julia\packages\Convex\O6nZN\src\solution.jl:192
 [12] top-level scope
    @ REPL[662]:1


It looks like the warm start is being passed as Float64s, even though y contains BigFloat values. I’m guessing I need to set other things to BigFloat, but I can’t figure out which ones.

Could someone guide me through it?


I believe doing

p = minimize(Convex.norm(y - x,2) + Convex.norm(y,1); numeric_type = BigFloat)

instead might be enough.
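
For context, the stack trace above ends in solve!(problem::Problem{Float64}, ...): without the keyword, the problem itself is built over Float64, which is what forces the Float64 warm starts. A quick sketch of what the keyword changes (assuming Convex.jl parameterizes Problem by the numeric type, as the trace suggests):

using Convex

y = Variable(3)
x = [1.0, 2, 3]

p64 = minimize(Convex.norm(y - x, 2) + Convex.norm(y, 1))
typeof(p64)   # Problem{Float64}, as seen in the stack trace

pbig = minimize(Convex.norm(y - x, 2) + Convex.norm(y, 1); numeric_type = BigFloat)
typeof(pbig)  # Problem{BigFloat}; the BigFloat element type is carried through to MOI and the solver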


Indeed, that makes the code run. But it does not warm start, which corresponds to the behaviour we already saw on this other thread. I’ll open an issue on COSMO.


Thanks for opening the issue.

I think your MWE is not ideal, because the optimal solution of the first problem is y* = [0, 0, 0], which is exactly the value a new variable is initialized with, so warm starting would not change anything.

However, it seems to not work even if you change the initial problem to:

p = minimize(Convex.norm(y - x,2) + 0.6 * Convex.norm(y,1))

which has optimal solution y* = [0, 0.866, 1.866].


I printed out the internal variables of COSMO, and they are indeed warm-started. I am not sure why the later iterates (and residual norms) still seem to be the same.
One issue might be that the problem COSMO gets from Convex.jl has 8 primal variables (some of them are dummies) and we are only warm-starting 3 of them (and no duals).
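
(For anyone who wants to reproduce that count, the solver's internal primal vector can be inspected directly. A sketch, using the same field path that the workaround later in this thread relies on; it depends on the exact Convex.jl/MOI/COSMO versions discussed here:)

# After solve!(p, () -> COSMO.Optimizer(verbose=true)):
inner = p.model.model.optimizer.inner   # the COSMO workspace behind the Convex.jl/MOI layers
length(inner.vars.x)                    # 8 primal variables here, not just the 3 entries of y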

Will have to take a closer look.


Then a nice check would be to try to warm start those variables as well.
@ericphanson, two questions:

  • How could I access those dummy variable values, together with the dual values, so that I can upgrade them to BigFloat?
  • I think that Convex.jl does not currently allow warm starting duals, but what about these dummies?

Sorry about the crappy MWE.

How could I access those dummy variable values, together with the dual values, so that I can upgrade them to BigFloat?

They should all be BigFloats, I think. When you pass numeric_type=BigFloat, Convex uses an MOI model with element type BigFloat, and I think that enforces BigFloats everywhere (which is why, when you didn’t specify it, you got errors: it was enforcing Float64 everywhere). However, I don’t think there’s an easy way to get at all of the variables. After solving a problem, you can do problem.model to get the MOI model it used, which has everything that was sent to the solver, but it can be kind of complicated.
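
(One way to poke at it directly through MathOptInterface, for what it’s worth; this is a sketch rather than an official Convex.jl API, and it assumes the model has already been optimized:)

using MathOptInterface
const MOI = MathOptInterface

moi_model = problem.model                               # the MOI model Convex.jl built
vis  = MOI.get(moi_model, MOI.ListOfVariableIndices())  # every variable, dummies included
vals = MOI.get(moi_model, MOI.VariablePrimal(), vis)    # their primal values (BigFloats here)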

I think that Convex.jl does not currently allow warm starting duals, but what about these dummies?

I thought that Convex tries to warm start all primal variables, including the dummies; that was the intention when I wrote that bit of code, but I guess it does not. The code in question is here: https://github.com/jump-dev/Convex.jl/blob/c2952775f1c25703d4f15844b0435e9c2908fb51/src/solution.jl#L255-L268. In that function, id_to_variables should be a dictionary holding all of the variables used in the problem, including the dummy ones. But perhaps something is going wrong somewhere…

Indeed, it enforces the type on the data. In fact, this version of the MWE is enough:

using Convex, COSMO
y = Variable(3);
x = [1.0,2,3];
p1 = minimize(Convex.norm(y - x,2) + 0.6 * Convex.norm(y,1));
solve!(p1, () -> COSMO.Optimizer(verbose=true))

eps = BigFloat("1e-100");
p2 = minimize(Convex.norm(y - x,2) + 0.6 * Convex.norm(y,1); numeric_type = BigFloat);
solve!(p2, () -> COSMO.Optimizer{BigFloat}(verbose=true,eps_abs=eps,eps_rel=eps), warmstart=true)

However, I do not understand why it does not warm start as it should: if the dummies are warm-started as requested, maybe the duals are not?

Yeah, the duals are not. It’s a missing feature in Convex.jl.

EDIT: I found a workaround. The following code works:

using Convex, COSMO
y = Variable(3);
x = [1.0,2,3];
p1 = minimize(Convex.norm(y - x,2) + 0.6 * Convex.norm(y,1));
solve!(p1, () -> COSMO.Optimizer(verbose=true))


BT = BigFloat

# Pull the low-precision primal, dual and slack iterates out of COSMO's workspace
# (reaching through the Convex.jl and MOI layers) and promote them to BigFloat.
v_x = BT.(p1.model.model.optimizer.inner.vars.x)
v_μ = BT.(p1.model.model.optimizer.inner.vars.μ)
v_s = BT.(p1.model.model.optimizer.inner.vars.s)
v_y = - v_μ    # dual variable: COSMO's internal μ carries the opposite sign

eps = BT("1e-100");
# Build the BigFloat problem; max_iter=1 so this first solve! only sets the model up.
p2 = minimize(Convex.norm(y - x,2) + 0.6 * Convex.norm(y,1); numeric_type = BigFloat);
solve!(p2, () -> COSMO.Optimizer{BigFloat}(verbose=true,eps_abs=eps,eps_rel=eps,max_iter=1))

# Warm start the underlying COSMO model directly, then re-run it with the real iteration limit.
cosmo_model = p2.model.model.optimizer.inner

COSMO.warm_start_primal!(cosmo_model,v_x)
COSMO.warm_start_dual!(cosmo_model,v_y)
COSMO.warm_start_slack!(cosmo_model,v_s)

cosmo_model.settings.max_iter = 10000
COSMO.optimize!(cosmo_model)

By extracting the COSMO model object from underneath the MOI and Convex.jl layers, I was able to warm start it manually, and it worked correctly. Although this forces me out of the Convex.jl infrastructure and is a little ugly, it works for the moment.
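
A small follow-up on the last line: COSMO.optimize! returns a result object, so capturing it gives direct access to the refined BigFloat solution (a sketch, assuming the usual COSMO.Result fields):

res = COSMO.optimize!(cosmo_model)   # a COSMO.Result{BigFloat}
res.status                           # solver status of the warm-started run
res.obj_val                          # objective value at BigFloat precision
res.x                                # refined primal iterate, in the solver's own variable ordering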


A less “hacky” way of warm starting would be to use the set_value! function provided by the Convex.jl package. Then you can just set the warmstart flag to true when calling solve!:

set_value!(y, y0)   # y0: the warm start values for y, e.g. the low-precision solution
p2 = minimize(Convex.norm(y - x,2) + 0.6 * Convex.norm(y,1); numeric_type = BigFloat);
solve!(p2, () -> COSMO.Optimizer{BigFloat}(verbose=true,eps_abs=eps,eps_rel=eps,max_iter=1); warmstart=true)
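
Combining this with the MWE from earlier in the thread, the full warm-started sequence would look roughly like this (a sketch; here y0 is simply the Float64 solution promoted to BigFloat):

using Convex, COSMO

y = Variable(3);
x = [1.0, 2, 3];

# Low-precision solve.
p1 = minimize(Convex.norm(y - x, 2) + 0.6 * Convex.norm(y, 1));
solve!(p1, () -> COSMO.Optimizer(verbose = true))

# Use the Float64 solution as the warm start for the BigFloat refinement.
y0 = BigFloat.(evaluate(y))
set_value!(y, y0)

eps = BigFloat("1e-100");
p2 = minimize(Convex.norm(y - x, 2) + 0.6 * Convex.norm(y, 1); numeric_type = BigFloat);
solve!(p2, () -> COSMO.Optimizer{BigFloat}(verbose = true, eps_abs = eps, eps_rel = eps); warmstart = true)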