Out of memory error in Juniper.jl

Unfortunately I cannot practically create a MWE for this issue. I build a rather large MINLP model: 33,331 variables (2,218 binary) and 157,260 constraints, of which 7,818 are nonlinear. When optimising, after 10 minutes I get this:

[2022-12-01 13:24:42] Info bb_strategies.jl L326: Breaking out of strong branching as the time limit of 600.0 seconds got reached.
ERROR: OutOfMemoryError()
  [1] sizehint!
    @ ./array.jl:1267 [inlined]
  [2] filter(f::Juniper.var"#78#79"{Vector{Int64}}, a::UnitRange{Int64})
    @ Base ./array.jl:2559
  [3] upd_gains_step!(tree::Juniper.BnBTreeObj, step_obj::Juniper.StepObj)
    @ Juniper ~/Git_repos/Juniper.jl/src/bb_gains.jl:99
  [4] upd_tree_obj!
    @ ~/Git_repos/Juniper.jl/src/BnBTree.jl:424 [inlined]
  [5] solve_sequential(tree::Juniper.BnBTreeObj, last_table_arr::Vector{Any}, time_bnb_solve_start::Float64, fields::Vector{Symbol}, field_chars::Vector{Int64}, time_obj::Juniper.TimeObj)
    @ Juniper ~/Git_repos/Juniper.jl/src/BnBTree.jl:491
  [6] solvemip(tree::Juniper.BnBTreeObj)
    @ Juniper ~/Git_repos/Juniper.jl/src/BnBTree.jl:743
  [7] optimize!(model::Juniper.Optimizer)
    @ Juniper ~/Git_repos/Juniper.jl/src/MOI_wrapper/MOI_wrapper.jl:358
  [8] optimize!
    @ ~/.julia/packages/MathOptInterface/a4tKm/src/Bridges/bridge_optimizer.jl:376 [inlined]
  [9] optimize!
    @ ~/.julia/packages/MathOptInterface/a4tKm/src/MathOptInterface.jl:87 [inlined]
 [10] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{Juniper.Optimizer}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/a4tKm/src/Utilities/cachingoptimizer.jl:316
 [11] optimize!(model::Model; ignore_optimize_hook::Bool, _differentiation_backend::MathOptInterface.Nonlinear.SparseReverseMode, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ JuMP ~/.julia/packages/JuMP/Z1pVn/src/optimizer_interface.jl:185
 [12] optimize!(model::Model)
    @ JuMP ~/.julia/packages/JuMP/Z1pVn/src/optimizer_interface.jl:155

Now I appreciate the size of this model could cause the OOM error, but when I check my memory pressure (I have 64GB on an M1 Max) it remains minimal and I still have ~20GB of memory remaining. Additionally, I can solve models of twice the size when solving it as an NLP with Ipopt, or as an MILP with a polyhedral relaxation and Gurobi.

These are the settings I’m using

:Gurobi => Dict(
    "crossover" => 0, "presolve" => 0, "numeric_focus" => 0,
),
:Ipopt => Dict(
    "linear_solver" => "ma57",
    "ma57_automatic_scaling" => "yes",
    "mu_strategy" => "adaptive",
    "ma57_pivot_order" => 2,
    "file_print_level" => 5,
    "max_iter" => 1_000_000,
    "timing_statistics" => "yes",
    "hsllib" => _get_dl_load_path() * "/libhsl.dylib",
),
:Juniper => Dict(
    "branch_strategy" => :StrongPseudoCost,
    "strong_branching_time_limit" => 600,
    "processors" => nworkers(),
    "log_levels" => [:Table, :Info],
    "traverse_strategy" => "DBFS",
),

Why am I getting an OOM error when my memory is nowhere near full?

Ping @Wikunia !


I’ll have to check the code again for this. It definitely seems strange that you still have so much memory left when checking. We might be trying to allocate more memory than we need. Though in general, the way we store information in the tree is not optimal for these large problem sizes.

I see. Is there anything I can do in the meantime? Either a bodge fix or reducing precision?

Have you tried running it in single core mode?
Edit: oh sorry, I see you’re actually using the sequential mode.

My two guesses for how to improve the issue:

  1. Try running without strong branching
  2. Try running Ipopt with a better linear solver (open-source linear solvers can consume a lot of memory on difficult linear systems)
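For the first suggestion, disabling strong branching is just a matter of the branch_strategy attribute (a minimal sketch, assuming the usual optimizer_with_attributes setup; the Ipopt options here are illustrative):

```julia
using JuMP, Juniper, Ipopt

nl_solver = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)

# :MostInfeasible branches on the variable farthest from integrality
# and skips the strong-branching phase entirely.
minlp_solver = optimizer_with_attributes(
    Juniper.Optimizer,
    "nl_solver" => nl_solver,
    "branch_strategy" => :MostInfeasible,
)

model = Model(minlp_solver)
```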

For smaller problems I use multiple cores, but for larger problems, where Gurobi/Ipopt take more than a few seconds to solve, I find multiple cores can cause problems. I think it’s because the solvers aren’t bound to a single thread.

Thanks for your ideas:

  1. Previously I was using :MostInfeasible; I think this also gave the same error, but I’ll try it again.
  2. This is currently using MA57, which I don’t think is open-source? MA57 is the fastest out of MA27, MA57, MA87, and MUMPS when solving this problem as an NLP. It’s possible it’s suboptimal for solving as a Juniper model.

On the point of MA57: I have set ma57_automatic_scaling=yes because otherwise, when solving with Juniper or as an NLP, I get a lot of logs like Reallocating memory for MA57: lfact (1960656). However, I now think this could be related to my problems. I have since set it to “no” and branching to :MostInfeasible. I’m just over two hours in and it’s still running, which I believe is longer than it took to get an OOM error before.

So I presume the issue is coming from MA57’s automatic scaling; I’m guessing there’s nothing that can be done about this on the Julia side? As mentioned in a different thread, I am using a dated version of MA57, so perhaps I just need to figure out how to update it.