I have a JuMP model with about 0.2 billion variables. Gurobi reports 50 GB of memory consumption, while htop reports a consistent 350 GB during solving.
I'm most curious about the gap between what htop reports (the whole process) and what Gurobi reports (the model itself). Why is there a 7x difference?
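For what it's worth, this is roughly how I compare the two numbers. A minimal sketch (not my actual model), assuming Gurobi.jl's raw `Gurobi.ModelAttribute` interface and Gurobi's `MaxMemUsed` model attribute (Gurobi 9.5 or later):

```julia
using JuMP, Gurobi

# Minimal sketch: query the memory Gurobi itself reports, to compare
# against the process-level number htop shows.
model = direct_model(Gurobi.Optimizer())  # direct mode so the raw attribute query goes straight to Gurobi
@variable(model, x[1:1_000] >= 0)
@objective(model, Min, sum(x))
optimize!(model)

# "MaxMemUsed" is a Gurobi model attribute reported in GB; it covers the
# Gurobi model and solve, not the surrounding Julia/JuMP process.
gurobi_gb = MOI.get(model, Gurobi.ModelAttribute("MaxMemUsed"))
println("Gurobi-reported peak memory: ", gurobi_gb, " GB")
```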
I'm not sure there is anything actionable here. You're solving a large MIP. It takes a lot of memory. I can't see the code that you are running.
Well, that means I can't figure out how much memory the model itself actually takes. BTW, turning off the string names does help reduce memory consumption somewhat.
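A minimal sketch of what I mean by turning the names off (not my actual model), assuming a JuMP 1.x that has `set_string_names_on_creation`:

```julia
using JuMP, Gurobi

# Minimal sketch: disable string names so JuMP does not store an "x[i]" name
# for every variable and constraint, which adds up with hundreds of millions of them.
model = Model(Gurobi.Optimizer)
set_string_names_on_creation(model, false)
@variable(model, x[1:1_000_000] >= 0)
@constraint(model, sum(x) >= 1)
@objective(model, Min, sum(x))
optimize!(model)
```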
I thought the number htop reports measures the total process, including data loading, model construction, and solving. But since htop reports memory consumption dynamically, I guess the number it shows during solving represents the memory consumption of the model.
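To make that less of a guess, I can log the peak resident set size of the whole process at each stage using Julia's built-in `Sys.maxrss` (note it is a peak, so it only ever grows). Roughly:

```julia
using JuMP, Gurobi

# Print the process's peak resident set size (bytes) in GiB at a given stage.
report(stage) = println(stage, ": peak RSS = ", round(Sys.maxrss() / 2^30; digits = 1), " GiB")

report("startup")
model = Model(Gurobi.Optimizer)
set_string_names_on_creation(model, false)
@variable(model, x[1:10_000_000] >= 0)  # stand-in for data loading + model construction
@objective(model, Min, sum(x))
report("after build")
optimize!(model)
report("after solve")  # the gap between this and Gurobi's own number is the JuMP/Julia-side overhead
```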