[JuMP] Pure integer solutions (Combinatorial optimization?)



I’m investigating the use of JuMP for a million-client scheduling service. Since there are certain hard constraints like “belongs to set”, binary decisions like “either scheduled or not scheduled”, and simple constraints like “price between x and y”, I thought I should go for integer optimization/combinatorial optimization. Upon further investigation, it turns out that those approaches are NP-hard and cannot be solved at scale; the only kind of optimization problem that can be solved at scale is mixed-integer linear programming (maybe some quadratics and conics too). However, I didn’t quite understand how it operates on discrete variables: apparently I can set the “type” of a variable to Int and run certain supported solvers. How does this work? Isn’t discrete optimization fundamentally different from MIP, warranting a different approach?




These are multiple fields of study you are talking about; JuMP is just the tip of an iceberg! Combinatorial optimization is actually an older field of study than what is known today as integer programming, rooted more in the old field of graph theory, but you probably shouldn’t be surprised if the two get used as synonyms these days. You have to differentiate between the problem semantics (with its complexity class) and the modelling technique and algorithm used to solve such a problem.

Integer programming is concerned with the mathematical/algebraic modelling of a combinatorial optimization problem as an integer program (IP), and then solving it using algorithms such as LP relaxation and branch and bound. The generic IP problem is NP-Hard because other known NP-Hard problems from graph theory can be written as special-case IPs. Mixed-integer programs (MIPs) are at least as hard as IPs, because every IP can be trivially written as an MIP by adding a single dummy continuous variable with a 0 coefficient in every constraint. Proving that one specific problem (referring to the semantics, not a particular model or algorithm) is in P, is NP-Complete, or belongs to any other complexity class is a whole field of study of its own, known as complexity theory.
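To make the “LP relaxation plus branch and bound” idea concrete, here is a purely illustrative toy sketch in Python for a 0-1 knapsack (a classic IP). I picked knapsack because its LP relaxation happens to have a closed-form greedy solution, so no LP solver is needed; real MIP solvers solve general LPs and add presolve, cutting planes, heuristics, and much more. All function names are made up:

```python
from fractions import Fraction

def lp_relax(values, weights, cap, fixed):
    """Greedy solution of the knapsack LP relaxation, with some variables
    fixed to 0 or 1 by earlier branching. Returns (bound, fractional_index)."""
    total_v, total_w = 0, 0
    for i, f in fixed.items():          # account for variables fixed to 1
        if f == 1:
            total_v += values[i]
            total_w += weights[i]
    if total_w > cap:
        return None, None               # this branch is infeasible
    free = [i for i in range(len(values)) if i not in fixed]
    free.sort(key=lambda i: Fraction(values[i], weights[i]), reverse=True)
    bound, room = Fraction(total_v), cap - total_w
    for i in free:
        if weights[i] <= room:
            room -= weights[i]
            bound += values[i]
        else:                           # take a fraction of item i
            bound += Fraction(values[i] * room, weights[i])
            return bound, i             # fractional variable: branch on it
    return bound, None                  # relaxation is already integral

def branch_and_bound(values, weights, cap):
    """Maximize value subject to the capacity, branching on fractional vars."""
    best = 0
    stack = [{}]                        # each node = dict of fixed variables
    while stack:
        fixed = stack.pop()
        bound, frac = lp_relax(values, weights, cap, fixed)
        if bound is None or bound <= best:
            continue                    # prune: infeasible or cannot beat incumbent
        if frac is None:
            best = max(best, int(bound))  # integral solution found
        else:
            stack.append({**fixed, frac: 0})
            stack.append({**fixed, frac: 1})
    return best

print(branch_and_bound([60, 100, 120], [10, 20, 30], 50))  # → 220
```

The relaxation gives an upper bound at every node; whole subtrees are discarded as soon as that bound cannot beat the best integral solution found so far, which is why solvers often do far better than brute-force enumeration.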

You can solve your (NP-Hard) combinatorial optimization problem by first writing it as an IP and then solving the IP. But for NP-Hard problems the basic IP algorithm is rarely used on its own. Usually it is mixed with other problem-specific and/or evolutionary heuristics to get better solutions more quickly, and/or the algorithm is modified in some other fancy, again problem-specific, way, e.g. Lagrangian relaxation, branch and price, constraint generation, etc. When the problem is in P, you can bet that there is a better algorithm out there for that specific problem which you can use directly, without going through integer programming at all.
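To give a flavour of what “mixing in a heuristic” can mean, here is a hypothetical sketch: a greedy construction plus a 2-exchange local search for a toy makespan-scheduling problem (assign jobs to machines, minimize the largest load). In practice such heuristics run alongside branch and bound to supply good incumbent solutions early; everything here is invented for illustration:

```python
def greedy_assign(jobs, m):
    """Construction heuristic: each job goes to the least-loaded machine."""
    loads, assign = [0] * m, []
    for j in jobs:
        k = loads.index(min(loads))
        assign.append(k)
        loads[k] += j
    return assign, loads

def swap_improve(jobs, assign, loads):
    """2-exchange local search: swap two jobs on different machines
    whenever that strictly lowers the makespan."""
    improved = True
    while improved:
        improved = False
        for a in range(len(jobs)):
            for b in range(a + 1, len(jobs)):
                ma, mb = assign[a], assign[b]
                if ma == mb:
                    continue
                delta = jobs[b] - jobs[a]
                new = loads[:]
                new[ma] += delta        # machine ma loses job a, gains job b
                new[mb] -= delta        # machine mb loses job b, gains job a
                if max(new) < max(loads):
                    loads = new
                    assign[a], assign[b] = mb, ma
                    improved = True
    return assign, loads

jobs = [5, 4, 3, 2, 2]
assign, loads = greedy_assign(jobs, 2)       # loads == [9, 7], makespan 9
assign, loads = swap_improve(jobs, assign, loads)
print(max(loads))                            # → 8 (optimal here: {5, 3} vs {4, 2, 2})
```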

For your problem, you might want to check out constraint programming, aka CP (closely related to logic programming), which is a strong competitor to integer programming for problems whose semantics are easier to describe in logic than in algebra. I believe scheduling is one of CP’s most celebrated applications. CP can also be mixed with heuristics to speed things up. Many scheduling problems can be reduced to the travelling salesman problem (TSP) or the resource-constrained project scheduling problem (RCPSP), which are both NP-Hard, but you need to double-check your specific problem semantics. Read about MiniZinc, and look up Peter Stuckey (Melbourne), one of the developers of MiniZinc; he has a nice online course teaching it too.

CP has nothing to do with JuMP, by the way. All JuMP does is help you write your mathematical program (MP), e.g. LP, IP, MILP, NLP, MINLP, etc., and then send it to a compatible solver that will try to solve it. JuMP will also let you talk to the solver where possible and make the answer easily accessible to you. If your original problem is NP-Hard and big enough, chances are the solver will fail to return an answer in the time available to you, or it will return the best answer found so far, which may or may not be useful. MiniZinc does a similar job to JuMP but for CP models; its default solver is Gecode.
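For a rough idea of what a CP solver such as Gecode does under the hood, here is a toy backtracking search with arc-consistency-style constraint propagation over finite domains. Real systems use far more sophisticated propagators and search strategies; the tiny scheduling model below (three unit-length tasks, distinct slots, one precedence) and every name in it are purely illustrative:

```python
def propagate(domains, constraints):
    """Prune values that no partner value can support (arc consistency)."""
    changed = True
    while changed:
        changed = False
        for (x, y, ok) in constraints:
            for vx in list(domains[x]):
                if not any(ok(vx, vy) for vy in domains[y]):
                    domains[x].discard(vx)
                    changed = True
            for vy in list(domains[y]):
                if not any(ok(vx, vy) for vx in domains[x]):
                    domains[y].discard(vy)
                    changed = True
    return all(domains.values())    # False if some domain became empty

def search(domains, constraints):
    """Backtracking: propagate, pick a smallest unfixed domain, try values."""
    if not propagate(domains, constraints):
        return None
    if all(len(d) == 1 for d in domains.values()):
        return {v: next(iter(d)) for v, d in domains.items()}
    var = min((v for v in domains if len(domains[v]) > 1),
              key=lambda v: len(domains[v]))
    for val in sorted(domains[var]):
        trial = {v: set(d) for v, d in domains.items()}
        trial[var] = {val}
        result = search(trial, constraints)
        if result is not None:
            return result
    return None

# Three unit-length tasks in slots 0..2, all different, and "a" before "c".
domains = {"a": {0, 1, 2}, "b": {0, 1, 2}, "c": {0, 1, 2}}
neq = lambda u, v: u != v
before = lambda u, v: u < v
constraints = [("a", "b", neq), ("a", "c", neq), ("b", "c", neq),
               ("a", "c", before)]
print(search(domains, constraints))  # → {'a': 0, 'b': 1, 'c': 2}
```

Note how propagation alone removes slot 2 from task a and slot 0 from task c before any branching happens; this pruning of domains, rather than relaxing integrality, is what distinguishes the CP approach from the IP one.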

Ideally you should find some papers that solved a problem similar to yours and try to learn from them before walking down that path on your own. Optimization theory is a very useful but complex field of study, with a lot of jargon and acronyms thrown around, so be mindful of what you are getting yourself into! Now I need a break!



Thanks for the excellent explanation and pointers! My understanding is that MILP is convex, and hence solvable in polynomial time, whereas discrete optimization is non-convex and in the worst case needs exploration of the entire search space, hence NP-Hard. It’s not so cut-and-dried though, because MiniZinc can optionally use MILP solver backends for a quicker solution. I would really love to know how this works.

Formulating a problem in terms of CP is much harder than doing it in LP. I was pushing for LP primarily because I thought CP wouldn’t scale. That is true to some extent, but generic benchmarks are not very meaningful, and the running time depends heavily on the specifics of the problem, as I discovered in my few hours with MiniZinc. If I can constrain the search space enough, I should be able to handle millions of clients just fine.

I’m finding the advanced Coursera course very helpful! CP/MiniZinc is truly fascinating.


MILPs are not convex optimization problems!

Say you have an MILP with only one variable, x, and x is constrained to be an integer. Then x = 1 is feasible and x = 2 is feasible, but x = 1.5, the average of two feasible points, is not. This shows that the feasible set is not convex, and hence even this trivial MILP is not a convex optimization problem.
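The same argument as a quick check: a set is convex only if every convex combination of feasible points is feasible, and integrality breaks exactly that (the bounds below are just for illustration):

```python
def feasible(x):
    """Feasibility in the toy MILP: x is integer (and, say, 0 <= x <= 10)."""
    return 0 <= x <= 10 and float(x).is_integer()

x1, x2 = 1, 2
mid = (x1 + x2) / 2   # convex combination with weight 1/2
print(feasible(x1), feasible(x2), feasible(mid))  # → True True False
```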

MILPs are NP-hard as well. Fortunately, solvers can often do a lot better than the worst-case performance, but the worst case is bad.


By the way, my labmate @rdeits is working on this cool JuMP extension package that automatically transforms problems with ‘implication’-style constraints into mixed-integer programs: https://github.com/rdeits/ConditionalJuMP.jl (disclaimer: not affiliated with the JuMP development team). I’m not familiar with MiniZinc, but I would guess that these are the same kinds of transformations that MiniZinc performs when a mixed-integer solver is used as the backend.
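I can’t speak for what ConditionalJuMP or MiniZinc do internally, but the textbook reformulation for implication constraints is the “big-M” trick: the implication z = 1 ⟹ x ≤ b becomes the single linear constraint x ≤ b + M(1 − z) for a sufficiently large constant M. A quick numeric sanity check of that encoding (all numbers illustrative):

```python
M = 100  # must exceed any value x can actually take, or the trick fails

def bigM_ok(x, z, b):
    """Linear constraint x <= b + M*(1 - z): binding only when z == 1."""
    return x <= b + M * (1 - z)

# z = 1 forces x <= b; z = 0 leaves x effectively unconstrained (up to b + M).
print(bigM_ok(3, 1, 5), bigM_ok(7, 1, 5), bigM_ok(7, 0, 5))  # → True False True
```

The constraint stays linear in (x, z), so an ordinary MILP solver can handle it; the price is that a loose M weakens the LP relaxation, which is why choosing tight big-M values matters in practice.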


If that were true, we would be able to solve most of our NP-Hard problems in polynomial time by writing them as MILPs and solving those instead, and that would prove that P = NP; so MILPs are obviously not convex. The mathematical reason why they are not convex is given by @tkoolen above.

MiniZinc applies a number of standard transformations to turn a CP model into an IP if you want to use an IP solver. Some of the transformations are explained in the course, and they might be in the docs too. I believe the ones used in ConditionalJuMP are a subset of MiniZinc’s, but I am glad to see Julia packages catching up; this one is certainly going on my source-to-read list!
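As one standard example of such a transformation (I am not claiming this is exactly what MiniZinc emits): a CP membership constraint like x ∈ {3, 7, 12}, the “belongs to set” constraint from the original question, can be linearised with one binary indicator per allowed value. A numeric check of the encoding, with illustrative names and values:

```python
allowed = [3, 7, 12]  # the set from the hypothetical constraint x in {3, 7, 12}

def one_hot_ok(x, y):
    """MIP encoding of membership: y[i] binary, sum(y) == 1,
    and x == sum(allowed[i] * y[i])."""
    return (all(v in (0, 1) for v in y)
            and sum(y) == 1
            and x == sum(a * v for a, v in zip(allowed, y)))

print(one_hot_ok(7, [0, 1, 0]))  # → True: the indicators select 7
print(one_hot_ok(5, [0, 1, 0]))  # → False: 5 is not in the set
```

Everything on the right-hand side is linear in x and y, so the logical constraint becomes ordinary MILP material at the cost of extra binary variables.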


I also recommend this course. It covers methods for designing combinatorial/discrete optimization algorithms from scratch, and also how to write MIP models for those problems.