# [JuMP] Pure integer solutions (Combinatorial optimization?)

Hi,

I’m investigating the use of JuMP for a million-client scheduling service. Since there are certain hard constraints like “belongs to set”, binary variables like “either scheduled or not scheduled”, and simple constraints like “between price x and y”, I thought I should go for integer optimization/combinatorial optimization. Upon further investigation, it turns out that those approaches are NP-hard and cannot be solved at scale; the only kind of optimization problem that can reportedly be solved at scale is mixed-integer linear programming (maybe some quadratics and conics too). However, I didn’t quite understand how it operates on discrete variables: I can apparently set the “type” of a variable to `Int` and run certain supported solvers. How does this work? Isn’t discrete optimization fundamentally different from MIP, warranting a different approach?

Thanks.

Hi!

These are multiple fields of study you are talking about; JuMP is just the tip of the iceberg! Combinatorial optimization is actually an older field of study than what is known nowadays as integer programming, rooted more in classical graph theory, but you shouldn’t be surprised if the two get used as synonyms these days. You have to differentiate between the problem semantics (with its complexity class) and the modelling technique and algorithm used to solve that problem.

Integer programming is concerned with the mathematical/algebraic modelling of a combinatorial optimization problem as an integer program (IP), which is then solved using techniques such as LP relaxation and branch and bound. The generic IP problem is NP-Hard because other known NP-Hard problems from graph theory can be written as special-case IPs. Mixed-integer programs (MIP) are at least as hard as integer programs, because every IP can be trivially written as an MIP by adding a single dummy continuous variable with a 0 coefficient in every constraint. Proving that a specific problem (referring to the semantics, not a particular model or algorithm) is in P, NP-Complete, or any other complexity class is a whole field of study of its own, known as complexity theory.
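To make “LP relaxation plus branch and bound” concrete, here is a minimal toy sketch for a 0/1 knapsack IP (all data is made up for illustration; real MIP solvers are vastly more sophisticated). For knapsack, the LP relaxation at each node happens to be solvable greedily by value density, and that relaxation value is the bound used to prune branches:

```python
# Minimal branch-and-bound sketch for a 0/1 knapsack IP (toy illustration).
# The bound at each node is the LP relaxation, which for knapsack can be
# computed greedily (fractional knapsack).

def lp_relaxation_bound(values, weights, capacity, fixed):
    """Upper bound: solve the LP relaxation with some variables fixed to 0/1."""
    remaining = capacity
    total = 0.0
    free = []
    for i, choice in enumerate(fixed):
        if choice == 1:
            if weights[i] > remaining:
                return float("-inf")  # infeasible node
            remaining -= weights[i]
            total += values[i]
        elif choice is None:
            free.append(i)
    # Greedy by value density is optimal for the continuous relaxation.
    for i in sorted(free, key=lambda i: values[i] / weights[i], reverse=True):
        take = min(1.0, remaining / weights[i])
        total += take * values[i]
        remaining -= take * weights[i]
        if remaining <= 0:
            break
    return total

def branch_and_bound(values, weights, capacity):
    n = len(values)
    best = 0
    stack = [[None] * n]          # each node fixes some variables to 0/1
    while stack:
        fixed = stack.pop()
        bound = lp_relaxation_bound(values, weights, capacity, fixed)
        if bound <= best:
            continue              # prune: relaxation cannot beat the incumbent
        if None not in fixed:     # all variables integral: candidate solution
            best = max(best, int(round(bound)))
            continue
        i = fixed.index(None)     # branch on the first free variable
        for choice in (0, 1):
            child = fixed[:]
            child[i] = choice
            stack.append(child)
    return best

print(branch_and_bound([60, 100, 120], [10, 20, 30], 50))  # → 220
```

The key idea is that the relaxation’s optimum is always at least as good as any integral completion of the node, so whole subtrees can be discarded without enumerating them.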

You can solve your (NP-Hard) combinatorial optimization problem by first writing it as an IP and then solving the IP. But for NP-Hard problems, the basic branch-and-bound algorithm is rarely used on its own. Usually it is combined with problem-specific and/or evolutionary heuristics to get better solutions more quickly, and/or the algorithm is modified in some other, again problem-specific, way, e.g. Lagrangian relaxation, branch and price, constraint generation, etc. When the problem is in P, you can bet that there is a better algorithm out there for that specific problem, which you can use directly without going through integer programming at all.
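As an example of that last point, shortest path is a classic combinatorial problem that *could* be written as an IP but is in P, so in practice you solve it directly with a dedicated algorithm. A minimal sketch (the graph and weights below are made up for illustration):

```python
# Shortest path solved directly with Dijkstra's algorithm, rather than
# modelling it as an IP: the problem is in P, so no branch and bound needed.
import heapq

def dijkstra(graph, source):
    """graph: dict node -> list of (neighbor, weight); returns shortest distances."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(graph, "a"))  # → {'a': 0, 'b': 1, 'c': 3}
```

With non-negative weights this runs in near-linear time in the graph size, whereas the equivalent IP formulation would pointlessly route the problem through machinery built for NP-Hard instances.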

Ideally you should find some papers which solved a similar problem as the one you have and try to learn from them before walking down that path on your own. Optimization theory is a very useful but complex field of study with a lot of jargon and acronyms thrown around, so be mindful of what you are getting yourself into! Now I need a break!


Hi,

Thanks for the excellent explanation and pointers! My understanding is that MILP is convex, and hence solvable in polynomial time, whereas discrete optimization is non-convex and needs, in the worst case, exploration of the entire search space; hence, NP-Hard. It’s not so cut-and-dried though, because MiniZinc can optionally use MILP solver backends for a quicker solution. I would really love to know how this works.

Formulating a problem in terms of CP is much harder than doing it in LP. I was pushing for LP primarily because I thought CP wouldn’t scale. That is true to some extent, but generic benchmarks are not very informative: running time depends heavily on the specifics of the problem, as I discovered in my few hours with mz. If I can constrain the search space enough, I should be able to handle millions of clients just fine.

I’m finding the advanced Coursera course very helpful! CP/mz is truly fascinating.

MILPs are not convex optimization problems!

Say you have an MILP with only one variable, `x`, and `x` is constrained to be an integer. Then `x = 1` is feasible, and `x = 2` is feasible, but `x = 1.5`, the average of two feasible points, is not feasible. This shows that the feasible set is not convex, and hence even this trivial MILP is not a convex optimization problem.
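The same argument can be stated numerically: a convex set must contain the midpoint (in fact every convex combination) of any two of its points, and the integrality constraint fails that test:

```python
# Numeric restatement of the argument: the feasible set {x : x is an integer}
# is not convex, because the midpoint of two feasible points can be infeasible.
def feasible(x):
    return float(x).is_integer()  # the constraint "x is an integer"

x1, x2 = 1, 2
midpoint = (x1 + x2) / 2          # = 1.5, a convex combination of x1 and x2
print(feasible(x1), feasible(x2), feasible(midpoint))  # → True True False
```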

MILPs are NP-hard as well. Fortunately, solvers can often do a lot better than the worst-case performance, but the worst case is bad.

By the way, my labmate @rdeits is working on this cool JuMP extension package that automatically transforms problems with ‘implication’ style constraints into mixed-integer programs: GitHub - rdeits/ConditionalJuMP.jl: Automatic transformation of implications and complementarity into mixed-integer models in Julia (disclaimer: not affiliated with the JuMP development team). I’m not familiar with MiniZinc, but I would guess that these are the same kinds of transformations that MiniZinc performs when a mixed integer solver is used as the backend.
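For a flavour of what such a transformation can look like, here is the generic textbook “big-M” encoding (hedged: this is a standard reformulation of this kind, not necessarily the exact output of ConditionalJuMP or MiniZinc). An implication like “if `z = 1` then `x <= 0`”, with `z` binary, becomes the single linear constraint `x <= M * (1 - z)`, for some bound `M` known to always satisfy `x <= M`:

```python
# Big-M encoding of the implication  z = 1  =>  x <= 0  (z binary).
# M is an assumed valid upper bound on x for the problem at hand.
M = 100.0

def implication_holds(x, z):
    return (z == 0) or (x <= 0)

def big_m_constraint_holds(x, z):
    # When z = 0 this reads x <= M (inactive); when z = 1 it reads x <= 0.
    return x <= M * (1 - z)

# The linear encoding agrees with the implication for binary z and in-range x:
for z in (0, 1):
    for x in (-5.0, 0.0, 5.0):
        assert implication_holds(x, z) == big_m_constraint_holds(x, z)
print("big-M encoding matches the implication on the sampled points")
```

The catch, and the reason such tooling is useful, is that `M` must be chosen carefully: too small makes the model wrong, too large makes the LP relaxation weak.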

If that were true, we would be able to solve most of our NP-Hard problems by writing them as MILPs and then solving those in polynomial time instead, and that would prove that P = NP; so MILPs are obviously not convex. The mathematical reason why they are not convex is given by @tkoolen above.

MiniZinc applies a number of standard transformations to turn a CP model into an IP when you want to use an IP solver. Some of the transformations are explained in the course, and they might be in the docs too. I believe the ones used in ConditionalJuMP are a subset of MiniZinc’s, but I am glad to see Julia’s packages catching up; this is certainly going on my src-to-read list!

I also recommend this course. It covers methods for designing combinatorial/discrete optimization algorithms from scratch, and also how to write MIP models for those problems.
