Seeking a smart way to formulate an optimization problem

I recently received tremendous help from this forum in setting up a JuMP optimization problem. I have a similar, but larger problem that I would like to attack, but it is too large to use the same methodology as previously. I’m looking for some help in properly formulating the problem and choosing the best solver. This problem is for an industrial application. If a specialized, commercial solver is required, I may be able to obtain it. Anyway, here is a summary…

The objective function to be maximized (expressed in Julia code) is

f(x) = sum(min.(A*x, bgoal))

Here size(A) == (nb, nc) where 50 < nb < 400, 500 < nc < 12_000, and A consists of only ones and zeros. Each row of A has about 3% ones, with the rest zeros. bgoal is a vector of positive constants. The constraints are

x .>= 0
sum(x) <= C
A*x .>= bmin

where C is a positive constant, and bmin is a vector of positive constants such that bmin .< bgoal.
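For anyone who wants to experiment, here is a purely synthetic instance with the shapes and density described above; all numbers are made up, and a random instance like this is not guaranteed to be feasible:

using SparseArrays, Random
Random.seed!(1234)
nb, nc = 200, 5_000                # somewhere in the stated ranges
A = sprand(nb, nc, 0.03)           # roughly 3% nonzeros
nonzeros(A) .= 1.0                 # turn the stored entries into ones (0/1 matrix)
bgoal = 50.0 .+ 50.0 .* rand(nb)   # positive targets
bmin = 0.5 .* bgoal                # positive lower bounds with bmin .< bgoal
C = 1_000.0                        # budget on sum(x)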

I can handle the nondifferentiability of the min function using a smooth approximation, as discussed in my previous post. I’m hoping that someone can help recast this as a standard class of optimization problem that is faster to solve than a general NLP, and also suggest an appropriate solver. Ideally the solution (even an approximate “good” solution) could be obtained in no more than a few minutes of computation. Thanks in advance.
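For context, one generic smooth stand-in for the elementwise min, purely illustrative and not necessarily the form used in that earlier thread, is a log-sum-exp soft minimum:

# Smooth approximation of min(a, b); larger β gives a tighter approximation.
# Written in a form that avoids overflow/underflow in exp.
softmin(a, b; β = 50.0) = min(a, b) - log1p(exp(-β * abs(a - b))) / β

softmin(2.0, 3.0)   # ≈ 2.0 (always slightly below the true min)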


You’re in luck. This should solve in a few seconds or less.

using JuMP, Clp
M, N = size(A)                     # M targets (rows of A), N decision variables (columns)
model = Model(Clp.Optimizer)
@variables(model, begin
    x[1:N] >= 0
    t[m=1:M] <= bgoal[m]           # t[m] stands in for min((A*x)[m], bgoal[m])
end)
@expression(model, y, A * x)
@constraints(model, begin
    sum(x) <= C
    [m=1:M], y[m] >= bmin[m]       # hard lower bounds
    [m=1:M], y[m] >= t[m]          # with Max below, forces t[m] == min(y[m], bgoal[m])
end)
@objective(model, Max, sum(t))
optimize!(model)
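After the solve, the standard JuMP accessors pull out the result (assuming the data A, bgoal, bmin, and C are already defined and the instance is feasible):

termination_status(model)   # expect OPTIMAL for a feasible, bounded instance
objective_value(model)      # the maximized value of sum(min.(A*x, bgoal))
xsol = value.(x)            # the optimal allocation vector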

The trick is that if you’re maximizing f, then:

f = min(x, y)

is equivalent, at the optimum, to

f <= x
f <= y
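Because the objective pushes f as high as it can go, the smaller of the two upper bounds becomes active, so f equals min(x, y) at the optimum. A tiny sanity check of the idea (illustrative only):

using JuMP, Clp
toy = Model(Clp.Optimizer)
@variable(toy, f)
@constraint(toy, f <= 3)
@constraint(toy, f <= 5)
@objective(toy, Max, f)
optimize!(toy)
value(f)   # returns 3.0 == min(3, 5)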

Thanks for looking at this. I have three questions:

  1. Since y[m] <= t[m] and t[m] <= bgoal[m], doesn’t this mean that you’ve introduced an additional constraint A*x .<= bgoal? This constraint is not desired.

  2. Is there any benefit (aside from storage) to making A a sparse matrix?

  3. Should Clp be preferred over GLPK or other LP solvers?

  1. Oops. It was just a typo. I meant y[m] >= t[m]. Edited the post.
  2. No. JuMP efficiently converts this into its own sparse type.
  3. You could try a range of LP solvers. Clp is good. GLPK can be a little slower. Commercial solvers like CPLEX or Gurobi are best.
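Swapping solvers is a one-line change in JuMP (sketch; assumes the corresponding wrapper packages are installed, and a license for the commercial ones):

using GLPK
model = Model(GLPK.Optimizer)      # instead of Model(Clp.Optimizer)

# Commercial alternatives (separate install and license required):
# using Gurobi;  model = Model(Gurobi.Optimizer)
# using CPLEX;   model = Model(CPLEX.Optimizer)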

👏 👏 👏
I owe you a beer. A lot of beers!! It works perfectly and is lightning fast. For size(A) = (133, 12067) the GLPK solver takes 0.05 seconds! Thanks!!
