User-defined objective function for MICP

#1

I have the following objective function (w is the variable vector of size nx):

function obj_trace(w::Array{Float64,1})
    dg = Diagonal(w)
    M = F' * (gamma \ dg) * F + eye(nx)
    obj = trace(inv(M))
    return obj
end 

where gamma is diagonal (F, nx, gamma are given), and the following constraints:
(1) sum(w) <= c (a known integer);
(2) each element in w is binary (either 0 or 1).
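For reference, here is a minimal modern-Julia (1.x) sketch of this objective together with its analytic gradient, using the matrix-calculus identity d tr(M⁻¹) = −tr(M⁻¹ dM M⁻¹) with dM/dwᵢ = (1/γᵢ) fᵢ fᵢ', where fᵢ is the i-th row of F. The code in this thread targets Julia 0.6 (hence `eye` and `trace`); `F`, `gamma`, and `nx` below are random stand-ins, not the original data:

```julia
using LinearAlgebra, Random

Random.seed!(1)
nx = 5
F = randn(nx, nx)                            # stand-in for the given F
gamma = Diagonal(abs.(randn(nx)) .+ 1.0)     # stand-in for the given diagonal gamma

# M(w) = F' * inv(gamma) * Diagonal(w) * F + I  (symmetric, PD for w >= 0)
obj_trace(w) = tr(inv(F' * (gamma \ Diagonal(w)) * F + I))

# Gradient: d tr(inv(M))/dw_i = -(1/gamma_ii) * f_i' * M^-2 * f_i,
# where f_i = F[i, :] is the i-th row of F.
function grad_trace(w)
    Minv = inv(F' * (gamma \ Diagonal(w)) * F + I)
    M2 = Minv * Minv
    return [-(F[i, :]' * M2 * F[i, :]) / gamma[i, i] for i in 1:nx]
end
```

A quick central-difference check confirms the gradient formula, which is what one would hand to Ipopt as the user-defined derivative.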

I have tried the Pajarito package in Julia with a different formulation: SDP constraints and no user-defined function. The algorithm (Mosek + CPLEX) does not seem to terminate even for small nx. Is there a better (faster) way to solve this MICP with a user-defined function? Ideally I would like to use Ipopt as the continuous solver and provide a user-defined gradient and Hessian as well.

Any suggestion is appreciated!


#2

Hi! Pajarito only accepts conic models; see the conic section of the MathProgBase documentation. Pavito is another MICP solver that only accepts NLP models; see the nonlinear programming section of the MathProgBase docs. If you need SDP constraints, then you should use Pajarito, which will require rewriting any NLP constraints in conic form. Convex.jl is a package that does this automatically, though it currently may emit many deprecation warnings, and you may need to use the branch in the open PR for the 0.7 update (if you use Julia 0.7+).
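The conic route described here rests on the standard epigraph trick for the trace of an inverse: tr(inv(M)) = min tr(Y) subject to [M I; I Y] ⪰ 0, by the Schur complement. This is how the objective can be posed for Pajarito without any user-defined function. A small numerical sketch of the identity at the optimum Y = inv(M), with a random PSD stand-in for M:

```julia
using LinearAlgebra, Random

Random.seed!(1)
n = 4
A = randn(n, n)
M = Symmetric(A' * A + I)     # a PSD matrix standing in for M(w)

# At the optimum of  min tr(Y) s.t. [M I; I Y] >= 0,  Y equals inv(M),
# and the block matrix is PSD (singular on the boundary).
Y = inv(M)
Id = Matrix(1.0I, n, n)
block = Symmetric([Matrix(M) Id; Id Matrix(Y)])
```

Any feasible Y satisfies Y ⪰ M⁻¹, so tr(Y) ≥ tr(M⁻¹), with equality at Y = M⁻¹; that is why minimizing tr(Y) under the PSD block constraint reproduces the original objective.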


#3

Thank you for the suggestion and for the information about Pavito! I do not have to use SDP constraints. My objective is initially defined as a function, and I have also computed its gradient and Hessian, so I would like to use Ipopt to solve the relaxed continuous problem with a user-defined (function, gradient, Hessian). However, I am not sure how to write this in Julia with a MICP solver.


#4

Pavito is able to use Ipopt for continuous relaxations and accepts user-defined functions through JuMP. However, JuMP does not support providing Hessians for user-defined functions (https://github.com/JuliaOpt/JuMP.jl/issues/1198). You could also call Bonmin through its C++ API.
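For later readers: in current JuMP (1.x) the registration API looks slightly different from the JuMP 0.18 code in this thread, a hand-written gradient can be supplied at registration time (Hessians for multivariate user-defined functions are still unsupported, per the issue above), and splatting a variable array into the function works directly. A hedged sketch on a toy objective, not the poster's model:

```julia
using JuMP, Ipopt

# Toy user-defined function and its mutating gradient callback
# (the gradient is written into `g` in place, as JuMP requires).
f(x...) = sum(xi^2 for xi in x)
function ∇f(g, x...)
    for i in eachindex(x)
        g[i] = 2 * x[i]
    end
    return
end

model = Model(Ipopt.Optimizer)
set_silent(model)
register(model, :f, 3, f, ∇f)          # 3 arguments, user-supplied gradient
@variable(model, 1 <= x[1:3] <= 2)
@NLobjective(model, Min, f(x...))      # splatting is supported here
optimize!(model)
```

With these bounds the minimizer is x = (1, 1, 1), so `value.(x)` should be approximately ones.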


#5

Thank you for suggesting Bonmin via its C++ API; I would like to try that later if this does not work out in Julia. Indeed, Pavito can use IpoptSolver as the continuous solver. I tried it as follows, but Julia returned an error.

using JuMP, Pavito
nx = 100
c = 10
F = rand(nx, nx)

function my_obj(w...)
    M = eye(nx)
    for i = 1 : nx
        # outer product of the i-th row of F, so M = F' * Diagonal(w) * F + I
        # (gamma omitted in this simplified version)
        M += w[i] * F[i, :] * F[i, :]'
    end
    return trace(inv(M))
end

mip_solver_drives = true
rel_gap = 1e-5

using CPLEX
mip_solver = CplexSolver(
    CPX_PARAM_SCRIND = (mip_solver_drives ? 1 : 0),
    CPX_PARAM_EPINT = 1e-8,
    CPX_PARAM_EPRHS = 1e-7,
    CPX_PARAM_EPGAP=(mip_solver_drives ? 1e-5 : 1e-9)
)

using Ipopt
cont_solver = IpoptSolver(print_level = 0)

solver = PavitoSolver(
    mip_solver_drives = mip_solver_drives,
    log_level = 1,
    rel_gap = rel_gap,
    mip_solver = mip_solver,
    cont_solver = cont_solver,
)

model = Model(solver = solver)
JuMP.register(model, :my_obj, nx, my_obj, autodiff = true)
w = @variable(model, [j = 1 : nx], Bin, lowerbound = 0, upperbound = 1)
@constraint(model, sum(w) <= c)
@NLobjective(model, Min, my_obj(w))

Error message: LoadError: Incorrect number of arguments for “my_obj” in nonlinear expression. This is also related to a previous question: https://discourse.julialang.org/t/passing-an-array-of-variables-to-a-user-defined-non-linear-function/4132.

Currently I use autodiff in the code, but I will provide the gradient once the code runs for this simpler case. Could you give some advice on how I should define the function?


#7

Thank you very much for the comment, and I have edited my code.


#8

I can’t run your code because F and nx are not defined. When posting code, try to make it as small as possible and totally self-contained. Here is a minimal working example that demonstrates the same problem:

using JuMP
model = Model()
f(x, y) = x^y
JuMP.register(model, :f, 2, f, autodiff=true)
@variable(model, x[1:2] >= 0)
@NLobjective(model, Min, f(x...))  # Error: incorrect number of arguments

If you follow the advice at the end of the thread you linked, you can set the objective as:

JuMP.setNLobjective(model, :Min, Expr(:call, :f, x...))

Hope that helps!


#9

That helps indeed! However, the problem is still not solved properly:

using JuMP, Pavito
nx = 10
c = 2
F = rand(nx, nx)

function my_obj(w...)
    M = eye(nx)
    for i = 1 : nx
        # outer product of the i-th row of F, so M = F' * Diagonal(w) * F + I
        # (gamma omitted in this simplified version)
        M += w[i] * F[i, :] * F[i, :]'
    end
    return trace(inv(M))
end

mip_solver_drives = true
rel_gap = 1e-5

using CPLEX
mip_solver = CplexSolver(
    CPX_PARAM_SCRIND = (mip_solver_drives ? 1 : 0),
    CPX_PARAM_EPINT = 1e-8,
    CPX_PARAM_EPRHS = 1e-7,
    CPX_PARAM_EPGAP=(mip_solver_drives ? 1e-5 : 1e-9)
)

using Ipopt
cont_solver = IpoptSolver(print_level = 0)

solver = PavitoSolver(
    mip_solver_drives = mip_solver_drives,
    log_level = 1,
    rel_gap = rel_gap,
    mip_solver = mip_solver,
    cont_solver = cont_solver,
)

model = Model(solver = solver)
JuMP.register(model, :my_obj, nx, my_obj, autodiff = true)
w = @variable(model, [j = 1 : nx], Bin, lowerbound = 0, upperbound = 1)
@constraint(model, sum(w) <= c)
# @NLobjective(model, Min, my_obj(w))  # errors: incorrect number of arguments
@eval @JuMP.NLobjective(model, Min, $(Expr(:call, :my_obj, [Expr(:ref, :w, i) for i = 1 : nx]...)))
status = solve(model)  # solve before querying variable values
println("solution\n$(getvalue(w))\n")

It gave me the solution of all NaNs with a warning: “Variable value not defined for component of anon. Check that the model was properly solved.” The current Julia version is 0.6. Is there anything I am missing here?


#10

Did you try:

JuMP.setNLobjective(model, :Min, Expr(:call, :my_obj, w...))

#11

Yes, I just tried; this time it returned NaNs without warnings…