User-defined objective function for MICP

I have the following objective function (w is the variable vector of size nx):

function obj_trace(w::Array{Float64,1})
    dg = Diagonal(w)
    # gamma \ dg == inv(gamma) * Diagonal(w), cheap since gamma is diagonal
    M = F' * (gamma \ dg) * F + eye(nx)
    return trace(inv(M))
end

where gamma is diagonal (F, nx, and gamma are given), subject to the following constraints:
(1) sum(w) <= c (a known integer);
(2) each element of w is binary (either 0 or 1).
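
Written out, the problem is

minimize    trace( inv( F' * inv(gamma) * Diagonal(w) * F + eye(nx) ) )
subject to  sum(w) <= c
            w[i] in {0, 1},  i = 1, ..., nx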

I have tried the Pajarito package in Julia using a different formulation: SDP constraints and no user-defined function. The algorithm (Mosek + CPLEX) does not seem to terminate even for small nx. Is there a better (faster) way to solve this MICP with a user-defined function? Ideally I would like to use Ipopt as the cont-solver and provide a user-defined gradient and Hessian as well.

Any suggestion is appreciated!

Hi! Pajarito only accepts conic models - see the conic section of the MathProgBase documentation. Pavito is another MICP solver that only accepts NLP models - see the nonlinear programming section of the MathProgBase docs. If you need SDP constraints, you should use Pajarito, which will require rewriting any NLP constraints in conic form. Convex.jl is a package that does this automatically, though it currently may emit many deprecation warnings and you may need to use the branch in the open PR for the 0.7 update (if you use Julia 0.7+).
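
For concreteness, here is a rough sketch (Julia 0.6 / JuMP 0.18 syntax) of the kind of Schur-complement reformulation Pajarito needs: introduce Y with Y >= inv(M(w)) imposed as an SDP constraint and minimize trace(Y). The data, the solver settings, and the simplified M(w) = I + sum_i w_i * f_i * f_i' (gamma dropped) are placeholders, so treat this as a starting point rather than tested code.

using JuMP, Pajarito, Mosek, CPLEX

nx = 10; c = 2
srand(1)
F = rand(nx, nx)

solver = PajaritoSolver(mip_solver = CplexSolver(), cont_solver = MosekSolver(), log_level = 1)
model = Model(solver = solver)

@variable(model, w[1:nx], Bin)
@variable(model, Y[1:nx, 1:nx], SDP)
@constraint(model, sum(w) <= c)

# M(w) = I + sum_i w_i * f_i * f_i' (f_i the i-th row of F), affine in w
Mw = eye(nx) + [sum(F[i, j] * F[i, k] * w[i] for i in 1:nx) for j in 1:nx, k in 1:nx]

# Schur complement: [M I; I Y] >= 0 is equivalent to Y >= inv(M) for M > 0,
# so minimizing trace(Y) minimizes trace(inv(M))
@SDconstraint(model, [Mw eye(nx); eye(nx) Y] >= 0)
@objective(model, Min, sum(Y[i, i] for i in 1:nx))

status = solve(model)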

Thank you for the suggestion and the information about Pavito! I do not have to use SDP constraints. My objective is initially defined as a function, and I have also computed its gradient and Hessian, so I would like to use Ipopt to solve the relaxed continuous problem with user-defined (func, grad, hess). However, I am not sure how to write this in Julia with a MICP solver.

Pavito is able to use Ipopt for continuous relaxations and to take user-defined functions through JuMP. However, JuMP does not support providing Hessians for user-defined functions (Feature Request: Second Derivatives for User Defined Functions · Issue #1198 · jump-dev/JuMP.jl · GitHub). You could also call Bonmin through its C++ API.
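
For what it is worth, JuMP does let you supply a hand-coded gradient (just not a Hessian) when registering a multivariate function. A toy sketch in JuMP 0.18 syntax, with f and ∇f as illustrative names:

using JuMP

f(x...) = sum(x[i]^2 for i in 1:length(x))

# gradient callback: fills the preallocated vector g in place
function ∇f(g, x...)
    for i in 1:length(x)
        g[i] = 2 * x[i]
    end
end

model = Model()
JuMP.register(model, :f, 3, f, ∇f)   # value and gradient supplied; Hessian unavailable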

Thank you for suggesting Bonmin in C++; I would like to try that later if this does not work in Julia. Indeed, Pavito can use IpoptSolver as the cont-solver, and I have tried it as follows, but Julia returned an error.

using JuMP, Pavito
nx = 100
c = 10
F = rand(nx, nx)

function my_obj(w...)
    M = eye(nx)
    for i = 1:nx
        # rank-one update: outer product F[:, i] * F[:, i]', not the inner product
        M += w[i] * (F[:, i] * F[:, i]')
    end
    return trace(inv(M))
end

mip_solver_drives = true
rel_gap = 1e-5

using CPLEX
mip_solver = CplexSolver(
    CPX_PARAM_SCRIND = (mip_solver_drives ? 1 : 0),       # screen output only when the MIP solver drives
    CPX_PARAM_EPINT = 1e-8,                               # integrality tolerance
    CPX_PARAM_EPRHS = 1e-7,                               # feasibility tolerance
    CPX_PARAM_EPGAP = (mip_solver_drives ? 1e-5 : 1e-9),  # relative MIP gap
)

using Ipopt
cont_solver = IpoptSolver(print_level = 0)

solver = PavitoSolver(
    mip_solver_drives = mip_solver_drives,
    log_level = 1,
    rel_gap = rel_gap,
    mip_solver = mip_solver,
    cont_solver = cont_solver,
)

model = Model(solver = solver)
JuMP.register(model, :my_obj, nx, my_obj, autodiff = true)
w = @variable(model, [j = 1 : nx], Bin, lowerbound = 0, upperbound = 1)
@constraint(model, sum(w) <= c)
@NLobjective(model, Min, my_obj(w))   # this line triggers the error below

Error message: LoadError: Incorrect number of arguments for “my_obj” in nonlinear expression. This is also related to a previous question: https://discourse.julialang.org/t/passing-an-array-of-variables-to-a-user-defined-non-linear-function/4132.

Currently I use autodiff in the code, but I will provide the gradient once the code runs for this simpler case. Could you give some advice on how I should define the function?

Thank you very much for the comment, and I have edited my code.

I can’t run your code because F and nx are not defined. When posting code, try to make it as small as possible and totally self-contained. Here is a minimal working example that demonstrates the same problem:

using JuMP
model = Model()
f(x, y) = x^y
JuMP.register(model, :f, 2, f, autodiff=true)
@variable(model, x[1:2] >= 0)
@NLobjective(model, Min, f(x...))  # Error: incorrect number of arguments

If you follow the advice at the end of the thread you linked to, you can set the objective programmatically (the macro cannot splat an array of JuMP variables, but building the call expression by hand works):

JuMP.setNLobjective(model, :Min, Expr(:call, :f, x...))

Hope that helps!

That helps indeed! However, the problem was still not solved properly:

using JuMP, Pavito
nx = 10
c = 2
F = rand(nx, nx)

function my_obj(w...)
    M = eye(nx)
    for i = 1:nx
        # rank-one update: outer product F[:, i] * F[:, i]', not the inner product
        M += w[i] * (F[:, i] * F[:, i]')
    end
    return trace(inv(M))
end

mip_solver_drives = true
rel_gap = 1e-5

using CPLEX
mip_solver = CplexSolver(
    CPX_PARAM_SCRIND = (mip_solver_drives ? 1 : 0),
    CPX_PARAM_EPINT = 1e-8,
    CPX_PARAM_EPRHS = 1e-7,
    CPX_PARAM_EPGAP = (mip_solver_drives ? 1e-5 : 1e-9)
)

using Ipopt
cont_solver = IpoptSolver(print_level = 0)

solver = PavitoSolver(
    mip_solver_drives = mip_solver_drives,
    log_level = 1,
    rel_gap = rel_gap,
    mip_solver = mip_solver,
    cont_solver = cont_solver,
)

model = Model(solver = solver)
JuMP.register(model, :my_obj, nx, my_obj, autodiff = true)
w = @variable(model, [j = 1 : nx], Bin, lowerbound = 0, upperbound = 1)
@constraint(model, sum(w) <= c)
# @NLobjective(model, Min, my_obj(w))
@eval @JuMP.NLobjective(model, Min, $(Expr(:call, :my_obj, [Expr(:ref, :w, i) for i = 1 : nx]...)))
status = solve(model)   # solve before querying variable values
println("solution\n$(getvalue(w))\n")

It gave me a solution of all NaNs with a warning: “Variable value not defined for component of anon. Check that the model was properly solved.” The current Julia version is 0.6. Is there anything I am missing here?

Did you try:

JuMP.setNLobjective(model, :Min, Expr(:call, :my_obj, w...))

Yes, I just tried it, and this time it returned NaNs without warnings…

I’m not an expert on Julia, so it’s just a hypothesis. Did you set start values for the variable vector? If not, since lowerbound = 0, the initial value may be the zero vector; then the obj_trace function may have a division by 0 and hence produce NaNs.
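
If you want to test this, a small sketch of setting start values in JuMP 0.18 (0.5 is just an arbitrary interior point):

for j in 1:nx
    setvalue(w[j], 0.5)   # initial value passed to Ipopt for the relaxation
end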

Thank you for pointing it out. Right, I have not set a starting point. But even if the initial value is the zero vector, the M matrix is the identity plus a positive semidefinite matrix, so it is always invertible and would not produce NaNs.

Ah yes, sorry! It must be another reason.
When you used Pavito, did you get the “Unsupported feature Hess” error? JuMP doesn’t support Hessians for user-defined vector functions, and that is the error I was getting with Pavito and Ipopt + CbcSolver. It might be MILP-solver-dependent, though.

Yes, I got exactly the same error “Unsupported feature Hess”, but with Ipopt + CPLEX.

See Disable Hessians when not provided by JuMP by spockoyno · Pull Request #12 · jump-dev/Pavito.jl · GitHub

Pavito just needs a new release.

New version tagged. Please let us know if there are still any issues.

https://github.com/JuliaLang/METADATA.jl/pull/19819

The code can run without error now. Nice! I hope the next step is to allow user-defined Hessians. Please let me know if there is a way to pass a user-defined (val, grad, hess) to Pavito.

There is no way to do so through JuMP (Feature Request: Second Derivatives for User Defined Functions · Issue #1198 · jump-dev/JuMP.jl · GitHub). You could call Pavito directly through MathProgBase, which would let you implement the derivative callbacks for the complete model.
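
For illustration, here is a rough sketch of such an evaluator for this problem, written against the Julia 0.6 MathProgBase nonlinear interface. The type name TraceEval, the random data, and the derivative formulas are mine and worth double-checking before use.

using MathProgBase, Pavito, Ipopt, CPLEX

nx = 10; c = 2
srand(1)
F = rand(nx, nx)

struct TraceEval <: MathProgBase.AbstractNLPEvaluator end

# M(w) = I + sum_i w_i * f_i * f_i', with f_i the i-th row of F
M(w) = eye(nx) + sum(w[i] * (F[i, :] * F[i, :]') for i in 1:nx)

MathProgBase.initialize(d::TraceEval, rf::Vector{Symbol}) = nothing
MathProgBase.features_available(d::TraceEval) = [:Grad, :Jac, :Hess]

# objective: f(w) = trace(inv(M(w)))
MathProgBase.eval_f(d::TraceEval, w) = trace(inv(M(w)))

# gradient: df/dw_i = -f_i' * M^{-2} * f_i
function MathProgBase.eval_grad_f(d::TraceEval, g, w)
    Minv2 = inv(M(w))^2
    for i in 1:nx
        g[i] = -dot(F[i, :], Minv2 * F[i, :])
    end
end

# single linear constraint sum(w) <= c
MathProgBase.eval_g(d::TraceEval, g, w) = (g[1] = sum(w))
MathProgBase.jac_structure(d::TraceEval) = (fill(1, nx), collect(1:nx))
MathProgBase.eval_jac_g(d::TraceEval, J, w) = fill!(J, 1.0)

# dense lower-triangular Hessian of the Lagrangian; the constraint is linear,
# so only the objective contributes:
# d2f/dw_i dw_j = 2 * (f_i' * M^{-1} * f_j) * (f_i' * M^{-2} * f_j)
function MathProgBase.hesslag_structure(d::TraceEval)
    rows = Int[]; cols = Int[]
    for i in 1:nx, j in 1:i
        push!(rows, i); push!(cols, j)
    end
    return rows, cols
end
function MathProgBase.eval_hesslag(d::TraceEval, H, w, σ, μ)
    Minv = inv(M(w)); Minv2 = Minv^2
    k = 0
    for i in 1:nx, j in 1:i
        k += 1
        H[k] = 2σ * dot(F[i, :], Minv * F[j, :]) * dot(F[i, :], Minv2 * F[j, :])
    end
end

solver = PavitoSolver(mip_solver = CplexSolver(), cont_solver = IpoptSolver(print_level = 0))
m = MathProgBase.NonlinearModel(solver)
MathProgBase.loadproblem!(m, nx, 1, zeros(nx), ones(nx), [-Inf], [float(c)], :Min, TraceEval())
MathProgBase.setvartype!(m, fill(:Bin, nx))
MathProgBase.optimize!(m)
println(MathProgBase.status(m), "\n", MathProgBase.getsolution(m))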