First alpha release of JuMP 0.19 (JuMP/MathOptInterface)

To help out early adopters in the big transition to JuMP/MathOptInterface, we’ll be making development tags of JuMP master up to the JuMP 0.19 release. The first development release, v0.19-alpha, is now tagged. This is the first tag of the JuMP/MathOptInterface branch that we’ve been working on for the past year.

See the list of breaking changes and new features that will become the release notes for 0.19. We also list the major known issues that we intend to fix before the final release.

See the docs for a list of solvers that are compatible and not yet compatible with JuMP/MathOptInterface.

To install on Julia 1.0:

] add JuMP#v0.19-alpha

The tag is also compatible with Julia 0.6, but this compatibility may be dropped before the 0.19 release.

Note: alpha means that breaking changes are still underway. We’re asking for interested early adopters to test out the development release and provide feedback. Expect important things to occasionally be broken or not yet implemented. Expect to look at the (now much tidier) source code to understand how something works. Expect that your existing JuMP models will not work without some changes. We’ve put a lot of effort into testing, and there are no known correctness bugs, but we still recommend caution since the code is new. Please don’t use the alpha release to run critical services. :slight_smile:


Great! I’m learning the new grammar.

using JuMP, GLPK
m = Model(with_optimizer(GLPK.Optimizer))
@variable(m, x, Bin)
@variable(m, y[1:10] >= 0)
@objective(m, Min, -x + sum(y[i] for i in 1:10))
JuMP.optimize!(m)  # solve before querying result values
@show JuMP.result_value(x)
@show JuMP.result_value(y[1])
@show JuMP.result_value(y)

The last line produces an error.

julia> @show JuMP.result_value(y)
ERROR: MethodError: no method matching result_value(::Array{VariableRef,1})
Closest candidates are:
  result_value(::VariableRef) at /Users/chkwon/.julia/packages/JuMP/LjMor/src/variables.jl:557
  result_value(::JuMP.GenericAffExpr) at /Users/chkwon/.julia/packages/JuMP/LjMor/src/aff_expr.jl:244
 [1] top-level scope at show.jl:555

Is this a bug?

No, not a bug. We’re following the Julia convention of using broadcast syntax for operations that map over every element of an array. Use JuMP.result_value.(y) instead.
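The same pattern holds for any scalar function in Julia, not just result_value. Here is a minimal, JuMP-free illustration (square, xs, and squares are names invented for this sketch):

```julia
# A scalar function, analogous to JuMP.result_value on a single VariableRef.
square(v) = v^2

xs = [1, 2, 3]

# Calling square(xs) would throw a MethodError, just like result_value(y) does.
# The trailing dot broadcasts the function over every element instead:
squares = square.(xs)   # [1, 4, 9]
```

So result_value.(y) returns a plain Vector of the solution values for y.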



I have another question. What will be the equivalent way of doing status == :Optimal?

I tried

JuMP.termination_status(m) == MathOptInterface.Success

which works if I install MathOptInterface manually. But it seems that JuMP works without manually installing MOI. Is there a way to do this without MOI?

Anyway, the equivalent of status == :Optimal seems to be

JuMP.termination_status(m) == MathOptInterface.Success && JuMP.primal_status(m) == MathOptInterface.FeasiblePoint

Is this a correct and recommended way?

Yes, your second way is correct.

I usually add a const MOI like so:

const MOI = JuMP.MathOptInterface

JuMP.termination_status(model) == MOI.Success

Maybe JuMP should export the MOI symbol.


Thanks! I think it is a wonderful idea to export MOI.

When Clp finds a problem infeasible, it returns Success. I guess it needs to be InfeasibleNoResult.

using JuMP, Clp

solver = Clp.Optimizer

m = Model(with_optimizer(solver))

@variable(m, x >= 0)
@objective(m, Min, x)
@constraint(m, x <= -1)
JuMP.optimize!(m)  # solve so the statuses below are populated

term_status = JuMP.termination_status(m)
primal_status = JuMP.primal_status(m)
dual_status = JuMP.dual_status(m)

@show solver
@show term_status
@show primal_status
@show dual_status
Coin0507I Presolve determined that the problem was infeasible with tolerance of 1e-08
Clp3003W Analysis indicates model infeasible or unbounded
Clp0006I 0  Obj 0 Primal inf 0.9999999 (1)
Clp0006I 0  Obj 0 Primal inf 0.9999999 (1)
Clp0001I Primal infeasible - objective value 0
Clp0032I PrimalInfeasible objective 0 - 0 iterations time 0.002
solver = Clp.Optimizer
term_status = Success::TerminationStatusCode = 0
primal_status = UnknownResultStatus::ResultStatusCode = 8
dual_status = InfeasibilityCertificate::ResultStatusCode = 4

Shouldn’t term_status be InfeasibleNoResult?

The dual_status shows InfeasibilityCertificate, thus Clp has found a Farkas proof of primal infeasibility.

If Clp was unable to find a certificate of primal infeasibility, it should return InfeasibleNoResult.

Some solvers (e.g. nonlinear ones) may return a termination status Success and a primal status InfeasiblePoint.

This is a good example of how the MOI status reporting system allows a much richer description of why the solver terminated.
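To make the recommended check above concrete, a user-side acceptance test could be wrapped in a small helper. This is only a sketch: has_usable_solution is a name invented here, MOI.Success and MOI.FeasiblePoint are the status names used in this alpha, and JuMP plus a solver such as GLPK are assumed to be installed.

```julia
using JuMP, GLPK
const MOI = JuMP.MathOptInterface

# Hypothetical helper: accept a result only when the solver both terminated
# successfully AND reports a feasible primal point. Checking termination_status
# alone is not enough, as the Clp example above shows.
function has_usable_solution(model)
    return JuMP.termination_status(model) == MOI.Success &&
           JuMP.primal_status(model) == MOI.FeasiblePoint
end

model = Model(with_optimizer(GLPK.Optimizer))
@variable(model, x >= 0)
@objective(model, Min, x)
JuMP.optimize!(model)

has_usable_solution(model)  # true for this trivially feasible LP
```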

Question: What is the way to access not-yet-implemented API functions? Previously I would do a ccall to m.internalModel.inner.ptr.

See the RawSolver attribute, e.g., MOI.get(model, MOI.RawSolver()). The plumbing might not be connected for your favorite solver though.

MOI.get(model, MOI.RawSolver()) gives me an error:

ERROR: OptimizeNotCalled()

Is there a way to access the internal model before optimizing?

Solution Found: (disregard previous message)
I used MOI.get(model.moi_backend.optimizer, MOI.RawSolver()) and it worked

Edit: The returned internal model is empty both for Gurobi and CPLEX before the model is solved. Something off with the plumbing?

Edit to above post: Found how to get the CPLEX.Model to be synchronized with the JuMP model:


With this, you can now access the model variables, constraints, etc in the internal model.

That’s correct. I would also recommend using direct mode if you intend to map variables between the underlying solver and the JuMP model.
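In direct mode the JuMP model talks straight to a solver instance, with no caching layer in between, so the underlying solver model stays synchronized with the JuMP model automatically. A minimal sketch, assuming the direct_model constructor from the 0.19 line and GLPK as the example solver:

```julia
using JuMP, GLPK
const MOI = JuMP.MathOptInterface

# Direct mode: pass a solver *instance* (not with_optimizer) so every JuMP
# operation is applied immediately to the underlying solver model.
model = JuMP.direct_model(GLPK.Optimizer())
@variable(model, x >= 0)
@objective(model, Min, x)

# The raw solver object is available right away, before optimize! is called,
# which avoids the OptimizeNotCalled / empty-inner-model issues above.
inner = MOI.get(model, MOI.RawSolver())
```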

I’m closing this thread because the discussion has veered away from the 0.19 alpha release. Please open a new topic if you have additional questions on accessing the internal model.