ANN: Convex 0.13 released

This is a breaking release of Convex.jl which switches the backend from the deprecated MathProgBase (MPB) to MathOptInterface (usually shortened to MOI). This means that you should now use MOI-compatible solvers. For example, if you previously solved a problem p with

using Convex, ECOS
p = …
solve!(p, ECOSSolver())

then you should change the solve! command to

solve!(p, ECOS.Optimizer)

To pass arguments to the optimizer, create a closure, e.g.

solve!(p, () -> ECOS.Optimizer(verbose=0))

There are a few other, smaller changes listed in the NEWS file.

This change has a few benefits.

Performance

First, MOI can be much more performant than MPB (see e.g. issue #353, “v0.13 now as fast as R/CVXR, also scales linearly (v0.12 was quadratic)”). Note, however, that compile times seem to be longer via MOI than they were via MPB.

Arbitrary numeric types

Second, MOI supports arbitrary numeric types. This opens the door to solving optimization problems in high precision, for example. There are only a few high-precision solvers, but now you can use them easily. There are three important steps to solving a problem in high precision.

  1. First, formulate your problem in high precision. For example, instead of x = sqrt(2) * y, write x = sqrt(big(2)) * y.
  2. Second, pass the keyword argument numeric_type to the problem construction call. For example,
    p = minimize(objective; numeric_type = BigFloat)
    
    The numeric_type argument tells Convex.jl what type of number to use when reformulating the problem internally, and what type of number it should tell MathOptInterface to use. Note that if you use complex numbers, you should still pass the real version to numeric_type (see the short illustration just after this list).
  3. Then call the solve! command with a high-precision solver. The two that I know of are Tulip.jl, a pure-Julia linear programming solver, and SDPAFamily.jl, which wraps the C++ solvers SDPA-GMP, SDPA-DD, and SDPA-QD for solving semidefinite programs (SDPs). For example, SDPAFamily provides the optimizer SDPAFamily.Optimizer, which expects BigFloats; to use another numeric type T, write SDPAFamily.Optimizer{T}. Tulip provides Tulip.Optimizer, which defaults to Float64, but you can pass e.g. Tulip.Optimizer{BigFloat} (a sketch with Tulip follows the SDPA example below).
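
To illustrate the note in step 2 about complex numbers: even for a complex-valued problem, numeric_type takes the real type. Here is a minimal sketch (the particular problem is made up purely for illustration):

using Convex

# Toy problem with a complex variable; note that we pass BigFloat
# to numeric_type, not Complex{BigFloat}.
z = ComplexVariable(2)
p = minimize(real(sum(z)), abs(z) <= 1; numeric_type = BigFloat)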

Here is a simple example putting all of this together, solving a trivial SDP with SDPAFamily:

using Convex, SDPAFamily, Test
y = Semidefinite(3)
p = maximize(eigmin(y), tr(y) <= sqrt(big(2)); numeric_type = BigFloat)
solve!(p, () -> SDPAFamily.Optimizer(presolve=true))
@test p.optval ≈ sqrt(big(2))/3 atol=1e-30
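
The same workflow applies to Tulip. Here is a minimal sketch (assuming Tulip.jl is installed; the problem is made up for illustration, and the achievable accuracy depends on the solver's tolerances):

using Convex, Tulip

# A trivial LP solved in extended precision; Tulip.Optimizer{BigFloat}
# tells Tulip to work in BigFloat arithmetic.
x = Variable()
p = minimize(x, x >= sqrt(big(2)); numeric_type = BigFloat)
solve!(p, () -> Tulip.Optimizer{BigFloat}())
p.optval  # ≈ sqrt(big(2)), up to solver tolerances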

Development

The last benefit is that, by switching the backend to MOI, Convex.jl is better positioned for future development. We can lower the user’s code more directly to MOI constructs for improved performance, and we can use MOI’s bridging mechanism instead of atoms in some cases. MOI has a lot of strong development behind it, and Convex can benefit from that!


One relatively new feature of Convex which I haven’t mentioned yet was introduced in 0.12.6: the “problem depot”. Convex.jl has a collection of optimization problems that it uses for end-to-end tests; the problem depot provides a way to easily use these test problems outside of Convex.jl’s own tests. These problems provide a means to check that Convex.jl is correctly modelling problems (which was their original purpose), but they can also be used to test that solvers are correctly solving them.
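
For example, here is a sketch of how one can run the depot against a solver (using SCS just as an illustration; the exclude keyword filters out problems whose names match the given regexes, here the mixed-integer problems that SCS cannot handle):

using Convex, SCS, Test

# Run every problem in the depot against SCS, skipping mixed-integer
# problems; each problem's computed solution is checked against a
# reference value with @test.
@testset "ProblemDepot with SCS" begin
    Convex.ProblemDepot.run_tests(; exclude=[r"mip"]) do problem
        solve!(problem, () -> SCS.Optimizer(verbose=0))
    end
end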

Since Convex.jl supports MathOptInterface (MOI), Convex 0.13 makes this suite of problems available to any MOI-compatible solver. I have set up a repository to test each open-source (and non-MATLAB-based) MOI-compatible solver I could find: ericphanson/ConvexTests.jl (GitHub), which uses the Convex.jl problem depot to test various optimization solvers. It uses GitHub Actions to run all the tests in parallel and prints the results, along with timing information, in a readable format in the documentation pages. Commercial and MATLAB-based solvers are excluded only because I don’t know how to run them on continuous-integration services, or whether that is even possible.

The problem depot, and earlier forms thereof, was used for testing our wrapper of SDPA-GMP and its variants (SDPAFamily.jl). This led to the discovery of a quickly-fixed MOI correctness bug (Bug in SOCtoPSDBridge · Issue #838 · jump-dev/MathOptInterface.jl), the only correctness bug on that tracker, which is itself a testament to the high quality of MOI’s development. It also led to the development of a presolve routine for SDPA-GMP, which allows the solver to pass 36 tests from the problem depot that it otherwise fails, although unfortunately at the cost of long presolve times on 3 problems it could already solve. I am hoping to look into this at some point and speed up the presolve routine.

I therefore think the problem depot has already been a useful resource, and I hope the tooling in ConvexTests.jl makes it easily available to a wider audience. I also hope it can serve as a bit of a call to action, because many solvers fail at least some tests in the problem depot.


New non-breaking release, Convex v0.13.4! See the NEWS file for the changes. I’ll just highlight one nice feature: you can now add constraints directly to variables via add_constraint!. For example, say I am interested in optimizing functions of probability vectors (i.e. vectors with non-negative entries that sum to one). We could create a function

using Convex
function probability_vector(d)
    x = Variable(d)
    add_constraint!(x, x >= 0)
    add_constraint!(x, sum(x) == 1)
    return x
end

and then use it as e.g.

using COSMO, Test
p = probability_vector(3)
q = [1, 0, 0]
problem = maximize(entropy(p), [norm(p-q, 1) <= 0.2])
solve!(problem, COSMO.Optimizer)
@test evaluate(p) ≈ [0.9, 0.05, 0.05] atol=1e-4   # Test Passed

I also have a lightning talk at JuliaCon on Wednesday that discusses Convex.jl which could be of interest.
