I am a little unsure about the difference between these packages. How do I know whether I should use Optimization.jl or Convex.jl? I am pretty used to disciplined convex programming (Convex.jl covers it nicely), but I don't know when Optimization.jl becomes the better tool. PS: I am not an optimization expert, so please, have mercy.
Optimization.jl can be used to solve constrained nonlinear programs, but it doesn’t support conic solvers like SCS or Mosek, and it doesn’t support things like semidefinite programs.
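To make that concrete, here is a sketch of what a constrained nonlinear program can look like in Optimization.jl. The Rosenbrock objective, the constraint, and the `IPNewton` solver from OptimizationOptimJL are just illustrative choices; check the current docs for the exact keyword names:

```julia
using Optimization, OptimizationOptimJL

# Rosenbrock objective, parameterized by p
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

# One inequality constraint: u[1]^2 + u[2]^2 <= 1
cons(res, u, p) = (res .= [u[1]^2 + u[2]^2])

# AutoForwardDiff derives gradients, Hessians, and constraint Jacobians
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = cons)
prob = OptimizationProblem(optf, zeros(2), [1.0, 100.0]; lcons = [-Inf], ucons = [1.0])

sol = solve(prob, IPNewton())  # interior-point Newton from Optim.jl
```

Note that the problem is written as plain Julia functions, with no DCP-style atoms, which is exactly why this route works for general nonlinear programs but not for conic formulations.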
If you want to stay in the DCP world, then use Convex.jl. If you're looking to branch out into different types of problems, then JuMP might be a good choice. It doesn't support DCP, but it does support conic solvers, as well as linear, mixed-integer, and nonlinear programs.
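For comparison, a minimal DCP-style problem in Convex.jl looks something like this (nonnegative least squares via the `sumsquares` atom; SCS is one conic solver choice among several):

```julia
using Convex, SCS

A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
b = [7.0, 8.0, 9.0]

x = Variable(2)

# DCP-compliant: sumsquares is a recognized convex atom,
# so Convex.jl can verify convexity and reformulate to a cone program
problem = minimize(sumsquares(A * x - b), [x >= 0])
solve!(problem, SCS.Optimizer)

problem.optval  # optimal objective value
```

The point of the DCP ruleset is that Convex.jl rejects non-convex expressions at modeling time, which neither JuMP's nonlinear interface nor Optimization.jl does.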
I just saw that JuMP does include Convex.jl. Therefore, it solves DCP problems, doesn't it? If so, can I focus only on JuMP.jl and ignore Convex.jl and Optimization.jl, since it covers both?
JuMP does not “include” Convex.jl.
I will say that I have moved to JuMP (with HiGHS / Mosek / Gurobi solvers) for my convex programming because the “maintenance” warning on Convex.jl scared me about future support…
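For what it's worth, a small convex program is not much longer in JuMP; here is a sketch with the HiGHS solver (the toy QP itself is mine, just to show the shape of the API):

```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, x + 2y <= 4)

# Convex quadratic objective; HiGHS handles LPs and convex QPs
@objective(model, Min, (x - 3)^2 + (y - 2)^2)

optimize!(model)
value(x), value(y)
```

The trade-off is that JuMP will not check convexity for you: if the objective were non-convex, you'd only find out from the solver, whereas Convex.jl would reject the model up front.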
I just said that because my link (JuMP homepage) states that “The JuMP ecosystem includes Convex.jl, an algebraic modeling language for convex optimization based on the concept of Disciplined Convex Programming.”
“maintenance” warning on Convex.jl scared me about future support
That is sad; I need a DCP-compliant Julia package…
Convex.jl is part of the JuMP ecosystem, but it is not compatible with the JuMP.jl modeling library.
We do have plans for DCP in Optimization.jl, along with some potential improvements to the DCP algorithms themselves. But it'll take some infrastructure to get there, so I wouldn't expect it any earlier than 2 years from now. This is one of the projects that FastDifferentiation.jl + JuliaSimCompiler.jl + E-graphs + array symbolics + … is aiming for, but the build-up for the algorithms we want to do (to do it correctly) is like 5 other projects, though it's in our sights. Hopefully we get there before Convex.jl is full maintenance mode.
Interesting. But DCP seems somewhat tangential to the current design of Optimization.jl? Or are you thinking of an interface that lowers generic Julia code into a conic formulation in MOI?
Hopefully we get there before Convex.jl is full maintenance mode.
Convex.jl is in full maintenance mode, and has been for some time. @ericphanson is working on a MOI interface (More MOIified implementation, again by ericphanson · Pull Request #504 · jump-dev/Convex.jl · GitHub), but he doesn't plan to actively maintain Convex.jl.
I think it's fairer to say that Convex.jl lives inside the jump-dev GitHub organization, but none of the core JuMP developers are actively working on or maintaining Convex.jl.
Optimization.jl already interfaces with the LP, QP, etc. stuff of MOI when it can detect certain forms. tl;dr is that DCP would allow us to interface with the convex optimization parts as well.
But here's the longer story. We plan to keep making that better and better, with more and more symbolic compiler analyses of the code and direct integration with tools like JuliaSimCompiler (i.e. the ModelingToolkit backend with better scaling to large functions). The goal of all of this is that someone can just write a standard Julia function, and then the passes realize that it's actually a QP and "cheat", so it's like SciPy.optimize but making use of structure whenever we can detect it. That was the topic of my JuliaCon keynote: how we're integrating algorithms that are blurring the line between symbolics and compilers with symbolic-numeric algorithms and solver libraries that have special hooks.
In that sense, DCP is the clear next domain for this whole setup. That, and the fact that we're doing a lot more convex optimization now that SciML and JuliaSim have grown a large controls userbase (especially with the Boeing investment into JuliaHub), which means that good integration of convex optimization with ModelingToolkit is a must. We still wouldn't pull the trigger unless there was a clear algorithmic advantage over the Boyd lab's work, and we do have some ideas there, which also makes it a good Julia Lab research topic, so the stars are aligning.
Of course, this is going to take some time. We’ve already discussed some directions, but the steps will be:
- Demonstrate improved nonlinear optimization ModelingToolkit/Symbolics codegen via JuliaSimCompiler. The first round of this should be done by October and it will be exciting to finally see the benchmarks.
- Set up specialized forms in Optimization.jl to allow someone to define structured optimization problems more directly (right now the MOI structured optimization stuff can only be accessed through a ModelingToolkit analysis). This would include the ability to directly define the functions for a convex optimization problem.
- ModelingToolkit passes for translating problems into convex optimizations (i.e. DCP).
ModelingToolkit is a whole system for transforming mathematical systems to other systems. Transforming DAEs to ODEs, transforming PDEs to optimization problems, transforming SDEs to PDEs, transforming ODEs to ODEs in ways that reduce the equations or perform other simplifications. It then does codegen to SciML numerical libraries (DifferentialEquations, NonlinearSolve, Optimization, etc.) based on the resulting simplified/transformed form. DCP is just an Optimization → Optimization pass in this framework and actually looks very similar to some other passes in some sense, so it’s a clear next frontier.
That said, the optimization part of all of this infrastructure is still a bit underwhelming. We have some unique bits that aren't in other places, like nonlinear tearing on equality constraints, but until we get array-specialized codegen (the next steps in JuliaSimCompiler) it's still a bit meh. But once that's done, good integration with differential equation solvers for collocation methods is the next clear opportunity (for controls), which then gives convex optimization problems in general, so going down that route is really where it has to go.