I have been slowly adding features and documentation to Nonconvex.jl and the JuliaNonconvex organization at large, and it’s time for a new announcement. Here are the main updates in the new version.
- The README and docs are now updated to include a summary table of the algorithms available.
- The documentation was significantly improved, especially the “Gradients, Jacobians and Hessians” and “Algorithms” sections.
- Defining custom gradients and Hessians got easier (see the first sketch after this list).
- You can use symbolic differentiation for specific functions or for the entire model.
- You can use alternative AD backends for specific functions or for the entire model.
- You can use sparse forward-mode AD for specific functions or for the entire model (the second sketch below shows the per-function wrappers).
- You can use a function’s ChainRules frule together with ForwardDiff.
- You can easily define implicit functions to make use of the implicit function theorem (third sketch below).
- You can track the evaluation history of specific functions and their gradients (fourth sketch below).
- NonconvexIpopt supports symbolic and/or sparse first- and second-order differentiation of the Lagrangian function. The same is true for NonconvexPavito and NonconvexJuniper, which use NonconvexIpopt as a sub-solver.
- Metaheuristics.jl is now wrapped and documented.
- NOMAD.jl is now wrapped and documented, allowing the algorithm to be customized to the model by declaring some constraints as explicit and others as progressive.
- The TOBS algorithm (a nonlinear, binary, heuristic optimization algorithm from the field of topology optimization) has been implemented by @LucasMSpereira and documented. All of the new wrappers plug into the usual load-and-optimize workflow (last sketch below).
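
Custom gradients: one route that works with the default Zygote-based differentiation is to define a `ChainRulesCore.rrule` for your function. Below is a minimal sketch; the toy objective and its gradient are made up for illustration, and `IpoptOptions(first_order = true)` is assumed to select the gradient-only (quasi-Newton) mode of Ipopt. The docs also describe dedicated wrapper types for supplying gradients and Hessians directly.

```julia
using Nonconvex, ChainRulesCore
Nonconvex.@load Ipopt

# Toy objective with a hand-written gradient
f(x) = sum(abs2, x .- 1)

function ChainRulesCore.rrule(::typeof(f), x::AbstractVector)
    val = f(x)
    f_pullback(Δ) = (NoTangent(), 2 .* (x .- 1) .* Δ)
    return val, f_pullback
end

model = Model(f)
addvar!(model, fill(-10.0, 2), fill(10.0, 2))

result = optimize(model, IpoptAlg(), [0.0, 0.0], options = IpoptOptions(first_order = true))
result.minimizer  # ≈ [1.0, 1.0]
```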
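
Per-function wrappers: the “Gradients, Jacobians and Hessians” docs describe functions that wrap a callable so that it is differentiated with a chosen AbstractDifferentiation backend, with sparse forward-mode AD, or symbolically. The sketch below assumes the wrapper names `abstractdiffy`, `sparsify`, and `symbolify` and an `(f, example_input...)` calling convention; the exact keyword arguments, and the variants that wrap a whole `Model`, are documented there.

```julia
using Nonconvex
import AbstractDifferentiation as AD
using ForwardDiff

f(x) = sum(abs2, x)   # toy function, for illustration only
x0 = [1.0, 2.0]

# Assumed: differentiate f with ForwardDiff via AbstractDifferentiation
f_fd = abstractdiffy(f, AD.ForwardDiffBackend(), x0)

# Assumed: sparse forward-mode AD with automatically detected sparsity
f_sparse = sparsify(f, x0)

# Assumed: trace f into symbolic expressions and differentiate those
f_sym = symbolify(f, x0)

# Any of the wrapped functions can then be used as an objective or constraint,
# e.g. Model(f_sparse); analogous wrappers reportedly exist for whole models.
```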
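
Implicit functions: the idea is to supply a `forward` map that solves for y given x, plus the residual `conditions` that define the solution, so gradients come from the implicit function theorem rather than from differentiating through a solver's iterations. A minimal sketch, assuming the constructor is `ImplicitFunction(forward, conditions)` with `conditions(x, y)` returning the residual (both the argument order and the toy problem are assumptions):

```julia
using Nonconvex

# Toy implicit relation: y is defined by y .^ 2 .- x == 0
conditions(x, y) = y .^ 2 .- x

# For illustration the forward solve is closed form; in practice this would be
# an iterative solver whose internals you do not want to differentiate through.
forward(x) = sqrt.(x)

imf = ImplicitFunction(forward, conditions)  # assumed constructor

x = [4.0, 9.0]
y = imf(x)  # ≈ [2.0, 3.0]; gradients of y w.r.t. x use the implicit function theorem
```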
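
History tracking: the docs describe a wrapper that records a function's inputs, outputs, and gradients as they are evaluated during optimization. A minimal sketch, assuming the wrapper is called `TraceFunction`, that the keywords are `on_call`/`on_grad`, and that the recorded history lives in a `trace` field:

```julia
using Nonconvex, Zygote

f(x) = sum(abs2, x)   # toy function, for illustration

tf = TraceFunction(f; on_call = true, on_grad = true)  # assumed signature

tf([1.0, 2.0])                   # value evaluations get recorded
Zygote.gradient(tf, [1.0, 2.0])  # gradient evaluations get recorded too

tf.trace   # assumed field holding the recorded inputs, outputs and gradients
```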
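
Finally, the newly wrapped solvers (Metaheuristics.jl, NOMAD.jl, and TOBS) plug into the same workflow as the existing wrappers: load the wrapper with `Nonconvex.@load`, then call `optimize` with the matching algorithm and options structs. The sketch below is the README example using the Ipopt wrapper; the algorithm and options names for the new wrappers (and NOMAD's explicit vs. progressive constraint declarations) are in the algorithms docs.

```julia
using Nonconvex
Nonconvex.@load Ipopt   # load a solver wrapper; the new wrappers load the same way

# Constrained toy problem from the README
f(x) = sqrt(x[2])
g(x, a, b) = (a * x[1] + b)^3 - x[2]

model = Model(f)
addvar!(model, [0.0, 0.0], [10.0, 10.0])
add_ineq_constraint!(model, x -> g(x, 2, 0))
add_ineq_constraint!(model, x -> g(x, -1, 1))

result = optimize(model, IpoptAlg(), [1.0, 1.0], options = IpoptOptions())
result.minimum, result.minimizer
```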
Happy optimization!