I am excited to announce my new package, NonlinearOptimizationTestFunctionsInJulia.jl. It is designed to be a comprehensive collection of test functions for benchmarking nonlinear optimization algorithms. Each function comes with an analytical gradient and detailed metadata to ensure efficient and reliable testing.
Key Features:
Standardized Test Functions: Includes well-known functions like Rosenbrock, Ackley, and Branin, all with consistent interfaces.
Analytical Gradients: Speed up testing and benchmarking by avoiding finite-difference approximations.
Detailed Metadata: Each function is supplied with information on its starting points, global minima, and properties (e.g., unimodal, multimodal).
Compatibility: Seamless integration with popular optimization packages such as Optim.jl, NLopt.jl, and tools like ForwardDiff.jl.
This package is a tool for anyone looking to develop, test, or compare optimization algorithms.
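To give a flavor of the intended workflow, here is a minimal sketch of handing a test function to Optim.jl; the TEST_FUNCTIONS lookup and the f/gradient! field names are assumptions about the interface, not confirmed API:

using Optim, NonlinearOptimizationTestFunctionsInJulia

# Assumed interface: look up a test function, then pass its objective and
# in-place analytical gradient straight to an Optim.jl solver.
tf = TEST_FUNCTIONS["rosenbrock"]
x0 = [-1.2, 1.0]                        # classical Rosenbrock start point

res = optimize(tf.f, tf.gradient!, x0, LBFGS())
println(Optim.minimizer(res))           # should approach [1.0, 1.0]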
Thanks for the great feedback! You're right that our project shares vibes with CUTEst.jl and OptimizationProblems.jl, but we're carving out a niche with a pure-Julia, lightweight suite starting with Molga & Smutnicki (2005) functions like Cross-in-Tray. We're excited to expand with AMPGO and BBOB functions, like Alpine, to test algorithms across dimensions. Unlike CUTEst.jl, we avoid Fortran dependencies for stability.
I am new to GitHub and not really experienced with Julia, so I thought it's better to start with something of my own first and try to create problems without making problems ^^. The plan is to learn enough to be able to support other projects in the future.
I'm excited to share an update on NonlinearOptimizationTestFunctions.jl!
I have now implemented over 200 test functions, including the full sets from Jamil & Yang (2013) and Molga & Smutnicki (2005). All functions, and their analytical gradients where available, can now be evaluated in high precision, enabling reliable benchmarking for both standard and arbitrary-precision optimization workflows.
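As a quick illustration of the high-precision path, a sketch (the "rosenbrock" key and the tf.f field are assumptions about the interface):

using NonlinearOptimizationTestFunctions

# Assumed interface: generic implementations promote to the input element
# type, so BigFloat inputs give arbitrary-precision results.
tf = TEST_FUNCTIONS["rosenbrock"]
setprecision(BigFloat, 256) do
    x = parse.(BigFloat, ["-1.2", "1.0"])  # parse from strings to avoid Float64 rounding
    println(tf.f(x))                       # objective evaluated at 256-bit precision
end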
In addition, 100+ more unconstrained test functions are already in the pipeline. I've also completed the initial groundwork for incorporating constrained optimization problems, which will significantly expand the package's applicability in future releases.
More updates soon; as always, contributions, feedback, and ideas are welcome!
I'm currently polishing up the documentation and experimenting with a few more demos to make the package even more user-friendly. One feature I'm particularly excited about is the with_box_constraints wrapper, which turns bounded test functions into unconstrained ones using a domain-safe modified exact L1 penalty method.
Unlike traditional L1 penalties, which may evaluate the objective or gradient outside the bounds (risking domain errors or instability), this wrapper clamps points to the feasible region before evaluation and adds a penalty term (with fixed ρ = 10^6) that pulls violations back inward. This ensures safe, stable runs, especially for benchmarks where the bounds define the valid domain, and works seamlessly with any unconstrained optimizer (e.g., LBFGS or BFGS), not just bound-aware ones like Optim.jl's Fminbox.
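Conceptually, the wrapper boils down to something like this minimal sketch; the actual with_box_constraints implementation may differ in its details:

# Sketch of the domain-safe modified exact L1 penalty: clamp before
# evaluating, then penalize the distance to the feasible box.
function box_penalized(f, lb, ub; rho = 1e6)
    return function (x)
        xc = clamp.(x, lb, ub)           # objective is only ever evaluated inside the bounds
        violation = sum(abs, x .- xc)    # L1 distance to the feasible region
        return f(xc) + rho * violation   # fixed rho = 10^6 pulls violations inward
    end
end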
A quick benchmark across all 155 bounded functions (10 random feasible starts each) shows it's often more robust and efficient than Fminbox: higher success rates on tricky cases (e.g., non-differentiable or ill-conditioned functions) and fewer calls when both converge. Check out the updated docs for usage examples and the full comparison.
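If you want to reproduce the comparison yourself, the setup looks roughly like this; the with_box_constraints signature, the bounds, and the start point are assumptions for illustration:

using Optim, NonlinearOptimizationTestFunctions

# Assumed API: wrap a bounded test function, then compare a plain
# unconstrained LBFGS run against Optim.jl's bound-aware Fminbox.
tf = TEST_FUNCTIONS["sixhumpcamelback"]
lb, ub = [-3.0, -2.0], [3.0, 2.0]       # standard camelback box (assumed here)
x0 = [0.5, 0.5]

g = with_box_constraints(tf)            # penalized, domain-safe objective (assumed signature)
res_pen = optimize(g, x0, LBFGS())
res_box = optimize(tf.f, lb, ub, x0, Fminbox(LBFGS()))
println((Optim.minimum(res_pen), Optim.minimum(res_box)))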
Feedback welcome: let me know if you'd like more details or want to try it out!
SHGO.jl: Global Optimization for Test Function Analysis
Quick update on NonlinearOptimizationTestFunctions.jl: I'm building SHGO.jl (Simplicial Homology Global Optimization) to automatically characterize multimodal test functions.
Status
Working prototype finds all local minima + basin count:
using SHGO, NonlinearOptimizationTestFunctions

# Fix the six-hump camelback test function at dimension n = 2
tf = fixed(TEST_FUNCTIONS["sixhumpcamelback"]; n=2)

# Analyze its minima; n_div controls the sampling resolution
result = analyze(tf; n_div=15)

println("$(result.num_basins) basins, global min: $(get_global_value(result))")
# Output: 6 basins, global min: -1.031628
This week I'll validate against scipy.optimize.shgo to check correctness and performance. The goal is to automatically add multimodality metrics to the test function collection.
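For reference, the cross-check could be wired up roughly like this with PythonCall.jl; this glue is an assumed setup, not code that ships with the package:

using PythonCall

so = pyimport("scipy.optimize")

# Six-hump camelback, the same function analyzed above.
camel(x) = (4 - 2.1 * x[1]^2 + x[1]^4 / 3) * x[1]^2 + x[1] * x[2] + (-4 + 4 * x[2]^2) * x[2]^2

# SciPy hands the callback a NumPy array; convert it before evaluating.
res = so.shgo(x -> camel(pyconvert(Vector{Float64}, x)),
              bounds = [(-3.0, 3.0), (-2.0, 2.0)])
println(res.fun)        # should match the prototype's -1.031628
println(pylen(res.xl))  # how many local minima SciPy's SHGO reports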