I thought about collecting Julia code snippets, sorting them, and assembling them into a public git repository.
Does something like this already exist?
I’m happy to start something, but before I do so, ideas would be very welcome.
The trickiest point is how to make it easy to re-run all benchmarks without imposing strict rules on the benchmarks themselves.
Initial ideas:
- Benchmarks are sorted into folders
- Each folder can have its own Project.toml (see the layout sketch after this list)
- Optional: output via Documenter.jl or Weave.jl (https://github.com/JunoLab/Weave.jl)
- but pure code should also be accepted, since that can be collected directly from this forum etc.
- One global script that runs all benchmarks and generates a webpage (see the runner sketch after this list)
- Some mild guidelines to avoid negative benchmark wars (e.g., no comparisons with other languages)
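To make the folder idea more concrete, here is a minimal sketch of how a single benchmark folder could look. All names (`benchmarks/`, `sum_loop/`, `run.jl`) are hypothetical placeholders, not a proposed convention:

```
benchmarks/
└── sum_loop/            # one benchmark per folder (hypothetical name)
    ├── Project.toml     # pins BenchmarkTools and any other dependencies
    ├── Manifest.toml
    └── run.jl           # the actual benchmark script
```

```julia
# benchmarks/sum_loop/run.jl -- hypothetical example benchmark
using BenchmarkTools

function sum_loop(x)
    s = zero(eltype(x))
    @inbounds for v in x
        s += v
    end
    return s
end

x = rand(10_000)
display(@benchmark sum_loop($x))
```

Pinning BenchmarkTools (or whatever the snippet needs) in the folder's Project.toml/Manifest.toml would keep a snippet reproducible even when it was collected verbatim from a forum thread.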
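And a rough sketch of what the global runner could do, assuming the layout above (webpage generation is left out here). It activates each folder's own environment before including the script, so the per-folder Project.toml idea works without a global dependency list:

```julia
# runall.jl -- hypothetical global runner, assuming the layout sketched above
using Pkg

const BENCH_ROOT = joinpath(@__DIR__, "benchmarks")

for dir in filter(isdir, readdir(BENCH_ROOT; join = true))
    script = joinpath(dir, "run.jl")
    isfile(script) || continue
    @info "Running benchmark" dir
    Pkg.activate(dir)       # use the folder's own Project.toml
    Pkg.instantiate()       # install the pinned dependencies
    # run each benchmark in a fresh anonymous module to avoid name clashes
    Base.include(Module(), script)
end
```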
Later:
- More powerful scripts to run benchmarks on different Julia versions and to differentiate outputs from different machine types (small laptop vs. powerful server, etc.), as sketched below.
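For differentiating machines and Julia versions, it might already be enough to tag every result with a small metadata record, roughly like this (the field names are made up):

```julia
# Hypothetical helper that records the environment a benchmark ran on,
# so results from a small laptop and a powerful server can be told apart.
machine_info() = (
    julia_version = string(VERSION),
    os            = string(Sys.KERNEL),
    cpu           = Sys.CPU_NAME,
    nthreads      = Threads.nthreads(),
    total_mem_gb  = round(Sys.total_memory() / 2^30; digits = 1),
)

# Example: attach this NamedTuple to whatever output format the runner writes.
@show machine_info()
```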
Why?
- Making code run faster is one of the most popular categories here.
- Could save time for both beginners and experts
- Conclusions from old benchmarks are often challenged by new Julia versions; re-running benchmarks on newer versions can give new insights.