I've been thinking about collecting Julia code snippets, sorting them, and assembling them into a public git repository.
Does something like this already exist?

I’m happy to start something, but before I do so, ideas would be very welcome.
The trickiest point is how to make it easy to re-run all benchmarks without imposing strict rules on the benchmarks themselves.

Initial ideas:

  • More powerful scripts to run benchmarks on different Julia versions and to keep outputs from different machine types separate (small laptop vs. powerful server, etc.)
  • Making code run faster is one of the most popular categories here.
  • Could save time for beginners and experts alike :slight_smile:
  • Conclusions from old benchmarks are often challenged by new Julia versions; re-running them on newer versions can give new insights.
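The first idea above could start as a thin driver script. Here's a minimal sketch assuming juliaup is installed (its `julia +<version>` selector picks the channel); the paths `benchmarks/runall.jl` and `results/` are hypothetical placeholders, and the command is only echoed here rather than executed:

```shell
#!/usr/bin/env sh
# Hypothetical driver sketch: run every benchmark under several Julia
# versions (selected via juliaup's `julia +<version>`) and keep outputs
# separated per machine, so laptop and server results can coexist.

versions="1.10 1.11"
machine=$(hostname -s)        # tag results with the machine name
outdir="results/$machine"
mkdir -p "$outdir"

for v in $versions; do
  out="$outdir/julia-$v.json"
  # Dry run: print the command instead of executing it, since juliaup
  # may not be available on every machine.
  echo "julia +$v --project=. benchmarks/runall.jl > $out"
done
```

A real version would probably also record CPU/RAM info (e.g. from `Sys.cpu_info()` inside Julia) next to each result file, so benchmarks from different machine classes aren't compared directly.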

There is this GitHub - JuliaLang/Microbenchmarks: Micro benchmark comparison of Julia against other languages, although I'm not sure how up to date it is. Lots of packages also have their own benchmarks. But your idea sounds cool.
