Parametric Markdown comments?

I believe there is a better way to do what you have in mind. Here is one option that works fairly well for most Julia projects:

Set up a “continuous integration” environment for your project. If you host it on GitHub, that is particularly easy: you just add a configuration file to the repository, GitHub detects its presence, and runs tests or benchmarks as prescribed by that file. These tests and/or benchmarks are executed on every single change to the code (e.g. every git push to the repository), for any version of Julia you want, on most operating systems you would probably care about, and on most common CPU architectures.

Here is an example of such a test config file:
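(The workflow below is a minimal sketch rather than the exact file from any particular project: it assumes the standard GitHub Actions setup for a Julia package, stored as `.github/workflows/CI.yml`, and the version/OS matrix is only illustrative.)

```yaml
# .github/workflows/CI.yml -- minimal sketch of CI for a Julia package
name: CI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        julia-version: ['1', 'nightly']   # stable release and nightly builds
        os: [ubuntu-latest, macos-latest, windows-latest]
        julia-arch: [x64]
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
        with:
          version: ${{ matrix.julia-version }}
          arch: ${{ matrix.julia-arch }}
      - uses: julia-actions/julia-buildpkg@v1   # instantiate and build the package
      - uses: julia-actions/julia-runtest@v1    # run test/runtests.jl
```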

This might seem like a lot of boilerplate; however:

  • it is probably less than the custom solution you have in mind
  • it is “standard”, so it is easy to get help from this community, and it will keep working in the future
  • it runs on the “continuous integration” servers provided by GitHub, so you get to test hardware configurations you do not own

You would probably also like to use the git blame command (also available as a pretty web view on GitHub for each file) to see when something was last changed in your code.
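For example (the file path here is just a placeholder):

```sh
# Annotate each line of a file with the commit and author that last changed it:
git blame src/MyPackage.jl

# Restrict the annotation to a specific range of lines:
git blame -L 10,20 src/MyPackage.jl
```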

If you want to get fancier, you can even keep records of the benchmark results. For instance, I keep this separate repository, purely of benchmark records, that shows how the library has evolved (on different Julia versions): QuantumClifford benchmarks. These are generated by a fairly simple plotting script that reads the benchmark logs and uses Makie and DataFrames to make the plots. However, this is a custom solution, not something as widespread as the tools covered above.
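For the curious, here is a rough sketch of what such a script might look like; the log file name and column names are made up for illustration and do not match the actual repository:

```julia
using CSV, DataFrames, CairoMakie

# Hypothetical log format: one CSV row per benchmark run, with columns
# :commit_date, :julia_version, and :time_ns (all names are made up).
df = CSV.read("benchmark_log.csv", DataFrame)
sort!(df, :commit_date)

fig = Figure()
ax = Axis(fig[1, 1]; xlabel = "benchmark run", ylabel = "time (ns)")

# One line per Julia version, showing how performance evolved over time.
for sub in groupby(df, :julia_version)
    lines!(ax, 1:nrow(sub), sub.time_ns; label = string(first(sub.julia_version)))
end

axislegend(ax)
save("benchmark_history.png", fig)
```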
