I have written a Julia package that implements an algorithm also used by a CLI program. I’m trying to perform some benchmarks to see where I stand. Since I eventually want to publish the results, I want to make sure I’m not misleading anyone.
For reference, the CLI program reads a file and spits the results to
The solution I have right now is to create two `BenchmarkGroup`s with the same name: one runs my code, and the other benchmarks `read` on the CLI program with the appropriate switches. I then use `judge` to compare the timings.
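For concreteness, here is a minimal sketch of the setup I mean. `solve` and the `mycli` command are hypothetical stand-ins for my function and the CLI program:

```julia
using BenchmarkTools

suite = BenchmarkGroup()
# My Julia implementation (`solve` is a placeholder name).
suite["julia"] = @benchmarkable solve("input.txt")
# `read` runs the external command and captures its output as a String,
# so this benchmark includes the cost of the program writing its results
# and Julia reading them back.
suite["cli"] = @benchmarkable read(`mycli --input input.txt`, String)

results = run(suite)
# Compare the two medians; `judge` classifies the difference as an
# improvement, regression, or invariant.
judge(median(results["julia"]), median(results["cli"]))
```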
The problem I’m running into is that the `read` benchmark is very slow when the output is large. The program has an option to turn off output, but a user of the program will need to read the results eventually, so I don’t think suppressing it is a completely fair way to benchmark either.
The other concern is the overhead of launching the program from Julia. I’m guessing that’s not a big problem, since the overhead should be small and constant, and the benchmarks take at least a couple of seconds to complete.
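One way I thought of to check that assumption empirically is to benchmark a no-op command, so only the process-spawning cost from Julia is measured (this is my own sketch, not something I’ve seen recommended):

```julia
using BenchmarkTools

# `true` exits immediately without producing output, so this measures
# roughly the fixed cost of spawning an external process from Julia.
# If this is tiny compared to the multi-second benchmarks, the launch
# overhead can presumably be ignored.
@benchmark run($(`true`))
```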
Is there a standard way of handling benchmarks like these? Is there a repo with this kind of benchmarking I can look at?
The second question I have is whether there is a reliable way to benchmark the total memory use of a program or Julia function. The problem I’m solving is memory-intensive, so the total memory used is an important metric.
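What I’ve found so far, in case it helps frame the question: BenchmarkTools reports total bytes *allocated*, which is not the same as peak resident memory, and `Sys.maxrss()` reports the peak resident set size of the whole Julia process (for the external program, GNU `time -v` prints a "Maximum resident set size" line):

```julia
using BenchmarkTools

# Total bytes allocated per evaluation, as tracked by the GC.
b = @benchmark sort(rand(10^6))
b.memory

# Peak resident set size (bytes) of the current Julia process so far;
# this is cumulative for the whole session, not per-function.
Sys.maxrss()
```

Neither of these is quite "total memory used by this one function call", which is why I’m asking if there is a standard approach.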
Thanks for the help!