In the last few days I have been working on eliminating all unnecessary allocations in a package; I found and fixed some type instabilities, etc., and everything is much better now.
I then created a set of tests to check that future changes do not introduce memory allocations by mistake. I ran each of the critical functions of the package twice (so that the second call is free of compilation overhead), measured the allocations with `@timed`, and added to `runtests.jl` tests like:
```julia
x = run_function(data)              # warm-up call (compilation)
t = @timed x = run_function(data)
@test t.bytes == 144
```
Here 144 is, of course, the number of bytes that I expect that function to allocate.
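For reference, the same check can also be written with `@allocated`, which returns the allocated bytes directly (a minimal sketch using the same hypothetical `run_function` and `data`):

```julia
run_function(data)                          # warm-up call (compilation)
@test @allocated(run_function(data)) == 144
```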
The idea is to be sure that no extra allocations are introduced by mistake in future releases.
Everything runs almost fine, except that the CI fails for a few of those tests on other operating systems (Windows, in particular), because the number of allocated bytes there is slightly different from what I measured here (Linux).
Is there a smarter, safer, more consistent way to do this?
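For illustration, the obvious workaround would be to compare against an upper bound instead of the exact value, but that weakens the guarantee (a sketch, with an arbitrary margin):

```julia
t = @timed x = run_function(data)
@test t.bytes <= 160    # arbitrary margin; 144 bytes was measured on Linux
```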
Also, I noticed that some functions which do not allocate anything in real-world executions allocate a few bytes when called inside a `@testset`. Is there any clear reason for that?
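A sketch of how one might isolate this, by wrapping the measurement in a function so that non-constant global bindings at `@testset` scope are not involved (again with the hypothetical `run_function` and `data`):

```julia
using Test

# Measure allocations inside a function, so the result does not depend
# on non-constant globals visible at @testset scope.
function allocs_of_run(data)
    run_function(data)                  # warm-up call (compilation)
    return @allocated run_function(data)
end

@testset "allocations" begin
    @test allocs_of_run(data) == 0
end
```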