Add testing for memory allocations

In the last few days I have been working on eliminating all unnecessary allocations in a package; I found some type instabilities along the way, and everything is much better now.

I then created a set of tests to check that future changes do not introduce memory allocations by mistake. I ran each of the critical functions of the package twice (to warm them up), measured the allocations with @timed, and added to runtests.jl some tests like:

x = run_function(data)
t = @timed x = run_function(data)
@test t.bytes == 144

where 144 is, of course, the number of bytes I expect that function to allocate.

The idea is to be sure that no extra allocations are introduced by mistake in future releases.

Everything runs almost fine, except that the CI run fails for a few tests on other OSes (Windows, in particular), because the number of allocations there is slightly different from what I measured here (Linux).

Is there a smarter, safer, more consistent way to do this?

Also, I noticed that some functions which do not allocate anything in real-world executions allocate a few bytes inside the @testset. Is there any clear reason for that?


While I too am interested in your question, I do wonder if maybe testing that allocations are below some threshold is more reasonable?
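Something like this, for instance (a sketch using the run_function example from the first post, with a made-up slack value):

x = run_function(data)               # warm up, so compilation is not measured
t = @timed x = run_function(data)
@test t.bytes <= 200                 # some slack above the expected 144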

It seems to me that having zero allocations is a desirable goal much of the time. When, on the other hand, there are allocations, I am not sure whether OS differences might produce slightly different results. In the CI test that fails in my package, the Windows version allocates 4 bytes more than the other OSes (out of ~60 thousand).

At the same time, things like this occur:

julia> function mysum(x)
         s = 0.
         for i in eachindex(x)
           s += x[i]
         end
         s
       end
mysum (generic function with 1 method)

julia> x = rand(1000);

julia> mysum(x)
496.3080844783625

julia> @timed mysum(x)
(value = 496.3080844783625, time = 4.538e-6, bytes = 16, gctime = 0.0, gcstats = Base.GC_Diff(16, 0, 0, 1, 0, 0, 0, 0, 0))

julia> f(x) = @timed mysum(x) 
f (generic function with 1 method)

julia> f(x)
(value = 496.3080844783625, time = 1.924e-6, bytes = 0, gctime = 0.0, gcstats = Base.GC_Diff(0, 0, 0, 0, 0, 0, 0, 0, 0))

Note that @timed reports 16 bytes of allocations when run at global scope, and zero inside a function. Inside a @testset it does not allocate either (as expected):

julia> @testset "Start" begin
          x = rand(1000);
          mysum(x)
          t = @timed mysum(x)
          @test t.bytes == 16
       end
Start: Test Failed at REPL[7]:5
  Expression: t.bytes == 16
   Evaluated: 0 == 16

I am still playing with these things; I just remark that care must be taken when writing these tests. And I do not know yet whether different OSes or other conditions can produce different results for reasons unknown to me. There seem to be some tricky things going on, in particular when the functions do allocate memory.

If you’re at top level then Julia may create a box for the return value, but that’s not necessary if the value never “escapes” compiled code.


If it is only 4 bytes more, then you could just loosen the tolerance in your test to absorb this, I think.

Yes, except that I have some expectation that this result should be deterministic. That it is not, at least with the tests as I am writing them, tells me that I should probably be doing something different. But I have to figure out what is going on first. I was hoping that there was a standard and recommended way to do this.

Doing it inside a @testset is pretty standard and avoids the allocation for the returned value. It’s also pretty common to use @allocated for this instead of extracting the bytes field from @timed. The platform differences are harder to give advice about because they are presumably dependent on the particular function you’re testing.
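For example, with the hypothetical run_function from the first post (warming up first, since the first call may compile):

x = run_function(data)                          # warm up first
@test @allocated(run_function(data)) == 144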


Thanks, @allocated is more reasonable indeed.

The function that fails on Windows uses FortranFiles, which is a parser for binary files written by Fortran; that is probably where something tricky concerning the file system is involved.

I have, on the other hand, one example in which the function does not allocate anything when I run it at the top level, but allocates 144 bytes inside the test set. I do not understand that. If I manage to strip it down to an MWE (and still do not understand what is going on), I will post it here. Any hint is appreciated anyway.

Reviving this topic (instead of opening a new one): I find @allocated impractical, as I suspect it involves codegen upon the first invocation. BenchmarkTools.@ballocated works better for me. E.g., on 1.7:

using Test, BenchmarkTools, StaticArrays

@testset "demo" begin
    f(x) = x .+ 1
    x0 = SVector(1.0, 2.0)
    @test @ballocated(f($x0)) == 0
    @test @allocated(f(x0)) == 0 # FAILS
end

I am wondering if I am misusing @allocated though, any advice is appreciated.

f(x) = x .+ 1

@testset "demo" begin
    x0 = SVector(1.0, 2.0)
    @test @allocated(f(x0)) == 0 
end

works for me at least.
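
Since the first invocation can include codegen, as mentioned above, it may also help to warm the function up before measuring. A sketch of the same demo with an explicit warm-up call:

f(x) = x .+ 1

@testset "demo" begin
    x0 = SVector(1.0, 2.0)
    f(x0)                          # warm-up call, so codegen is not measured
    @test @allocated(f(x0)) == 0
end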


Indeed, it fails for me here as well. Yesterday I was tracking some allocations in a package and had to stop using @allocated because I got inconsistent results, unfortunately hard to reproduce now. For the record, the workaround was to use @ballocated, which, since BenchmarkTools is not one of the package's dependencies, had to be invoked with something like:

julia> using BenchmarkTools

julia> module MyPackage
         export f
         function f(x, A)
           @show Main.@ballocated let A = $A, x = $x
             A \ x   # block to be tested
           end
           z = A \ x
           return z
         end
       end

I can confirm that defining the function outside the @testset works, but is there an explanation for this, or is it just a quirk?

It might be that the @testset introduces a new hard scope, so f becomes a closure instead of a standalone function, and that interacts with the allocation macro somehow. I am not sure though.
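
One way to probe that hypothesis would be to reproduce the hard scope with a plain let block, without the @testset machinery (untested sketch):

julia> using StaticArrays

julia> let
         g(x) = x .+ 1             # g is a local function here, as in a @testset
         x0 = SVector(1.0, 2.0)
         g(x0)                     # warm up
         @allocated g(x0)
       end

If this also reports allocations while the top-level version reports zero, the local scope is the culprit; if not, it is something specific to @testset.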
