Memory allocation in member extraction from a struct?

A common idiom for me when operating on a struct is to assign local names to its members. That is, if the argument is A with members foo, bar, and baz, I often begin a method with

function myfunc(A)
    foo, bar, baz = A.foo, A.bar, A.baz
    ...
    A
end

The purpose is to avoid repeated extraction (the compiler may optimize it away but I don’t think that is a certainty) and to make the code somewhat cleaner.

I notice when checking allocations with --track-allocation that there is apparently a considerable amount of allocation at those lines. Am I shooting myself in the foot by doing this? I was under the impression that this would just assign some pointers or, at worst, scalars in the evaluation frame of the method.
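For concreteness, here is a stripped-down version of the pattern and a rough way to check it at the REPL (the type and its field contents are placeholders I made up for illustration; 0.5 syntax):

type Model                      # 0.5 syntax; roughly `mutable struct` on 0.6+
    foo::Vector{Float64}
    bar::Matrix{Float64}
    baz::Float64
end

function extract(A::Model)
    foo, bar, baz = A.foo, A.bar, A.baz   # the idiom in question
    sum(foo) + sum(bar) + baz             # stand-in for the real work
end

A = Model(rand(10), rand(10, 10), 1.0)
extract(A)               # run once so compilation is not counted
@allocated extract(A)    # bytes allocated by one call (global scope adds some noise)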

Is it best to avoid assigning a tuple when doing this?

I have only tested on version 0.5.0; my package has dependencies that are not yet satisfied on 0.6.0-dev.


An easy way to do this is to use the @unpack macro from Parameters.jl
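For example (a minimal sketch, assuming the same field names as in the original post):

using Parameters

function myfunc(A)
    @unpack foo, bar, baz = A   # one local binding per listed field of A
    # ... work with foo, bar, baz ...
    A
end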

I have never found performance issues doing it.

Tuples of non-isbits types are allocated on the stack (edit: I mean heap), so unless the compiler elides the tuple, it is possible that is what you are seeing.


In Parameters.jl I “write it out” (I think I did this to avoid the problem @kristoffer.carlsson mentions):

julia> using Parameters

julia> macroexpand(:(@unpack a, b, c = d))
quote  # /home/mauro/.julia/v0.5/Parameters/src/Parameters.jl, line 559:
    ##270 = d # /home/mauro/.julia/v0.5/Parameters/src/Parameters.jl, line 560:
    begin 
        a = Parameters.unpack(##270,Val{:a}())
        b = Parameters.unpack(##270,Val{:b}())
        c = Parameters.unpack(##270,Val{:c}())
    end # /home/mauro/.julia/v0.5/Parameters/src/Parameters.jl, line 561:
    ##270
end

@dmbates, does writing it out help in your case?

minor typo: I believe you mean “heap” instead of “stack” here.


Yes, thank you.

It doesn’t seem to make a difference if I expand the tuple assignment to separate assignments.
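That is, I replaced the tuple assignment with one field access per line, along these lines (same field names as in my original example):

function myfunc(A)
    foo = A.foo
    bar = A.bar
    baz = A.baz
    # ... rest of the method unchanged ...
    A
end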

The exercise was worthwhile, as I did find a section of the code that was allocating a huge amount of memory unnecessarily. Catching that sort of thing is the whole point of the JuliaCI group's work, and I thank them for their efforts.

I worry that returning a tuple in the first place might force an extra allocation when the elements are not isbits:

julia> using BenchmarkTools

julia> function no_tuple()
           x = rand(100)
           return x
       end
no_tuple (generic function with 1 method)

julia> function yes_tuple()
           x = rand(100)
           return (x, )
       end
yes_tuple (generic function with 1 method)

julia> @benchmark no_tuple();

julia> @benchmark yes_tuple();

julia> @benchmark no_tuple()
BenchmarkTools.Trial:
  memory estimate:  896.00 bytes
  allocs estimate:  1
  --------------
  minimum time:     194.274 ns (0.00% GC)
  median time:      314.712 ns (0.00% GC)
  mean time:        531.899 ns (4.62% GC)
  maximum time:     119.999 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     685
  time tolerance:   5.00%
  memory tolerance: 1.00%

julia> @benchmark yes_tuple()
BenchmarkTools.Trial:
  memory estimate:  912.00 bytes
  allocs estimate:  2
  --------------
  minimum time:     202.074 ns (0.00% GC)
  median time:      309.267 ns (0.00% GC)
  mean time:        374.682 ns (8.69% GC)
  maximum time:     28.957 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     580
  time tolerance:   5.00%
  memory tolerance: 1.00%