Basic question about memory allocation

Hi there,

I’m very new to Julia. I’m not even sure this is the right place to ask my question; let me know if it isn’t! (By the way, if I have further questions relating to Jupyter or Pluto, should I ask them elsewhere?)

My question is about when objects are copied, and when they are not.

(1) I read somewhere that “assignments never create copies”. So x = y should absolutely never allocate more memory. Is that really a formal rule of the language, one that I can count on?

It seems natural for mutable objects (you know that changing x will change y and vice versa), but for immutable objects it’s harder to believe. So x = 5; y = x will cause y to point to the exact same memory block as x?? (Immutable objects can be huge, of course, so this can matter…)
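For what it’s worth, I tried checking with ===, though if I understand correctly, for immutable values === compares contents rather than memory addresses, so it doesn’t really settle the question:

```julia
x = 5
y = x
x === y  # true, but for bits values === compares the bits, not addresses
```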

(2) I have run some experiments, and it seems that when you call a constructor and the fields are immutable, then copies are, this time, made (with one exception, see below). Say

struct TwoInts
    a::Int64
    b::Int64
end

Then if x = 5 and you try TwoInts(x,x), it seems that the memory allocated is what is needed for two Ints, not just one (I’m using Base.summarysize for this, which may be the wrong tool…). So copies are made, and calling a constructor is very different from an assignment. I just wanted to check that I’m not talking nonsense up to here.
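Concretely, here is the measurement I mean (assuming I’m using summarysize correctly, with the TwoInts definition above):

```julia
julia> x = 5;

julia> Base.summarysize(x)
8

julia> Base.summarysize(TwoInts(x, x))
16
```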

Also, if the fields are of mutable type, no copies are made.

All this is what I expected, I guess, but since I expected something perhaps different with x = y, I’m checking :slight_smile:

(3) Strings are an exception, it seems! Say

struct TwoStrings
    a::String
    b::String
end

Try then s = "a sufficiently long piece of string". Then the size of TwoStrings(s,s) is less than twice the size of s (!). Also, TwoStrings(s, "") takes more space than TwoStrings(s,s) (!!!).

What’s going on? I appreciate that strings are perhaps optimized in various ways, which is only natural, but still, it would be an exception to the rule “calling a constructor makes copies of the fields of immutable type”, and that’s confusing.

Thanks for your help!

Pierre

1 Like

A copy of a value has semantic meaning: mutating a copy does not mutate the original object. “Allocating memory”, on the other hand, does not really have semantic meaning. Pretty much all that can be said is that Julia will do its best to give you the best performance.

They could point to the same place in memory. Or it might be better to pass a “copy” in a register. Or the use of the variable might be eliminated entirely. Again, none of this affects how your program behaves, so these are just optimizations. And Julia has a good optimizer :slight_smile:

5 Likes

Why is that surprising? In TwoStrings(s, "") you have two distinct objects, so you need space for both of them. In TwoStrings(s, s) you only need space for s, which is shared between the fields. The former is Base.summarysize("") bigger than the latter.
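You can see the sharing directly with ===, which compares Strings by identity (this assumes the TwoStrings definition from above):

```julia
julia> s = "a sufficiently long piece of string";

julia> t = TwoStrings(s, s);

julia> t.a === t.b  # both fields hold a reference to the same String
true

julia> Base.summarysize(t) < 2 * Base.summarysize(s)  # s is only counted once
true
```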

2 Likes

Well, it is surprising in that it shows that what I say in (2) is false. More generally, I understand what you mean in both your answers (thanks for these!), but it is frustrating not to be in control… I would be more comfortable with a rule that always applies (“calling a (default) constructor creates copies of the fields of immutable type” would be such a rule, but apparently it isn’t one).

I have an example ready where it actually matters, perhaps I should go into that? (it’s a bit long…)

Probably you should try to show a MWE where that matters.

But note that you are not in control of that in any language (except perhaps assembler). What happens with immutable objects is ultimately decided by the compiler, which may, as mentioned, decide that it is optimal for them not to exist at all. For example:

function f(x)
   y = 1
   return x + y
end


julia> @code_llvm f(1)
;  @ REPL[1]:1 within `f'
define i64 @julia_f_309(i64 signext %0) {
top:
;  @ REPL[1]:3 within `f'
; ┌ @ int.jl:87 within `+'
   %1 = add i64 %0, 1
; └
  ret i64 %1
}

Compare with:

julia> function f(x)
          y = 1
          z = y
          return x + z
       end
f (generic function with 1 method)

julia> @code_llvm f(1)
;  @ REPL[4]:1 within `f'
define i64 @julia_f_344(i64 signext %0) {
top:
;  @ REPL[4]:4 within `f'
; ┌ @ int.jl:87 within `+'
   %1 = add i64 %0, 1
; └
  ret i64 %1
}


1 Like

Okay, let’s look at (2) then:

You have to be very precise when you say “the memory allocated” and “copies are made”. This depends on optimizations and on how the struct is used. You must also be clear about what summarysize actually calculates. In addition, some knowledge of the distinction between bitstypes and non-bitstypes (Essentials · The Julia Language) is useful.

For example, inside a function, creating a TwoInts makes no allocation. This is because, already when the function is called, enough space has been allocated on the stack (memory management - What and where are the stack and heap? - Stack Overflow) to hold the TwoInts struct being created. So creating the struct effectively just means writing some values. But maybe even that has been optimized away. Another use case is pushing a TwoInts to a vector: then the vector has to grow by enough to hold the size of the struct (two integers).
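As a rough check (make_pair is just a throwaway helper here; @allocated measures heap allocations, and the function is called once first so that compilation is not counted):

```julia
julia> struct TwoInts
           a::Int64
           b::Int64
       end

julia> make_pair(x) = TwoInts(x, x);

julia> make_pair(1);  # warm up, so compilation is not measured

julia> @allocated make_pair(1)  # the struct lives on the stack: no heap allocation
0
```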

My main point here is that you need to be a bit careful when you talk about and measure these things, because they are quite context dependent, and you need to know what the numbers you get out actually mean and when they are relevant.

6 Likes

Thanks for bearing with me :slight_smile:

Well, here is an example of an implementation of linked lists (as an exercise!), where everything is immutable, in a Pluto notebook. And here is a version with mutable structs, in a Jupyter notebook.

In the immutable version, I think that if L is a linked list and I write NonEmptyList(a, L) (or a --> L) to add an element at the beginning of the list, then L gets copied. I’d like to know whether this is the case, and more importantly, why, and how I could have predicted the behaviour.

If the answer is “it depends”, then well, I’m uncomfortable.

I think the problem is that “gets copied” is not well defined there. You are doing something like this, I think:

julia> add_element(x,el) = (x...,el)
add_element (generic function with 1 method)

julia> add_element((1,),2)
(1, 2)

julia> @allocated add_element((1,),2)
0

Clearly that does not allocate anything, meaning nothing new is created on the heap. But you are asking whether the number 1 that belongs to the input tuple (of one element) is the “same” number 1 that belongs to the output tuple (1, 2). I think the answer is almost certainly not. Still, if your program did nothing with the second element of the tuple, the compiler could perhaps optimize the code and remove the call to add_element completely, in which case it would be “the same 1”. Yet if the code called sum on the resulting tuple, there would be a copy (on the stack or in a register), because of course the sum is efficient when the elements are not spread across memory.

Yet that does not allocate either:

julia> @allocated sum(add_element((1,),2))
0

because that manipulation of variables occurs on the stack (or in registers) anyway.

So, unless your immutable object is very large (in which case the compiler may decide to store it on the heap), everything behaves as if nothing is copied. But whether that is true depends on what one considers a copy. What generally matters is whether or not new things are allocated on the heap.

2 Likes

As an example (using the code in the notebook in Basic question about memory allocation - #7 by pg261):

julia> L = 4 --> -2 --> 6 --> 8

julia> @btime NonEmptyList(3, $L)
  0.013 ns (0 allocations: 0 bytes)

This kind of extremely short timing means that the Julia compiler has optimized everything away and the function does nothing.
So clearly, “it depends” will always be the answer in the presence of an optimizing compiler.

If you want something more interesting you could try out something like:

julia> function test_f(N)
           L = 1 --> 2
           for _ in 1:N
              L = NonEmptyList(rand(1:10), L)
           end
           return L
       end;

julia> for i in 1:6
           @btime test_f($(10^i))
       end
  87.546 ns (10 allocations: 320 bytes)
  770.209 ns (100 allocations: 3.12 KiB)
  8.071 μs (1000 allocations: 31.25 KiB)
  79.746 μs (10000 allocations: 312.50 KiB)
  802.445 μs (100000 allocations: 3.05 MiB)
  8.291 ms (1000000 allocations: 30.52 MiB)

where you can see that the time and the allocations are linear in N (and not quadratic).

7 Likes

@lmiq : your post goes to show that there’s a lot I don’t understand about the heap, the stack and all this!

@kristoffer.carlsson : I’m amazed that the allocations should be linear in N. The way structs are stored in memory must be very different from what I had assumed. Because I read somewhere that in Julia custom types are just as efficient as primitive types, I had assumed that the (immutable) entries of a struct were simple-mindedly stored next to one another (perhaps aligned) in memory. But with that layout, the sort of optimization going on here would be impossible.

The optimization is impressive, but I think I might have liked a little less performance and a little more predictability. I also have a mind to always use mutable structs now, or at least whenever I am in doubt, because then I can tell beforehand what is going to happen.

1 Like

I have been there. I am a former Fortran programmer (never a computational scientist, so all this was new to me when I started using Julia). In Fortran one declares all variables, so we have the impression that everything is mutable. We think that assigning a value to a variable always means the same thing. Ambiguities do not appear to the user because there is no assignment to a variable that was never defined (since all variables are declared). One feels safe there.

However, I was taught, here, that all of that was an illusion :slight_smile: . Actually, in Fortran as here, it is the compiler that decides whether an assignment to an immutable is the mutation of a previous variable in a specific place in memory, or the creation of that value somewhere new. If that “somewhere” is not the heap, the program does not “allocate” anything, and everything behaves as if nothing was copied.

In any case, intending to control that is illusory. In the end, the values will be transferred (copied) from wherever they are to the processor registers, so you will always be copying data somewhere. For performance, what you want to avoid is copying data to and from the heap. And to achieve that, one should prefer immutable values as much as possible.

I have even written something (essentially for myself) to try to understand this, here. Don’t take it too seriously, I am no expert. But I think one ends up getting a feeling for how all of this works and for what we should expect, worry about, and do to optimize code.

(ps: one thing that is always a headache is trying to work with immutable types that change size all the time. That should be avoided, because the constant size of immutable values is precisely the defining property that allows the compiler to optimize things.)

7 Likes

This is for types that are isbitstype (which is why I linked to that section earlier, Essentials · The Julia Language) but not for general immutable types.
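Concretely, with the two structs from earlier in the thread:

```julia
julia> struct TwoInts
           a::Int64
           b::Int64
       end

julia> struct TwoStrings
           a::String
           b::String
       end

julia> isbitstype(TwoInts)    # stored inline, e.g. in a Vector{TwoInts}
true

julia> isbitstype(TwoStrings) # fields are references to heap-allocated Strings
false
```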

2 Likes

Thanks for the link to your notes! They are very useful!

1 Like