Need help understanding performance hit of Arrays


Hello! I have been using Julia regularly for some months and I am starting to have a good mental map of how Julia works internally, meaning which operations can be optimized by the compiler and how to write legible, fast code.
However, I need some help regarding non-primitive and mutable types, mostly Array and String, which are the ones I use the most.

Array alone

In my head, SArray objects are fast because their size is known by the compiler and fixed throughout the program, and thus they can be allocated on the stack. However, with an Array you can always pop or append an element. Thus, its size is not fixed, and it is allocated on the heap (?), which makes it slower to use.
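To make the question concrete, here is a minimal sketch using a plain tuple as a stand-in for a fixed-size SArray (both are fixed-size and stack-friendly); the function names are made up for illustration:

```julia
# Sketch: a Vector is heap-allocated; a fixed-size isbits value
# (tuple here, SArray behaves similarly) needs no heap memory.
make_vec() = [1.0, 2.0, 3.0]   # heap-allocated, resizable
make_tup() = (1.0, 2.0, 3.0)   # fixed size, can live on the stack

make_vec(); make_tup()              # warm up, so compilation isn't measured
vec_bytes = @allocated make_vec()   # nonzero: the Vector allocates
tup_bytes = @allocated make_tup()   # typically zero: no heap allocation
```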

Now, say I have to read from a file where the number of entries is unknown, e.g. the result of a simulation. Is it better to preallocate a fixed amount of space and read the file into it, or are append operations just as fast, without the hassle of having too little or too much space?

For example, DelimitedFiles has a mode in which lines are read from a file until the end is reached. Is that as fast as reading a fixed number of lines?
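Something like this hypothetical sketch is what I have in mind (the Float64 parsing and the sizehint! guess are illustrative assumptions, not my actual code):

```julia
# Sketch: reading an unknown number of values, one per line,
# growing the Vector with push! as we go.
function read_values(io)
    out = Float64[]
    sizehint!(out, 1024)   # optional guess, reduces regrowth copies
    for line in eachline(io)
        push!(out, parse(Float64, line))
    end
    return out
end

# Usage with an in-memory "file":
vals = read_values(IOBuffer("1.5\n2.5\n3.5\n"))
```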

Array in a composite type

When a composite type is made up entirely of primitive types, its size is fixed and known beforehand, so, again, it may be placed on the stack. However, if an Array is a field of a composite type (even a non-mutable one), its size will not be known. How will this affect performance?


Thank you in advance for the answers! This is such a nice community :smiley:


Just to point out, variables on the heap are not slow; it’s the act of allocating (and freeing) that is “slow”. So if you are allocating heap memory inside a performance-critical loop, things are going to slow down. And the garbage collector is going to need to run to ensure that you don’t run out of memory, which consumes more of the CPU.

With arrays, if you can pre-allocate them by using sizehint!() or by allocating them at their maximum size, this can help significantly, both because of the reallocations themselves and, probably more, because some reallocations need to copy the data from the old memory location to the new one.
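A sketch of the two approaches (the function names are made up for illustration) — both produce the same result, but the first does a single allocation up front:

```julia
# Preallocate at full size, then fill by index: one allocation.
function fill_prealloc(n)
    v = Vector{Int}(undef, n)
    for i in 1:n
        v[i] = i
    end
    return v
end

# Start empty and grow with push!: the backing buffer is
# occasionally reallocated and copied as the Vector grows.
function fill_push(n)
    v = Int[]
    for i in 1:n
        push!(v, i)
    end
    return v
end
```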

Loading data from a file is kind of funny: if the files are small (for some value of small), then the disk access is probably going to be slower than the array growing. However, if you are loading gigabytes into RAM, then the reallocations are probably going to kill you. When an array grows from 1GB to 2GB, most likely that 1GB of memory will need to be copied to the new location… same when it grows to 4GB… more copying.

So I would say if your arrays are smallish (1MiB? Maybe 2?), then growing them shouldn’t be much overhead, or wasted space if you are using sizehint!(). If the arrays are huge, then allocating once will really help.

Of course the profiler has the final word in all this. Profile the code and see where the slowdowns are.


Uhh, no, in fact, if you are using the Array from Base but it is a two-dimensional Array{T, 2} where {T}, then it already does not allow pop! or append!, because these would break the matrix layout. Also, both pop! and append! fail for SArray/MArray (even one-dimensional, i.e., vectors).
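A quick sketch of this: pop! has a method for Vector but not for Matrix, so the second call below throws a MethodError.

```julia
# pop! works on a one-dimensional Array (Vector)...
v = [1, 2, 3]
pop!(v)                    # v is now [1, 2]

# ...but there is no pop! method for a Matrix (Array{Int, 2}):
m = [1 2; 3 4]
err = try
    pop!(m)
    nothing
catch e
    e
end
```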

Preallocation will always be better if you know how much to preallocate. If you do not, then other means are more practical and probably have about the same performance you would be able to achieve without knowledge of the size.

This does not mean it does not use buffering internally. This should be investigated before jumping to conclusions.

I do not think this is how things work. I believe the size of the struct is always known. The Array field holds a pointer to the memory on the heap, and has a fixed size that does not depend on the size of the Array itself.
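A sketch of what I mean (the struct name is made up for illustration): the struct stores a reference to the Vector, so its own inline size does not change with the Vector's length.

```julia
# An immutable struct with a Vector field: the field is stored as a
# pointer-sized reference, not inline data.
struct Holder
    data::Vector{Float64}
end

small = Holder(zeros(3))
big   = Holder(zeros(3_000_000))

# sizeof reports the inline size of the struct itself,
# not the heap-allocated array data behind it.
same_size = sizeof(small) == sizeof(big)
</imports>
```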


Thank you all for your answers!! I understand it better now :smile: