Hello, what is the situation for arrays that are immutable but whose size is known only at run time?
I see there is an old discussion here, and then some “experimental” PR that has been rebased several times (here and here)… but as of today, is there a way I can gain performance if I “promise” that once I create an array I won’t modify it further?
Somewhat related: what about mutable, statically sized arrays, like std::array in C++? They are not as limited in size as StaticArrays.MArrays, which are not particularly great beyond roughly 100 elements.
Having recently started with C++, I quite like how std::array and std::vector compose with the const keyword, to create static/dynamic, mutable/immutable arrays.
But this is playing with fire! UB is nearby (and it is unavoidable: the undefined behavior is the point of declaring an array const, since that is what permits more compiler optimizations).
Current appetite seems to be low for adding even more ways to create very-hard-to-debug UB in order to eke out some performance in some cases.
Otherwise CUDA.jl’s unsafe_free! would be in Base, and would be extensively used for temporaries that are not supposed to escape (I’d be heavily in favor of this one!). And more extensive noalias annotations would also exist.
(Also, we don’t have a UBSan for Julia. That’s OK because UB-caused bugs are relatively rare in Julia; if they became common, we’d need more tooling.)
What exactly is the reason for MArrays being less performant at that size? Is it any different from statically sized, “mutable” std::arrays on the stack? I’ve also read that large ones are avoided because of stack overflows. (Mutable in quotes because variable mutability is semantically different from Julia’s instance mutability, though I imagine another language could compose immutability with default-mutable types, or more likely vice versa, like const does with variables.)
Memory is the new backing for Array, but crucially, the size of a Memory itself is fixed: you can’t change it after its creation. When a Vector was resized, what used to happen under the hood is that a new block of memory would be allocated and the existing data copied over (all happening in the C internals, and handwaving some details of when resizing doesn’t actually lead to a new allocation). Memory basically takes that job over, and when a Vector is now resized, a new Memory object is allocated. All of the resizing logic of Vector now lives in Julia instead of C, allowing (in principle) the compiler to be smarter about the allocation of that Memory (potentially reusing the existing one entirely, making the initial allocation bigger, etc… lots of room for optimizations there).
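To make that concrete, here is a small sketch (requires Julia ≥ 1.11, where Memory backs Array; the `v.ref.mem` field access peeks at an internal implementation detail and is not a stable API):

```julia
# A Memory has a fixed length, set once at construction.
m = Memory{Int}(undef, 4)
m .= 1:4
length(m)            # always 4; there is no resize! or push! for Memory

# A Vector's backing Memory gets swapped out when it must grow.
v = Int[1, 2, 3]
before = v.ref.mem   # internal field: the current backing Memory
sizehint!(v, 10_000) # request capacity well beyond the current backing store
after = v.ref.mem
before === after     # typically false: a fresh, larger Memory was allocated
```

The point is that the “allocate a new block and copy” step is now an ordinary Julia-level allocation of a new Memory, which is what gives the compiler room to optimize it.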
Of course, you’re free to use Memory in your own code for things that don’t need to be able to change size. If you don’t ever push! or similar into your Vector, you should be able to more or less use Memory instead. It’s a low-level-ish building block.
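For example, a fixed-size scratch buffer could look like this (again a sketch, assuming Julia ≥ 1.11):

```julia
# Memory works like a fixed-size Vector: indexing, broadcasting,
# fill!, eachindex, sum, etc. all behave as usual.
buf = Memory{Float64}(undef, 8)
for i in eachindex(buf)
    buf[i] = i^2
end
sum(buf)             # 204.0

# What you give up is resizing: the size is fixed at construction,
# so push!(buf, 1.0) would throw a MethodError.
```

If your code never calls push!, append!, resize!, or friends, swapping Vector for Memory is mostly a drop-in change.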