I have some code that allocates a large array, which I store in a named tuple along with other pre-allocated arrays. To save memory, I would like to re-purpose this array `tmp_cpx` to “host” two smaller arrays, since the large array is not in use while the two small ones are.
Using code like this achieves the purpose:
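A minimal sketch of that approach (the buffer name `tmp_cpx` is from the question; the total size is an assumption, inferred from the 256×256 type printed below):

```julia
# Pre-allocated large buffer (stored in a named tuple in the real code).
tmp_cpx = Vector{ComplexF32}(undef, 2 * 65536)

# Re-purpose two disjoint halves of it as 256×256 matrices.
sarr1 = reshape(view(tmp_cpx, 1:65536), 256, 256)
sarr2 = reshape(view(tmp_cpx, 65537:131072), 256, 256)
```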
BUT here comes the problem:
Many routines expect an ordinary `Array` and cannot deal with the resulting type, e.g. `256×256 reshape(view(::Vector{ComplexF32}, 1:65536), 256, 256)`.
If you, for example, apply an FFT plan generated with `plan_fft!`, it just silently fails and does not perform the in-place FFT one would hope for.
Is there a way to wrap a fraction of an array in such a way that `plan_fft!` and other functions can work with it? It sounds like one simply needs to exchange the memory pointer inside the array structure, but I am unsure how to achieve such a hack in Julia.
The major thing one needs to be careful about is making sure that `sarr1` is not garbage collected. You should probably have all the pointer calls within a `GC.@preserve`:
```julia
GC.@preserve sarr1 begin
    sarr1_wrapped = unsafe_wrap(Array, pointer(sarr1), (256, 256))
    # Compute with sarr1_wrapped
end
```
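Expanded into a self-contained sketch (the buffer name and sizes are assumptions; `sarr1` is the sub-range view from the question):

```julia
tmp_cpx = Vector{ComplexF32}(undef, 131072)  # pre-allocated parent buffer
sarr1 = view(tmp_cpx, 1:65536)               # the sub-range to be wrapped

GC.@preserve sarr1 begin
    # Wrap the view's memory in an ordinary Matrix{ComplexF32}; this is the
    # type that plan_fft! (and BLAS, etc.) accept, unlike the reshaped view.
    sarr1_wrapped = unsafe_wrap(Array, pointer(sarr1), (256, 256))
    sarr1_wrapped[1, 1] = 1.0f0 + 0im  # writes through to tmp_cpx
end
```

Note that `sarr1_wrapped` does not root `sarr1`, which is exactly why the `GC.@preserve` is needed.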
In general, this is going to be UB because it violates TBAA (Type-Based Alias Analysis) (see here). Julia assumes that only one object can occupy a given region of memory at a time. Creating a new object that then occupies that same region of memory introduces exactly that aliasing, so it is disallowed.
I don’t think that matters in this case. If the “inner” allocation gets freed by the GC and the memory is reused for a different allocation (even though the memory is still alive through another object, the “outer” vector), you end up with a double-use/use-after-free.
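One way to sidestep that hazard, as a sketch: keep the parent buffer reachable alongside the wrappers, e.g. in the same named tuple the question already uses (the names here are illustrative, not from the original code):

```julia
buf = Vector{ComplexF32}(undef, 131072)

# As long as this tuple is alive, `buf` stays rooted, so the GC cannot free
# the memory the unsafe_wrap'ped matrices point into.
arrs = (parent = buf,
        a = unsafe_wrap(Array, pointer(buf), (256, 256)),
        b = unsafe_wrap(Array, pointer(buf, 65537), (256, 256)))
```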
Of course, it’s hard to say this definitively as a user of the language, because the memory model is still not documented…
With such a significant change in v1.11 to what is, as you mentioned, a not-much-documented aspect of `Array`s, it’s hard to say what is or isn’t safe. It is, however, possible for one array type to encompass both allocations and views; that’s how NumPy arrays work. I’m not really sure that should be emulated entirely, though, since non-contiguous views can’t really be treated the same way. Unlike Python’s slices, Julia’s type system does distinguish `UnitRange`s from `StepRange`s, so that difference in syntax could help.
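That distinction is already visible in how `SubArray` behaves; a small sketch (using the unexported `Base.iscontiguous` helper):

```julia
v = collect(1.0:10.0)

c = view(v, 2:5)    # UnitRange index → contiguous memory
s = view(v, 1:2:9)  # StepRange index → strided, non-contiguous memory

# Only the contiguous view could safely be re-wrapped as a plain Array.
contig = Base.iscontiguous(c)
strided = Base.iscontiguous(s)
```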
From my experience in the SciPy world, I will say it can easily cost too much memory to aggressively make views instead of allocating. It sounds paradoxical, and probably doesn’t apply to this carefully managed example, but a few small views can keep a much larger parent array alive for too long. Setting up points at which to copy data isn’t too bad, though, and there were easy ways to distinguish views from arrays.
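To illustrate that trade-off (a sketch; the sizes are arbitrary):

```julia
big = rand(Float64, 10^6)

small_view = @view big[1:8]  # 8 elements, but keeps all of `big` alive
small_copy = big[1:8]        # independent copy; `big` can be GC'd once unused

# `parent` exposes the kept-alive array behind a view.
keeps_alive = parent(small_view) === big
```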