Memory allocation when using permutedims

Hi,

I am doing some operations on large tensors (roughly 2^24 to 2^26 complex elements) and need to use reshape and permutedims frequently. Given the size of these tensors, I am a little worried about memory allocation. My understanding is that reshape still points to the original data, based on julia - How to reshape Arrays quickly - Stack Overflow. However, it seems that permutedims creates a copy, which leads to extra memory allocation. Is there any way to permute dimensions without allocating extra memory?
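For concreteness, here is a small example of the pattern I have in mind (the sizes are just placeholders for my actual tensors):

```julia
A = rand(ComplexF64, 2^8, 2^8, 2^8)   # ~2^24 complex elements

B = reshape(A, 2^12, 2^12)            # no copy: B shares A's memory
@assert pointer(B) == pointer(A)

C = permutedims(A, (3, 1, 2))         # allocates a full new array of the same size
```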

Thanks

Note there is a difference between reshaping and permuting the dimensions: reshaping keeps the same “order” of the data, while permuting the axes does not. Standard arrays in Julia are always “contiguous”, meaning their data is stored in the (column-major) order corresponding to the indices.
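A tiny example of what that means (a throwaway 2×3 matrix, just for illustration):

```julia
A = collect(reshape(1:6, 2, 3))   # 2×3 matrix, column-major data 1,2,3,4,5,6

vec(reshape(A, 3, 2))        # [1, 2, 3, 4, 5, 6]  -- same underlying order
vec(permutedims(A, (2, 1)))  # [1, 3, 5, 2, 4, 6]  -- data had to be reordered
```

That reordering is why permutedims cannot simply reuse the original memory the way reshape does.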

What you are looking for is a lazy version of permutedims. I am not sure whether there is a “canonical” package for this, but there is an implementation in Base called PermutedDimsArray, and I found transmute from TransmuteDims.jl. Just understand that you make a tradeoff here: you save memory by not copying the array, but you will likely lose performance when you keep working with the lazily permuted array, since its data is no longer contiguous in the permuted order.
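For reference, a rough sketch of the two approaches (the shape and permutation are just placeholders, and the TransmuteDims usage is from memory, so check its docs):

```julia
A = rand(ComplexF64, 2^8, 2^8, 2^8)

# Eager: allocates a new contiguous array holding the permuted data.
B = permutedims(A, (3, 1, 2))

# Lazy (Base): a wrapper that reindexes A on the fly, no copy.
P = PermutedDimsArray(A, (3, 1, 2))
size(P) == size(B)   # true
P == B               # true, but P still reads from A's memory

# Lazy (TransmuteDims.jl), assumed usage:
# using TransmuteDims
# P2 = transmute(A, (3, 1, 2))
```

The catch is that iterating P in its own index order strides through A's memory non-contiguously, which is usually slower than working with a contiguous copy.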

Two other ideas:

  • Maybe you can keep track of the permutation elsewhere and use this information in your code
  • There is permutedims!, which writes the permuted data into a preallocated destination array. So if you can afford to allocate a temporary array once and keep it around, you can perform all the permutedims you want while keeping the arrays contiguous; see the sketch below.
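A minimal sketch of that second idea, assuming all dimensions are equal so the same scratch buffer works for any permutation (otherwise the destination must have the permuted size):

```julia
A   = rand(ComplexF64, 2^8, 2^8, 2^8)
tmp = similar(A)                  # scratch buffer, allocated once up front

# Reuse the same buffer for every permutation instead of allocating each time.
permutedims!(tmp, A, (3, 1, 2))   # tmp now holds A with axes (3, 1, 2), contiguously
```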