`eigen` is slower than `svd`, unless it's a Hermitian matrix. @Oscar_Smith is probably correct for factorisations. For multiplying matrices, even though the multiplication is also O(N^3) while the required memory for the result is only O(N^2), I do think it can make a difference, just because this O(N^3) operation has been so heavily optimised (and has a much smaller prefactor than in the `svd` or `eigen` case).
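A quick way to see these relative costs yourself is to time the different operations on the same matrix. This is just an illustrative sketch (timings are machine- and BLAS-dependent, and for careful numbers you'd want BenchmarkTools.jl rather than `@elapsed`):

```julia
using LinearAlgebra

N = 1000
A = randn(N, N)
B = randn(N, N)
H = Hermitian(A + A')   # Hermitian wrapper: eigen dispatches to the fast symmetric routine

# warm up so compilation time is not measured
eigen(A); svd(A); eigen(H); A * B

t_eigen = @elapsed eigen(A)   # general eigendecomposition
t_svd   = @elapsed svd(A)     # singular value decomposition
t_herm  = @elapsed eigen(H)   # Hermitian eigendecomposition: much cheaper
t_mul   = @elapsed A * B      # also O(N^3), but with a far smaller prefactor

println((eigen = t_eigen, svd = t_svd, hermitian = t_herm, matmul = t_mul))
```

On a typical machine the matrix product is an order of magnitude faster than any of the factorisations, and the Hermitian `eigen` clearly beats the general one.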

It has been my experience that Julia's GC, despite certainly having improved a lot, still has some overhead when many big temporary arrays are involved, in comparison to Matlab or Python. But I would need to do more thorough benchmarking and find a good test case; it's hard to compare this carefully in a way that excludes all other possible causes. It is, however, a fact that in TensorOperations.jl I added a package-wide cache structure specifically for storing intermediate arrays in tensor contractions, and (provided they all fit in memory) this can improve run time by a significant fraction.
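The idea of that cache can be illustrated in miniature with plain `LinearAlgebra`: instead of letting each contraction allocate a fresh result array for the GC to collect later, you preallocate a buffer once and reuse it with an in-place call. (This is not the TensorOperations.jl cache itself, just a minimal sketch of the same buffer-reuse principle.)

```julia
using LinearAlgebra

N = 500
A = randn(N, N)
B = randn(N, N)
C = similar(A)          # preallocate the result buffer once

# warm up so compilation allocations are not counted
A * B
mul!(C, A, B)

# Allocating version: every call creates a fresh N×N temporary.
alloc_bytes = @allocated (A * B)

# In-place version: mul! writes into C, so repeated calls create no large temporaries.
inplace_bytes = @allocated mul!(C, A, B)

println((allocating = alloc_bytes, in_place = inplace_bytes))
```

In a loop that performs many contractions, the allocating version hands the GC a large temporary per iteration, while the in-place version allocates essentially nothing after setup.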