# Creating a tensor (array) in the tensor product of super vector spaces

I want to create a custom array of arrays (each inner array is called a “block”) indexed by a tuple of 0s and 1s. For example, suppose the array has 4 dimensions; then:

• The arrays are indexed by (0,0,0,0), (0,0,0,1), (0,0,1,0), etc.
• The size (shape) of each block depends on whether the index is 0 or 1: for example, the block at (0,1,1,0) has size (d0, d1, d1, d0), and the block at (1,0,0,1) has size (d1, d0, d0, d1).
• Sometimes I require that if the sum of the index array is odd (e.g. (0,0,0,1), (0,1,1,1), (1,0,0,0)), then the corresponding block must be zero.

In more mathematical jargon, the array is a tensor in the tensor product of 4 copies of a vector space V that decomposes as the direct sum V_0 \oplus V_1, with \dim V_0 = d_0 and \dim V_1 = d_1.

The array can also be 0-dimensional, i.e. reduce to a single number. How can I construct such an array in Julia? I originally implemented it in Python using a dictionary, but since Julia supports more flexible array indexing, there may be a better implementation of such a structure.

How about converting the binary indices to integers and using a regular vector? You can define a function like

index_to_int(ind) = sum(i->ind[i]<<(i-1), eachindex(ind))


and use it to access the elements of the vector (note that I didn’t test the function; it’s just for illustration). You can allocate each element of the vector to the size you want based on whatever rules you have.
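A minimal sketch of this idea, assuming the block sizes d0 and d1 from the question (the helper `blocksize` and the concrete sizes are my own illustration, not from the original post):

```julia
# Sketch: store the blocks in a plain Vector, indexed by the integer
# encoding of the binary index. `blocksize` is a hypothetical helper
# implementing the size rule from the question.
index_to_int(ind) = sum(i -> ind[i] << (i - 1), eachindex(ind))

d0, d1 = 2, 3
blocksize(ind) = Tuple(b == 0 ? d0 : d1 for b in ind)

# allocate one block per 4-bit binary index, sized according to the rule above
blocks = [zeros(blocksize(digits(k, base=2, pad=4))...) for k in 0:15]

# access the block at (0, 1, 1, 0); note the +1 for Julia's 1-based indexing
B = blocks[index_to_int((0, 1, 1, 0)) + 1]
size(B)   # (2, 3, 3, 2)
```

The encoding treats the first index as the least-significant bit, matching `digits(k, base=2, pad=4)`, so allocation and lookup agree on the ordering.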


To start with, there are several packages that support tensors. Depending on your calculation, it might be worth using one of them. See for example TensorOperations.jl or Tensorial.jl (just quick Google search results; I have only used the first one so far).

Now, without knowing the application it is difficult to pick a package. Let’s stick to plain Julia then. Of course, there are several ways to make the code more general.

It would perhaps help if you could explain your concrete goal in more detail, as it is not clear whether you aim at a representation of all vectors in \otimes_{j=1}^4 (V_0 \oplus V_1) or whether you are interested in tensors that factorize in a particular way. I will answer assuming that you want to consider general tensors.

As @tverho pointed out, it might be best to store the actual information in a straightforward vector, that is, we use V_0 \oplus V_1 \approx \mathbb{R}^{d_0 + d_1} (feel free to replace \mathbb{R} with the field of your choice).

To represent a general tensor, you might use

d0, d1 = 3, 4
d = d0 + d1
Vp = cumsum( [1,d0,d1] )  # stores the direct sum partition
n = 4

T = rand(d,d,d,d)  # or zeros(d,d,d,d), or kron(...,...)


Now, to access a block you can do

function tview(T, Vp, idxs...)
    return view(T, (Vp[i+1]:Vp[i+2]-1 for i in idxs)...)
end


This function extracts the block according to the size via

tview(T, Vp, 0, 0, 1, 1)
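As a quick sanity check, here is a self-contained run with the same d0 = 3, d1 = 4 setup as above (the expected shape follows from the partition stored in Vp):

```julia
# Sketch: verify that tview extracts a block of the expected shape.
d0, d1 = 3, 4
Vp = cumsum([1, d0, d1])                       # [1, 4, 8]
T = rand(d0 + d1, d0 + d1, d0 + d1, d0 + d1)

tview(T, Vp, idxs...) = view(T, (Vp[i+1]:Vp[i+2]-1 for i in idxs)...)

B = tview(T, Vp, 0, 0, 1, 1)
size(B)   # (3, 3, 4, 4) — two V_0 axes, then two V_1 axes
```

Because `view` is used, `B` aliases the storage of `T`, so writing into the block updates the full tensor without copying.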


Now, from there, there are of course many ways to move forward. E.g. you could create a custom type and define getindex such that T_custom[0,0,1,1] gives the result you explained, or you could modify tview to perform the parity checks you talked about, etc.
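One possible shape for such a parity check (the name `tview_parity` and the convention of returning a fresh zero block for odd-parity indices are my own assumptions):

```julia
# Sketch: a variant of tview that returns a zero block whenever the index
# parity is odd, matching the constraint that odd-parity blocks vanish.
function tview_parity(T, Vp, idxs...)
    ranges = Tuple(Vp[i+1]:Vp[i+2]-1 for i in idxs)
    # odd total parity: the block is required to be zero
    isodd(sum(idxs)) && return zeros(eltype(T), map(length, ranges)...)
    return view(T, ranges...)
end

d0, d1 = 3, 4
Vp = cumsum([1, d0, d1])
T = rand(d0 + d1, d0 + d1, d0 + d1, d0 + d1)

Z = tview_parity(T, Vp, 0, 0, 0, 1)   # odd parity: all zeros, size (3, 3, 3, 4)
```

Note that this still stores the odd-parity entries of T; to actually save memory, one would instead keep only the even-parity blocks, e.g. in a Dict keyed by the index tuple.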

By the way, a 0-dimensional space contains only the zero element, which is not the same as a single number. In that case, the dimension should either vanish, or you can represent the space as Float64[], i.e. the vector of length 0.

Thanks for the suggestions. The array can actually be an element of a more general vector space V_1 \otimes V_2 \otimes V_3 \otimes \cdots, where each V_i is the Hilbert space of a fermion (a super vector space). Each V_i can be decomposed into V_i^0 \oplus V_i^1, where V_i^0 is the subspace with an even number (0) of fermions, and V_i^1 is the subspace with an odd number (1) of fermions.

The reason I want to store the array in “blocks” is that in many applications only blocks with the sum of indices equal to an even number (e.g. (0,1,1,0), (0,0,0,0)) are nonzero. I want to reduce memory usage. But maybe I can trade this for more efficient operations on the array.

The 0s and 1s in the index are important when transposing such an array (called permute in PyTorch; in mathematical terms, this is a reordering of the tensor product of the V's): a minus sign needs to be applied to some blocks due to the anti-commutativity of fermions. For example, suppose I want to transpose the 2nd and the 3rd axes of such a 4-dimensional array. Then blocks indexed by (_, 1, 1, _) should be multiplied by -1 (in addition to being transposed), while blocks indexed by (_, 0, 0, _), (_, 0, 1, _) and (_, 1, 0, _) are transposed in the usual way.
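The sign rule can be sketched for a Dict-of-blocks representation like the Python one mentioned earlier (the function name `swap23` and the Dict layout are hypothetical, not from the original post):

```julia
# Sketch: fermionic transpose of axes 2 and 3, for blocks stored in a
# Dict keyed by binary index tuples.
function swap23(blocks::Dict)
    out = Dict{NTuple{4,Int}, Array{Float64,4}}()
    for (ind, B) in blocks
        newind = (ind[1], ind[3], ind[2], ind[4])
        # anticommutation sign: -1 only when both swapped indices are odd
        s = (ind[2] == 1 && ind[3] == 1) ? -1 : 1
        out[newind] = s .* permutedims(B, (1, 3, 2, 4))
    end
    return out
end

blocks = Dict((0, 1, 1, 0) => ones(2, 3, 3, 2),
              (0, 0, 1, 1) => ones(2, 2, 3, 3))
out = swap23(blocks)
out[(0, 1, 1, 0)]   # -1 .* ones(2, 3, 3, 2): both swapped indices were 1
```

The (0, 0, 1, 1) block picks up no sign; it is only permuted, landing at the key (0, 1, 0, 1) with shape (2, 3, 2, 3).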

I also need to implement a “reshape” of the array, i.e. regarding the tensor product of some of the V_i's as a single larger vector space, which is itself decomposed as the direct sum of an even-parity part and an odd-parity part. For example, suppose I regard V_1 \otimes V_2 as a whole. The reshaped array is 3-dimensional, living in the super vector space (V_1 \otimes V_2) \otimes V_3 \otimes V_4. The last two axes are unchanged, but the elements along the first axis are arranged in the following order: V_1^0 \otimes V_2^0, V_1^1 \otimes V_2^1 (these two form the even-parity part), then V_1^0 \otimes V_2^1, V_1^1 \otimes V_2^0 (these two form the odd-parity part). (Reshaping is a little tricky; when I look back at my Python implementation based on a dict, it was a mess.)
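The bookkeeping for that sector ordering can be sketched as follows (the concrete dimensions and the names `order`, `sdim` are my own illustration, assuming even sectors come first as described):

```julia
# Sketch: sector sizes and offsets along the fused axis for V_1 ⊗ V_2,
# in the order (0,0), (1,1), (0,1), (1,0): even-parity sectors first.
d0, d1 = 2, 3
order = [(0, 0), (1, 1), (0, 1), (1, 0)]
sdim(p) = p == 0 ? d0 : d1                       # dimension of V_i^p
sizes = [sdim(p1) * sdim(p2) for (p1, p2) in order]
offsets = cumsum([0; sizes[1:end-1]])            # start offset of each sector

# fused even part: d0*d0 + d1*d1 entries; fused odd part: 2*d0*d1 entries
sizes      # [4, 9, 6, 6]
offsets    # [0, 4, 13, 19]
```

These offsets tell you where each of the four parity sectors lands along the fused axis, which is the main piece of information a reshape routine has to track.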

I updated the title to better describe what kind of array I want to create. It is also described in Section III and IV of [1610.07849] Fermionic Matrix Product States and One-Dimensional Topological Phases (arxiv.org).

Maybe there are already some packages dealing with such a structure; e.g. Marco-Di-Tullio/Fermionic.jl looks somewhat related (but has not been updated for a while).

It’s not exactly my field, but with respect to the paper, are you trying to store an fMPS, i.e. a matrix product state? This might be related: MPS and MPO · ITensors.jl.

For a sparse representation of tensors, one could perhaps use something like a higher-order singular value decomposition? See for example lanaperisa/TensorToolbox.jl for such types of decompositions.

Anyway, maybe someone with more domain knowledge can help you better here.

Hi Yue,

I would also like to mention TensorKit.jl (written by Jutho, one of the authors of the paper you linked), which should have precisely what you are looking for, expressed in the same language of super vector spaces. Additionally, MPSKit.jl has an implementation of matrix product state algorithms built on top of it, and might give you what you are looking for.


TensorKit seems to be exactly the thing I want. I will take a look at it. Thanks!