Reaching the 4,194,304-dimensional Hilbert space

I have now extended the tensor product space of Grassmann.jl to the 2^22 = 4,194,304 dimensional Hilbert space by constructing an efficient representation. Because a basis blade can carry up to 22 indices, the indexing is alphanumeric (digits 1–9 followed by letters a–m):

julia> using Grassmann

julia> Λ(22)
 [ Info: Declaring thread-safe 4194304×Basis{VectorSpace{22,0,0,0},...}
Grassmann.SparseAlgebra{++++++++++++++++++++++,4194304}(e, ..., e₁₂₃₄₅₆₇₈₉abcdefghijklm)

Thanks to the SparseAlgebra type and its caching, the TensorAlgebra can use a minimal sparse representation in terms of MultiVector spinors. Pre-allocating the sparse representation for 22 dimensions takes only about 45 seconds (at lower dimensions it is nearly instant):

julia> @time Λ(24)
[ Info: Declaring thread-safe 16777216×Basis{VectorSpace{24,0,0,0},...}
ERROR: UndefRefError: access to undefined reference
Stacktrace:
 [1] getindex at ./array.jl:731 [inlined]
 [2] #49 at ./none:0 [inlined]
 [3] iterate at ./generator.jl:47 [inlined]
 [4] collect_to! at ./array.jl:656 [inlined]
 [5] collect_to_with_first! at ./array.jl:643 [inlined]
 [6] collect(::Base.Generator{UnitRange{Int64},getfield(Grassmann, Symbol("##49#50")){Array{Symbol,1}}}) at ./array.jl:624
 [7] Grassmann.SparseAlgebra(::VectorSpace{24,0,0,0x0000000000000000,0}) at /home/flow/.julia/dev/Grassmann/src/Grassmann.jl:150
...

At this point, I am reaching the limits of the Julia language and cannot go any higher.

Is it possible for Julia to reach even larger dimensions than this?


Alright, this issue has now been resolved. You can now construct Basis elements with up to N=61 indices, so the TensorAlgebra supports higher maximum dimensions than Julia natively does.

julia> Λ(61)
[ Info: Extending thread-safe 2305843009213693952×Basis{VectorSpace{61,0,0,0},...}
Grassmann.ExtendedAlgebra{+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++,2305843009213693952}(v, ..., v₁₂₃₄₅₆₇₈₉abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ)

julia> Λ(61).v32a87Ng
-1v₂₃₇₈agN

The 61 indices require full alphanumeric labeling with both lowercase and capital letters. This now allows you to reach up to 2,305,843,009,213,693,952 (2^61) dimensions with Julia using Grassmann… the volume element is

v₁₂₃₄₅₆₇₈₉abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ

However, Julia is only able to allocate a full MultiVector for N≤22, so only sparse calculations are supported at higher dimensions.

So, I still wonder if Julia can be modified to allocate larger arrays.

Nice. But I guess it will be a while before we have computers which can handle a dense rank-61 array: if it is of size 2×2×2×…, then about 10^9 GB, right?
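A quick back-of-the-envelope check (purely illustrative, assuming one byte per entry) confirms that order of magnitude:

julia> 2.0^61 / 2^30  # bytes for 2^61 entries, expressed in GiB
2.147483648e9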

Luckily, no such array is needed, since this uses a sparse multivector representation instead of matrices.

Sure! Maybe you are saying that you’d like to do sparse (or symbolic) things with 100 indices? What sets the limit at 61, once you have circumvented Julia’s 22 (if indeed 22 is the limit for normal Arrays)? Could your code just add on the Greek alphabet?

Indeed, the only limitation is our ability to think of constructs and parametrizations which extend that far. In this case, I found that it is sufficient to work with a 64-bit representation (which is the default), and it turns out that 61 standard keyboard characters fit this nicely (9 digits plus 26 lowercase plus 26 capital letters gives 61 labels). Since 22 is the limit for the largest fully representable MultiVector in Julia, a 64-bit representation still lets you extend to 44 generating Basis elements if you decide you also want a dual vector space:

julia> V = ℝ^22
++++++++++++++++++++++

julia> Λ(V+V')
[ Info: Extending thread-safe 17592186044416×Basis{VectorSpace{44,0,0,17592181850112}*,...}
Grassmann.ExtendedAlgebra{++++++++++++++++++++++----------------------*,17592186044416}(v, ..., v₁₂₃₄₅₆₇₈₉abcdefghijklmw¹²³⁴⁵⁶⁷⁸⁹ABCDEFGHIJKLM)

I see, thanks. So a sorted product of basis vectors can be labelled neatly by an Int64 treated like a BitSet, and to go higher you’d have to unpack 128 into two, or something.
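Something like this minimal sketch is what I picture (hypothetical helper names, purely an illustration, not Grassmann.jl's actual internals): each sorted blade is a bitmask in a UInt64, and the sign of a product comes from counting the transpositions needed to interleave two sorted index sets.

# Hypothetical sketch, not Grassmann.jl internals: a blade is a UInt64 bitmask.
blade(indices...) = foldl((b, i) -> b | (UInt64(1) << (i - 1)), indices; init = UInt64(0))

grade(b::UInt64) = count_ones(b)  # number of generators in the blade

# Exterior product of two sorted blades: zero if they share a generator,
# otherwise the union of the bitmasks with a sign from the inversion count.
function wedge(a::UInt64, b::UInt64)
    a & b == 0 || return nothing          # repeated generator ⇒ product is zero
    swaps, rest = 0, b
    while rest != 0
        i = trailing_zeros(rest)          # bit position of the next index of b
        swaps += count_ones(a >> (i + 1)) # generators of a it must pass over
        rest &= rest - 1                  # clear that bit
    end
    return (iseven(swaps) ? 1 : -1, a | b)
end

# e.g. wedge(blade(2,3,7,8), blade(1,10)) returns a sign and a combined bitmask,
# analogous to how Λ(61).v32a87Ng above resolves to a signed, sorted blade.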

Do you have an application for this, out of curiosity? Many are content to stop at 11 dimensions… but perhaps for quantum computing with 61 bits?

Yes, I do have some applications (currently in lower dimensions, as you said). However, aside from that, I am just interested in general purpose computing, including quantum computing and conformal geometry. If I have 64 bits available, I figured I might as well try to reach its full potential.

Actually, it defaults to the bit depth of the computer, so on a 32-bit machine it uses a 32-bit representation.

At 22 dimensions and lower you get better caching, and at 8 dimensions or fewer there is extra caching.


Regarding large (sparse) vector spaces: linear algebra over sparse vectors indexed by potentially large bitstypes is useful. I have settings where the index can easily eat 128–512 bits (some of the bits have algebraic meaning, some are pointers, and the first 32 bits are a hash of the rest to permit efficient rehashing). This means that the underlying structure wants to be a dictionary.

Do we have packages for dict-backed sparse linear algebra?
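Roughly what I have in mind is a sketch like the following (hypothetical names, just an illustration, not an existing package): a sparse vector backed by a Dict keyed by an arbitrary wide bitstype.

# Sketch of a dict-backed sparse vector over wide bitstype keys (e.g. UInt128).
struct DictVec{K,T}
    data::Dict{K,T}
end
DictVec{K,T}() where {K,T} = DictVec{K,T}(Dict{K,T}())

# Accumulate a coefficient at key k, dropping explicit zeros to stay sparse.
function addto!(v::DictVec{K,T}, k::K, x) where {K,T}
    c = get(v.data, k, zero(T)) + x
    iszero(c) ? delete!(v.data, k) : (v.data[k] = c)
    return v
end

# Linear combination a*u + b*w, again dict-backed.
function axpby(a, u::DictVec{K,T}, b, w::DictVec{K,T}) where {K,T}
    out = DictVec{K,T}()
    for (k, x) in u.data; addto!(out, k, a * x); end
    for (k, x) in w.data; addto!(out, k, b * x); end
    return out
end

v = DictVec{UInt128,Float64}()
addto!(v, UInt128(1) << 100, 2.5)   # a 128-bit index works just as well as an Int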

For 128-bit there is UInt128, which is relatively easy to switch to in Grassmann, since the definitions can be changed in one place:

I just went ahead and added the Greek characters, to be prepared for a future of 61+48=109 indices:

https://github.com/chakravala/Grassmann.jl/blob/master/src/utilities.jl#L14-L21

However, since some of the calculations rely on binomial and other functionality, it is not going to work correctly at 128 bits right away, because extended precision must be used for those calculations.
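For example (just to illustrate the kind of failure, not Grassmann-specific code), the standard binomial already overflows 64-bit integers well below n = 128:

binomial(128, 64)        # throws OverflowError with Int64 arithmetic
binomial(big(128), 64)   # ≈ 2.4 × 10^37, computed exactly with BigInt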

Indeed, although at such high bit depths you would need to be working with very sparse calculations, and would not be able to handle highly entangled states. Theoretically, one should be able to extend it this far too with extended precision calculations, but this would require extra design considerations.

@improbable22, the index printing and the vector space definitions have now been separated out into the packages DirectSum and AbstractTensors, which help with interoperability.

To help provide commonly shared and readable indexing to the user, some print methods are provided:

julia> DirectSum.printindices(stdout,DirectSum.indices(UInt(2^62-1)),"v")
v₁₂₃₄₅₆₇₈₉₀abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ

julia> DirectSum.printindices(stdout,DirectSum.indices(UInt(2^62-1)),"w")
w¹²³⁴⁵⁶⁷⁸⁹⁰ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz

An application of this is in the Grassmann package, where dual indexing is used.

However, there is now also AbstractTensors, which is going to help with interoperability as soon as it and the new DirectSum package get registered (I’m waiting on that).

May I ask what the typical applications of these high dimensional tensors are? This looks seriously impressive, but I’m totally stumped on the applications.

The main applications at very high dimensions that I know of are related to quantum computing, number theory, and cryptography (in those applications it is desirable to be able to compute far beyond the usual range).

However, another way it is useful is for doing research in pure mathematics and abstract algebra. My main goal is to study abstract mathematical constructions and to make their study accessible. For example, by adding extra dimensions using conformal projective geometry, many difficult problems can be linearized into much simpler representations and operations.

Performing very high dimensional projective geometry would typically have been very difficult without a computer, and historically was limited to abstract reasoning. Being able to actually compute with these objects efficiently should open the door to more discoveries.

For example, mathematicians like to study objects such as the E8 lattice and the Leech lattice. These are extremely high dimensional objects in current research related to “sphere packing” problems.

There is currently ongoing research into unimodular lattices from a geometric algebra perspective.


Thanks! And thanks for that reference. I have a little reading to do. Fascinating stuff!