[ANN] TensorAlgebra.jl: Taking covariance seriously

TL;DR

  • Vector spaces are first class.
  • Tensors are linear maps from a tensor space to the underlying field, t: TS\to K.
  • Dual spaces are first class, with α = Covector(V,[1,2,3]).
  • Vectors are degree-1 tensors whose domain is the dual space, i.e. v: V^*\to K.
  • Covectors are degree-1 tensors whose domain is the vector space, i.e. \alpha: V\to K.
  • Partial evaluation of tensors is supported, resulting in other tensors.
  • No special indexing is required.

The second-to-last point is kind of neat. Given a tensor t\in V\otimes W for vector spaces V and W, we get 3 (really 5) maps for free:

  1. t: V^*\otimes W^*\to K,\quad\alpha\otimes\beta\mapsto t(\alpha\otimes\beta)\in K
  2. t: W^*\to V,\quad\beta\mapsto t(-,\beta)\in V (identifying V\cong V^{**})
  3. t: V^*\to W,\quad\alpha\mapsto t(\alpha,-)\in W

julia> t(α⊗β)
500.0

julia> t(α,β)
500.0

julia> t(-,β)
3-element Tensor{Float64,1,V^*}:
  30.0
  70.0
 110.0

julia> t(-,β) ∈ V
true

julia> t(α,-)
4-element Tensor{Float64,1,W^*}:
 38.0
 44.0
 50.0
 56.0

julia> t(α,-) ∈ W
true

julia> t(α,β) === t(-,β)(α) === t(α,-)(β)
true
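By the way, if you are wondering how t(-,β) can even parse: `-` is just Base's subtraction function, so a package can dispatch on typeof(-) and treat it as a slot marker. Here is a minimal, self-contained sketch (not TensorAlgebra.jl's actual implementation) that reproduces the numbers above:

using LinearAlgebra

struct SketchTensor{T}
    coeffs::Matrix{T}
end

# full evaluation t(α, β) ∈ K
(t::SketchTensor)(α::AbstractVector, β::AbstractVector) = α' * t.coeffs * β

# partial evaluation: dispatch on typeof(-) as a slot marker
(t::SketchTensor)(::typeof(-), β::AbstractVector) = t.coeffs * β    # an element of V (coefficients only)
(t::SketchTensor)(α::AbstractVector, ::typeof(-)) = t.coeffs' * α   # an element of W (coefficients only)

t = SketchTensor(Float64[1 2 3 4; 5 6 7 8; 9 10 11 12])
t([1.0, 2, 3], [1.0, 2, 3, 4])   # 500.0
t(-, [1.0, 2, 3, 4])             # [30.0, 70.0, 110.0]
t([1.0, 2, 3], -)                # [38.0, 44.0, 50.0, 56.0]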

Longer version

Hi everyone :wave:

Many of you will remember the epic “Taking vector transposes seriously #4774”.

That was a heartening discussion that shows the Julia community does take this stuff seriously. As a result, we have a pretty awesome LinearAlgebra standard library.

However, as seriously as we took transposes, I think we can still do a little better. LinearAlgebra is already very good, so there is not a lot of room left in the design space, but some room remains. In my mind, this was highlighted in the constructive (also epic) discussion that took place on the PR to LinearAlgebra:

That discussion resulted in the creation of a new package: TensorCore.jl.

In an attempt to summarize my thoughts, I created the issue:

There were additional constructive discussions there. However, at some point you just need to shut up and write some code. So that is what I did :slight_smile:

The result is TensorAlgebra.jl.

I am watching

with keen interest. The timeline for something like that would be v2.0 at the earliest, so I think that would be an opportune time to possibly make some improvements to LinearAlgebra so that higher-order tensor algorithms (of use in quantum computing, epidemiology, differential geometry, category theory, etc.) can be implemented more naturally.

Here is a quick walkthrough:

julia> V = VectorSpace(:V,Float64)
V

julia> v = Vector(V,[1,2,3])
3-element Tensor{Float64,1,V^*}:
 1.0
 2.0
 3.0

julia> v ∈ V
true

julia> α = Covector(V,[1,2,3])
3-element Tensor{Float64,1,V}:
 1.0
 2.0
 3.0

julia> α ∈ V^*
true

julia> α(v)
14.0

julia> v(α)
14.0

julia> TensorSpace(V,W)
V ⊗ W

julia> V⊗W
V ⊗ W

julia> t = Tensor((V⊗W)^*,[1 2 3 4;5 6 7 8;9 10 11 12])
3×4 Tensor{Float64,2,V^* ⊗ W^*}:
 1.0   2.0   3.0   4.0
 5.0   6.0   7.0   8.0
 9.0  10.0  11.0  12.0

julia> t ∈ V⊗W
true

julia> α
3-element Tensor{Float64,1,V}:
 1.0
 2.0
 3.0

julia> β = Covector(W,[1,2,3,4])
4-element Tensor{Float64,1,W}:
 1.0
 2.0
 3.0
 4.0

julia> α⊗β
3×4 TensorProduct{Float64,2,Tuple{Tensor{Float64,1,V},Tensor{Float64,1,W}}}:
 1.0  2.0  3.0   4.0
 2.0  4.0  6.0   8.0
 3.0  6.0  9.0  12.0

julia> α⊗β ∈ (V⊗W)^*
true

julia> t(α⊗β)
500.0

julia> t(α,β)
500.0

julia> t(-,β)
3-element Tensor{Float64,1,V^*}:
  30.0
  70.0
 110.0

julia> t(α,-)
4-element Tensor{Float64,1,W^*}:
 38.0
 44.0
 50.0
 56.0

julia> t(-,β) ∈ V
true

julia> t(α,-) ∈ W
true

julia> t(α,β) === t(-,β)(α) === t(α,-)(β)
true

julia> t[2,3]
7.0

julia> (α⊗β)[2,3]
6.0

I tried out the package; there are a few comments I'd like to make that are perhaps worth addressing.

  1. The vector space is “independent” of dimension. So for instance,
julia> V = VectorSpace(:V,Float64);
julia> v = Vector(V,[1,2,3]);
julia> s = Vector(V,[1,2,3,4]);
julia> s ∈ V
true
julia> v ∈ V
true
  2. The implementation “gives up” on type inference easily.
    More precisely, let a \otimes b \in V \otimes W, where V, W are \mathbb{k}-vector spaces.
    Then a \otimes b + a \otimes b has type Array{\mathbb{k},2} rather than V \otimes W, and
julia> (a⊗b + a⊗b) ∈ (V ⊗ W)
ERROR: MethodError: no method matching iterate(::TensorSpace{Float64,2,(VectorSpace{Float64,:V}(), VectorSpace{Float64,:W}())})
Closest candidates are:
  iterate(::Core.SimpleVector) at essentials.jl:603
  iterate(::Core.SimpleVector, ::Any) at essentials.jl:603
  iterate(::ExponentialBackOff) at error.jl:253
  ...
Stacktrace:
 [1] in(::Array{Float64,2}, ::TensorSpace{Float64,2,(VectorSpace{Float64,:V}(), VectorSpace{Float64,:W}())}) at ./operators.jl:1041
 [2] top-level scope at REPL[13]:1
  3. Due to (2), there is really only support for rank-1 tensors unless you initialise things yourself.

  4. Similarly to (2), a\otimes b + b\otimes a doesn’t error but instead does its best to return a matrix when \dim(V) = \dim(W); a schematic illustration of this fallback follows below.
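For context, the fallback at work here is the generic AbstractArray arithmetic: a wrapper type that does not define its own + gets the generic method, which returns a plain Array and silently discards the domain information. A schematic illustration with a hypothetical wrapper:

struct Wrapped{T,N} <: AbstractArray{T,N}
    data::Array{T,N}
end
Base.size(w::Wrapped) = size(w.data)
Base.getindex(w::Wrapped, i...) = w.data[i...]

w = Wrapped(rand(2, 2))
typeof(w + w)   # Matrix{Float64}: the generic fallback drops the wrapper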


I like seeing how the Julia community keeps coming up with new ways to represent tensors and tensor operations. It is very relevant to what I’m currently doing in Manifolds.jl, that is, support for tensor fields: https://github.com/JuliaManifolds/Manifolds.jl/pull/202. I’m doing it with a few things in mind:

  1. The basic interface should impose as few restrictions as possible on the types (as in the type system) of vectors, so I’m trying to avoid wrapper types over arrays. So far Manifolds.jl has managed to avoid them almost entirely, so I’m hopeful here.
  2. Manifolds.jl is not trying to be a computer algebra system. Cool notation is cool but performance and interoperability are more important.
  3. Vector spaces should know the field they are constructed over (real/complex). They don’t yet in Manifolds.jl but they almost surely will.
  4. A tensor is not its array of coefficients in a basis. Sometimes we don’t even have to store the coefficients explicitly (e.g. evaluating pushforwards of maps between manifolds on certain tangent vectors); see the sketch below.
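To illustrate point 4, here is a toy sketch (hypothetical code, not Manifolds.jl's API): a pushforward that acts on tangent vectors through a finite-difference Jacobian-vector product, without ever materializing a coefficient array.

struct Pushforward{F}
    f::F                  # a smooth map (here simply ℝⁿ → ℝᵐ)
    p::Vector{Float64}    # base point
end

# apply the differential at p to a tangent vector X; no coefficient matrix is stored
(d::Pushforward)(X::Vector{Float64}; h = 1e-6) = (d.f(d.p .+ h .* X) .- d.f(d.p)) ./ h

df = Pushforward(x -> [x[1]^2, x[1] * x[2]], [1.0, 2.0])
df([1.0, 0.0])   # ≈ [2.0, 2.0], the first column of the Jacobian at p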

By the way, I agree that a vector space should, explicitly or implicitly, “know” its dimension. That’s why dimension-less vector space types in Manifolds.jl are called VectorSpaceType. The dimension is fixed once the vector space type is connected to a manifold, forming a vector bundle. Requirements of Manifolds.jl are a bit special here since I think tensors only come up when working with vector bundles.


This looks like it is based on ideas very similar to my AbstractTensors.jl, DirectSum.jl, and Grassmann.jl packages, where I am pursuing the same directions.

Disagree here. The field type should not be part of the vector space type. In my tensor algebra, the field type is a parameter separate from the vector space parameter. It’s unnecessary to make the field type a part of the vector space; it can be stored separately.

In my TensorAlgebra type system, everything with a graded dimension is a subtype of Manifold{n}.

For example, the tensor product space you defined above can also be constructed in Grassmann by using nested tensor elements. It’s a different formalism achieving similar results.

The whole point of my DirectSum package is to make vector spaces available, although I have long since changed the name from VectorSpace to the more general TensorBundle types instead.


This is still an open point in Manifolds.jl. Would it make more sense to just have complex coefficient and real coefficient bases of vector spaces? I’ll think about it.

In my algebra, a grade-G vector in the SubManifold defined by the space V is Chain{V,G,T}, where T is the field type (likewise in MultiVector{V,T}). The field type T is a separate parameter from V.

I think mine is inspired by Julia's Array{T,N}. In Julia programming it’s not necessarily good to restrict the type; it is often very good to write generic code. Keeping V and T separate helps with staying generic.
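A tiny illustration of the point (hypothetical type, modeled on Chain{V,G,T}): because the scalar type T is independent of the space tag V, the same space works over any field type.

struct MyChain{V,G,T}   # V: space tag, G: grade, T: field/scalar type
    coeffs::Vector{T}
end

a = MyChain{:V,1,Float64}([1.0, 2.0, 3.0])
b = MyChain{:V,1,Rational{Int}}([1//1, 2//1, 3//1])   # same space, different scalar type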


Hi @Syx_Pek :wave:

Two things…

First, a big thank you for cutting your teeth and trying it out :pray:

Second, a big apology for you getting your teeth cut for trying it :pray::sweat_smile:

A tensor algebra package that only handles tensors of rank 1 is not very interesting :sweat_smile:

I was so happy to get the harder / more fun product stuff working that I failed to add the easier / less fun, but no less important, +/- :sweat_smile:

Your items 2., 3., and 4. all come from the same root problem: I didn’t implement +, so it was falling back to + for AbstractArrays :man_facepalming:

I’ve fixed that and added your examples as test cases. It should be fine now.

On your item 1: yeah, that was intentional. Adding a dimension to VectorSpace makes total sense from a maths perspective, but it adds some kludge to the code that I wanted to avoid. Since that is the most common feedback so far, though, I went ahead and changed it. So thanks again :pray::blush:

So now, to construct a vector space, you need to give a dimension, e.g.

julia> V = VectorSpace(:V,Float64,3)
V

julia> Vector(V,[1,2])
ERROR: DimensionMismatch("Tried to create a vector of dimension 2 in the vector space V of dimension 3.")

julia> α⊗β+α⊗β
2×3 Tensor{Float64,2,TensorSpace{Float64,2,(U, V)}}:
 2.0  4.0   6.0
 4.0  8.0  12.0

julia> α⊗β+β⊗α
ERROR: DomainError with [1.0 2.0; 2.0 4.0; 3.0 6.0]:
Domain mismatch: Expected TensorSpace{Float64,2,(U, V)}(). Got TensorSpace{Float64,2,(V, U)}().

julia> α⊗β+α⊗β ∈ U⊗V
false

julia> α⊗β+α⊗β ∈ dual(U⊗V)
true

It might be instructive to have a look at the tests:

https://github.com/EricForgy/TensorAlgebra.jl/blob/master/test/runtests.jl

If there is an operation that isn’t tested there, I probably haven’t implemented it yet, so please let me know here or in an issue :pray::blush:


Allow me to also advertise TensorKit.jl here as yet another alternative, based on common principles (i.e. vector space objects/types etc). It's more tailored towards tensors in quantum many-body physics, as it includes a great deal of support for tensors that are invariant under symmetry actions, but this includes at the very least the ability to deal with covariant and contravariant dimensions/indices.


Hi @juthohaegeman :wave:

TensorKit.jl looks awesome. I’m not sure if you saw this:

https://ericforgy.github.io/TensorAlgebra.jl/dev/#TensorKit.jl

I did a super brief survey of some of the other tensor packages I was aware of and, of those, TensorKit shares the most similarities with TensorAlgebra.jl. If there is anything you like about TensorAlgebra.jl and might be interested in incorporating into TensorKit.jl, I’d be happy to try to help :blush:

Hi @anon67531922, thanks for the credit; I indeed did not see this. I did take a look around at your package and the open issues, but missed the documentation/manual.


Here’s my attempt at the same problem: ANN: Tensars.jl: Tensors as linear mappings of multidimensional arrays

I think we have approached this from opposite sides. You’re looking for the closest thing to a mathematical tensor that will fit into a programming language, while I’m looking for the smallest and simplest extension of Julia that can calculate with tensors. Both approaches are worth pursuing.

I’m currently stuck on the suggestion from @mcabbot about identifying tensors with matrices. That led me down a rabbit hole of annotated arrays, broadcasting, matmul and structured matrix types. I should take some time off from that to do an MVP, and to implement the suggestions from @ChrisRackauckas on what’s required for tensors to work efficiently in Julia.

Good to see who else is working on it!


Hi @thisrod :wave:

Thanks for sharing your package. I don’t know how I missed it (maybe the spelling :blush:). I’ll add it to my brief survey when I have some time.

I’d be happy to try to sort this stuff out (that is the point :slight_smile:). The main issue I have with TensorKit.jl seems to apply to Tensars.jl as well. That is, a tensor

\tau = \alpha\otimes\beta\otimes\gamma

with \alpha\in U^*, \beta\in V^*, \gamma\in W^* (all over a field K) can be thought of as a linear map in multiple inequivalent ways, e.g.

  1. \tau: U\otimes V\otimes W\to K,\quad u\otimes v\otimes w \mapsto \alpha(u)\beta(v)\gamma(w)
  2. \tau: V\otimes W\to U^*,\quad v\otimes w \mapsto \beta(v)\gamma(w)\,\alpha
  3. \tau: U\otimes W\to V^*,\quad u\otimes w \mapsto \alpha(u)\gamma(w)\,\beta
  4. \tau: U\otimes V\to W^*,\quad u\otimes v \mapsto \alpha(u)\beta(v)\,\gamma
  5. \tau: U\to V^*\otimes W^*,\quad u \mapsto \alpha(u)(\beta\otimes\gamma)
  6. \tau: V\to U^*\otimes W^*,\quad v \mapsto \beta(v)(\alpha\otimes\gamma)
  7. \tau: W\to U^*\otimes V^*,\quad w \mapsto \gamma(w)(\alpha\otimes\beta)

It is quite possible I have misunderstood (wouldn’t be the first or last! :sweat_smile:), but I think in both TensorKit.jl and Tensars.jl the above would be treated as 7 different types (since they are different maps) when it is really the same tensor in all cases.

Of the seven maps above (all representing the same tensor), one stands out as being more fundamental than the others, i.e. the first one:

\tau: U\otimes V\otimes W\to K

All the others can be obtained from this one if you introduce partial evaluation, e.g.

1. τ(u,v,w)
2. τ(-,v,w)
3. τ(u,-,w)
4. τ(u,v,-)
5. τ(u,-,-)
6. τ(-,v,-)
7. τ(-,-,w)

I introduced a way to do this in TensorAlgebra.jl, so my suggestion is to define a tensor, once and for all, as a map

\tau: TS\to K

for some tensor space TS and field K. In this way, a vector v\in V is a map

v: V^*\to K

and a covector \alpha\in V^* is a map

\alpha: V\to K

and both are first class, i.e. a covector need not be the transpose / adjoint of some vector. So we essentially define a tensor by its domain (a tensor space) and its codomain (the underlying field), NOT as a map \tau: TS_1\to TS_2 between two tensor spaces.
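To make the degree-3 case concrete, here is a toy analogue of partial evaluation (a hypothetical sketch, not the package's internals), again using typeof(-) as the slot marker:

struct SketchTensor3{T}
    coeffs::Array{T,3}
end

# full evaluation: τ(u, v, w) ∈ K
(t::SketchTensor3)(u::AbstractVector, v::AbstractVector, w::AbstractVector) =
    sum(t.coeffs[i, j, k] * u[i] * v[j] * w[k]
        for i in axes(t.coeffs, 1), j in axes(t.coeffs, 2), k in axes(t.coeffs, 3))

# partial evaluation: τ(u, -, w), a degree-1 tensor (represented here by its coefficients)
(t::SketchTensor3)(u::AbstractVector, ::typeof(-), w::AbstractVector) =
    [sum(t.coeffs[i, j, k] * u[i] * w[k]
         for i in axes(t.coeffs, 1), k in axes(t.coeffs, 3))
     for j in axes(t.coeffs, 2)]

The other maps in the list above are the remaining slot patterns, e.g. τ(-, v, -) would return a degree-2 tensor.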

A big reason to write up TensorAlgebra.jl is to try to identify any possible improvements to Base and/or LinearAlgebra that would make this important tensor stuff more natural in Julia. The troubles you’ve had and the rabbit holes you’ve been down are evidence, in my mind, that Base and LinearAlgebra need to be revisited a bit.

One idea that has come from this effort so far is similar to what is proposed in

I’m still baking my thoughts a little before I share them, but I think something like this :point_up: would help. I suspect you are being bitten by similar issues. I think something like an AbstractArrayWrapper type in Base (maybe not quite like what is proposed in that PR, though) could make this stuff a lot easier.


Tensars does that with matmul. It would be nice to have a syntax for partial contraction that didn’t require unit matrices, but I need to think harder about how to do that.

julia> using Tensars, LinearAlgebra

julia> α = Tensar(rand(3))'
scalar ← 3-vector Tensar{Float64}

julia> β = Tensar(rand(4))'
scalar ← 4-vector Tensar{Float64}

julia> γ = Tensar(rand(5))'
scalar ← 5-vector Tensar{Float64}

julia> T = α⊗β⊗γ
scalar ← 3×4×5 Tensar{Float64}

julia> u = Tensar(rand(3))
3-vector ← scalar Tensar{Float64}

julia> v = Tensar(rand(4))
4-vector ← scalar Tensar{Float64}

julia> w = Tensar(rand(5))
5-vector ← scalar Tensar{Float64}

julia> eye(n) = Tensar(Matrix(I,n,n))
eye (generic function with 1 method)

julia> one = T*(u⊗v⊗w)
0.6573419601057511

julia> three = T*(u⊗eye(4)⊗w)
scalar ← 4-vector Tensar{Float64}

julia> seven = T*(eye(3)⊗eye(4)⊗w)
scalar ← 3×4 Tensar{Float64}

Added for clarity:

julia> u⊗v⊗w
3×4×5 ← scalar Tensar{Float64}

julia> u⊗eye(4)⊗w
3×4×5 ← 4-vector Tensar{Float64}

julia> eye(3)⊗eye(4)⊗w
3×4×5 ← 3×4 Tensar{Float64}

The ‘*’ operator is composition of mappings, analogous to matrix multiplication. In example 7, the composition of a scalar ← 3×4×5 mapping with a 3×4×5 ← 3×4 mapping is the desired scalar ← 3×4 mapping.

The nicer interface is probably just u⊗I⊗w. That will require some bookkeeping to keep track of which slots are uniform scalings.
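A first stab at that bookkeeping might look like this (a hypothetical sketch, not Tensars.jl code): record which factors of the product are uniform scalings, so that contraction can leave those slots open.

using LinearAlgebra

struct MarkedProduct
    factors::Vector{Any}
    open::Vector{Bool}    # true where the factor is a UniformScaling, i.e. an open slot
end
markedproduct(args...) = MarkedProduct(collect(args), [a isa UniformScaling for a in args])

p = markedproduct(rand(3), I, rand(5))
p.open   # Bool[0, 1, 0]: the middle slot stays uncontracted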

I have some ideas about nested parent arrays too, but I’m still playing with them to see how they work.


Cool :+1:

Out of curiosity, what do you think about the notation

τ(u,-,w)

? This is pretty clean to me and pretty much straight out of the classic textbooks. Could Tensars.jl adopt something like that? I’ve never seen that expressed as u\otimes I\otimes w, although I appreciate the temptation. I considered that too :slight_smile:

The partial evaluation notation allows you to do nice things like

v* = g(-,v)

to find the unique dual covector of a vector given a metric tensor. The above is working code.

I understand that Tensars.jl supports scalar ← 3×4×5 Tensar{Float64}, and so does TensorKit.jl, but my point is that you have too many different types representing the same geometric object. I’m suggesting that this should be the only definition of a tensor, and you would never see explicit things like

3×4×5 ← 3×4 Tensar{Float64}

since that should simply be

scalar ← U×V×W Tensar{Float64}

which is unique and unambiguous, but allows for partial evaluation.

I also don’t know if the notation

scalar ← 3×4×5 Tensar{Float64}

is optimal, since you are just specifying the dimension. Specifying the vector spaces would be clearer, I think, e.g.

julia> α⊗β⊗γ
U×V×W → scalar Tensar{Float64}

and

julia> u⊗v⊗w
U^*×V^*×W^* → scalar Tensar{Float64}

and the dimension can be built into the definition of the vector space.

I agree that it is encouraging to see so many varied efforts going on to implement tensors in Julia, and sharing ideas hopefully leads to something better than the sum of the pieces. 1+1=3 :slight_smile:

As I said before, we’re coming at this from different angles, and the ways we’re doing it make sense in different contexts.

How do you get v back from v*? In Tensars, it goes as follows. Note how the vector v and the covector vstar are differently shaped tensars; that’s one reason for tensars to have shapes.

julia> g
scalar ← 5×5 Tensar{Float64}

julia> v
5-vector ← scalar Tensar{Float64}

julia> vstar = g*(eye(5)⊗v)
scalar ← 5-vector Tensar{Float64}

julia> invg = Tensar(rand(5,5),2,0)
5×5 ← scalar Tensar{Float64}

julia> vstarstar = (vstar⊗eye(5))*invg
5-vector ← scalar Tensar{Float64}

Tensars are generalised matrices, and covectors multiply on the left like row vectors.

I’ve used a random matrix as a placeholder, because I haven’t implemented inverses yet. (Thanks for reminding me.)

I think that g(-,v) versus g*(I⊗v) is a matter of taste. One notation looks like normal maths, the other is normal Julia code.

How can a 3×4×5 ← 3×4 Tensar be identified with a scalar ← U×V×W Tensar? One is a rank-5 tensor, with 2 vector slots and 3 covector slots; the other is a rank-3 tensor, with 3 vector slots. Are there covector slots in TensorAlgebra?

In Tensars, there are no vector spaces other than Fⁿ. I think that’s an appropriate way to extend an array-based programming language so it can handle tensors.


:wave:

Very similar.

vstar = g(-,v)
vstarstar = ginv(-,vstar)
vstarstar ≈ v

It doesn’t matter which slot you use because both g and ginv are symmetric.
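In terms of the toy SketchTensor sketch from my announcement above (still hypothetical code, not the package's internals), the round trip looks like this:

g    = SketchTensor(Float64[2 0 0; 0 1 0; 0 0 1])      # a symmetric metric
ginv = SketchTensor(inv(Float64[2 0 0; 0 1 0; 0 0 1]))
v = [1.0, 2.0, 3.0]
vstar = g(-, v)      # lower the index: [2.0, 2.0, 3.0]
ginv(-, vstar) ≈ v   # true: raising recovers v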

One notation looks like normal maths, the other is normal Julia code.

But wouldn’t it be great if normal math and normal code were the same thing? :slight_smile:

The notation

g(-,v)

is perfectly valid code and I think it is perfectly clear what it is doing too :slight_smile:

How can a 3×4×5 ← 3×4 Tensar be identified with a scalar ← U×V×W Tensar?

Honestly, I’m not sure, because the notation 3×4×5 ← 3×4 Tensar is a little opaque to me and I’m not exactly sure what it means (I think I get it now, though) :sweat_smile:

One is a rank-5 tensor, with 2 vector slots and 3 covector slots, the other is a rank-3 tensor, with 3 vector slots.

Ah. Good! This is an important issue, I think. The position of the slots is also important, so “2 vector slots and 3 covector slots” is not enough info. You need to specify where those slots are.

For example:

u^*⊗v⊗w^*
3×3 ← 3 Tensar{Float64}

has two vector slots and one covector slot (if I understand what you mean by “slot”).

u^*⊗v^*⊗w
3×3 ← 3 Tensar{Float64}

also has two vector slots and one covector slot, but is a different tensor (even if u, v and w are all vectors in the same vector space).

In Tensars, there are no vector spaces other than Fⁿ.

This is fine, but I think it would be good to have the ability to distinguish vector spaces even if they have the same dimension. E.g., position and velocity have the same dimension, but I imagine it could be nice to consider them as elements of different vector spaces (though, true, it is strictly not necessary).

Thank you for your comments. This is helpful I think.

OK, what happens when I try g(-,vstar)? If that throws an error, then g and ginv sound like different types of tensor, because ginv(-,vstar) works. In Tensars, g*(I⊗vstar) throws the same DimensionMismatch as rand(25)'*kron(rand(5,5), rand(5)').

The notation is a matter of taste. While g(-,v) is valid code, and makes sense to mathematicians, it could confuse programmers. The normal way to curry a function in Julia is x -> g(x,v). Also, Julia is a functional language, so people expect the subtraction function to be passed as an argument to higher-order functions, as in map(-, rand(7)). When g is applied to -, most Julia programmers will expect the semantics to involve subtraction.
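To spell out the comparison (with a hypothetical bilinear g):

g = (x, y) -> sum(x .* y)      # stand-in for a bilinear form
v = [1.0, 2.0, 3.0]

curried = x -> g(x, v)         # the usual Julia spelling of partial application
curried([4.0, 5.0, 6.0])       # 32.0, the same as g([4.0, 5.0, 6.0], v)

map(-, [1.0, -2.0, 3.0])       # `-` as a first-class function: elementwise negation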

Then you would have even more types of tensor. For a single i×j×k ← p×q Tensar, there are 5!/(2!3!) = 10 different ways to interleave the vector and covector arguments.

In linear algebra, the covector goes on the left of the matrix, and the vector goes on the right: u'*A*v. In Tensars, the covectors go on the left of the tensar, and the vectors go on the right: (u₁'⊗u₂'⊗u₃')*A*(v₁⊗v₂). In physics notation, the up and down indices do have an order.

In TensorAlgebra, how do you raise the first index of g to get a V*×V → F mapping? How do you convert that to a V×V* → F mapping? Can you represent the V → V* mapping induced by the inner product at all? Tensars will use permutedims, but I haven’t implemented it yet.

I’m a physicist, and it took me a while to notice you said “dimension” and not “dimensions”. :slight_smile: The Julian way is to put the dimensions on the scalars, and the dimension on the array:

julia> using Unitful

julia> [1u"m", 2u"m", 3u"m"]
3-element Array{Quantity{Int64,𝐋,Unitful.FreeUnits{(m,),𝐋,nothing}},1}:
 1 m
 2 m
 3 m

julia> [1u"m/s", 2u"m/s", 3u"m/s"]
3-element Array{Quantity{Int64,𝐋 𝐓⁻¹,Unitful.FreeUnits{(m, s⁻¹),𝐋 𝐓⁻¹,nothing}},1}:
 1 m s⁻¹
 2 m s⁻¹
 3 m s⁻¹

And Tensars just work:

julia> Tensar(ans)
3-vector ← scalar Tensar{Quantity{Int64,𝐋 𝐓⁻¹,Unitful.FreeUnits{(m, s⁻¹),𝐋 𝐓⁻¹,nothing}}}

This is generally the Julian way. The scalars are smart, the arrays store them.

@anon67531922, I certainly agree that there are several maps associated with a single tensor, and it is important to be able to use all of those. However, technically, these are not all the “same” tensor. They are isomorphic via isomorphisms that for normal vector spaces are trivial and thus typically ignored.

Consider a linear map t : K \to U \otimes V \otimes W, i.e. something we would typically call a rank-3 tensor. This is related to a linear map U^\ast \to V \otimes W by combining t with what is known as the (left) evaluation map (a.k.a. co-unit) U^\ast \otimes U \to K. There is similarly a right evaluation map U \otimes U^\ast \to K, and co-evaluation (a.k.a. unit) maps K \to U \otimes U^\ast (as well as K \to U^\ast \otimes U) that enable you to revert these operations. Furthermore, left and right (co)evaluation maps can be related via pivotal structures.
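Written out, the first of these isomorphisms is a composition: with \mathrm{ev}_U : U^\ast \otimes U \to K the left evaluation map, the associated linear map is

\tilde{t} = (\mathrm{ev}_U \otimes \mathrm{id}_{V \otimes W}) \circ (\mathrm{id}_{U^\ast} \otimes t) : U^\ast \to V \otimes W,

and composing with the co-evaluation map in the same way recovers t (the zig-zag identities).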

Furthermore, you may also want to swap the arguments of your linear map, i.e. to go from t : K \to U \otimes V \otimes W to some linear map K \to V \otimes U \otimes W. For example, you already need this to go to V^\ast \to U \otimes W, as you can only apply the left evaluation map to the left-most vector space in the tensor product, so you first need to swap the order. Swapping is the correct name for regular vector spaces, and is again a trivial operation, hardly worth mentioning. But in more general contexts, such a swapping may not be defined, or only a more general notion of braiding may be defined.

In TensorKit.jl, a tensor t \in (U \otimes V \otimes W \leftarrow K) would be represented by an object of type Tensor{S,3}, which is short for TensorMap{S,3,0}. Here, (Abstract)TensorMap{S,N1,N2} represents linear maps with a tensor product of N1 vector spaces in the codomain, and N2 vector spaces in the domain. S is a type parameter that denotes the type of vector space, which I will not go into here. To construct the associated linear maps, one can use the function permute, with two tuple arguments, i.e. if t isa TensorMap{S,3,0}, then permute(t, (2,3), (1,)) would be a TensorMap{S,2,1} that represents the linear map V \otimes W \leftarrow U^\ast.
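For concreteness, a short sketch of what that looks like (assuming TensorKit's ℂ^n complex spaces and the two-tuple permute signature described above):

using TensorKit

U, V, W = ℂ^2, ℂ^3, ℂ^4
t = Tensor(randn, U ⊗ V ⊗ W)     # a TensorMap{_,3,0}, i.e. U ⊗ V ⊗ W ← K
t2 = permute(t, (2,3), (1,))     # a TensorMap{_,2,1}, i.e. V ⊗ W ← U'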

Is TensorKit.jl being pedantic by requiring the user to be explicit about which of the linear maps are needed? Not really, as TensorKit.jl aims to also be able to cope with tensors in more general settings, which are beyond the realm of regular vector spaces, e.g. super vector spaces to describe fermionic quantum many-body states, or linear morphisms that appear in the context of anyonic theories (technically, tensor fusion categories). In fact, even for normal vector spaces where you want to impose some kind of symmetry, such as SU(2) symmetry, these notions already matter.

Nonetheless, for those cases where the above isomorphisms are indeed unambiguous (technically, the braiding needs to be symmetric), most daily use of TensorKit.jl could forget about which vector spaces are in the domain, and which are in the codomain, by using e.g. Einstein summation convention as provided by TensorOperations.jl. For example,
@tensor t[a,b,c] = t1[c,f,e,a]*t2[e,b,f]
could be a valid line of code, irrespective of whether e.g. t2 is a TensorMap{_,3,0}, TensorMap{_,2,1}, TensorMap{_,1,2} or TensorMap{_,0,3}.

Similarly, svd(t, (1,3), (2,)) represents the singular value decomposition of the tensor corresponding to taking its first and third index on one side, and its second index on the other side. It is just short for svd(permute(t, (1,3), (2,))).

So in that sense, TensorKit.jl seems to be somewhat of a mix of TensorAlgebra.jl (in that it is disconnected from AbstractArray and LinearAlgebra, and implements a new type hierarchy, also for vector spaces, and then for tensors) and of Tensars.jl (in that its central type TensorMap{_,N1,N2} represents linear maps from a tensor product of vector spaces to a tensor product of vector spaces, where clearly you can multiply/compose a TensorMap{S,N1,N2} with a TensorMap{S,N2,N3} to obtain a TensorMap{S,N1,N3} using the regular multiplication operator *).


My knowledge of tensors comes from studying the tangent/dual spaces of Geometric Algebra and Differential Geometry. All the material I’ve been studying related to tensors has the word “form” appear more times than “covariance”.

Geometrical Methods of Mathematical Physics by Bernard Schutz would be a good example of the material I’ve been using.
There, tensors are defined as linear operators on one-forms and vectors, which is claimed to be the modern (circa 1980) definition, versus the older definition of tensors using covariance and contravariance.

“Basis one-form” makes immediate sense to me, versus “covariant index”, which I need to think about too much (dual space? raised or lowered? basis or coordinate?).

Any thoughts on incorporating forms into the API?

Maybe we need a “Taking coordinate-free operators seriously” thread :slight_smile:


Well, in my experience with AbstractTensors, DirectSum, and Grassmann I have found many different but equivalent representations for tensors.

This is why I recognized early on the need for an AbstractTensors package, which contains definitions not specific to any downstream tensor representation.

At the next layer, there is DirectSum to define the notions of vector space and dual vector space. In the Grassmann package, tensors have both covariant and contravariant representations available.

However, I have recently found that I don’t need such a notation for anything practically useful so far. Instead, I construct dyadic tensors, which can be evaluated using the dyadic tensor contraction product.

I agree that “forms” is the better term, but it’s just a matter of word style and how you wish to express yourself. I have a forms syntax concept in Grassmann, although I have not worked on finalizing it, since I usually use tensor contraction algebra as a syntax instead.