[ANN] CliffordAlgebras.jl

I’m pleased to announce the first release of CliffordAlgebras.jl, a package for performing calculations in generic Clifford algebras / geometric algebras. Package registration is still pending.

https://github.com/ATell-SoundTheory/CliffordAlgebras.jl

Any contribution or constructive feedback is very welcome.

Thanks,

Andreas

12 Likes

I don’t know much about geometric algebras, but your package seems to feature the same keywords as Grassmann.jl:

https://github.com/chakravala/Grassmann.jl

1 Like

Thank you for your reply. Yes, Grassmann.jl is also a package that implements aspects of Clifford and geometric algebras. However, the approach is quite different and I believe both packages will have separate fields of application.

4 Likes

Geometric algebra newb here. Could you write a few lines about what those differences are and which areas of application you see? I’m very interested in the potential application of geometric algebra in machine learning frameworks.

3 Likes

From my perspective, Grassmann.jl is a very complete package that is designed to use GA in the context of differential geometry. So you have support for manifolds, their differentiable structure in the form of (co)tangent spaces and their bundles, and so on. If you are just interested in geometric algebra, and possibly new to the field, this can be very overwhelming. CliffordAlgebras.jl tries to be a lot more lightweight and accessible by providing just the Clifford algebra and no extras. With CliffordAlgebras.jl you can easily follow the tutorials on bivector.net and work the way you would with most other GA packages for Python, JS, C++, etc. As long as you’re not using GA for differential geometry, it should put fewer obstacles in front of you.
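For example, a first session can be as short as this (a minimal sketch; outputs suppressed here):

julia> using CliffordAlgebras

julia> cl2 = CliffordAlgebra(2);      # the Euclidean plane algebra Cl(2,0,0)

julia> v = cl2.e1 + 2 * cl2.e2;       # a vector in the algebra

julia> v * v;                         # geometric product; a vector squares to a scalar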

That said, CliffordAlgebras.jl is a fresh package, so it’s not been thoroughly battle-tested yet. I’ll try to make sure that any issue reports are resolved quickly.

5 Likes

@atell_soundtheory This looks really nice. There are a few things you’ve done here that stand out from a data-presentation and ergonomics standpoint.

However, if I may make a performance suggestion: it seems like you’re encoding a bit too much data in the types of the objects. For instance,

julia> cl2 = CliffordAlgebra(2)
Cl(2,0,0)
┌───────┬──────────────┬───────┐
│  +1   │  +e1    +e2  │ +e1e2 │
├───────┼──────────────┼───────┤
│  +e1  │  +1    +e1e2 │  +e2  │
│  +e2  │ -e1e2   +1   │  -e1  │
├───────┼──────────────┼───────┤
│ +e1e2 │  -e2    +e1  │  -1   │
└───────┴──────────────┴───────┘


julia> t1 = typeof(cl2.e1)
MultiVector{CliffordAlgebra{2, 0, 0, (:e1, :e2), ((), (1,), (2,), (1, 2)), (((1, 1), (2, 1), (3, 1), (4, 1)), ((2, 1), (1, 1), (4, 1), (3, 1)), ((3, 1), (4, -1), (1, 1), (2, -1)), ((4, 1), (3, -1), (2, 1), (1, -1)))}, Int64, (2,), 1}

julia> t2 = typeof(cl2.e2)
MultiVector{CliffordAlgebra{2, 0, 0, (:e1, :e2), ((), (1,), (2,), (1, 2)), (((1, 1), (2, 1), (3, 1), (4, 1)), ((2, 1), (1, 1), (4, 1), (3, 1)), ((3, 1), (4, -1), (1, 1), (2, -1)), ((4, 1), (3, -1), (2, 1), (1, -1)))}, Int64, (3,), 1}

julia> t1 == t2
false

which means that you’re going to cause a lot of dynamic dispatch in general. Ideally, I think all elements of a specific Clifford algebra should have the same type in order to do (runtime) operations with them efficiently.
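To illustrate the consequence (a hypothetical continuation of the session above, not actual package output):

julia> xs = [cl2.e1, cl2.e2];          # element types differ (t1 != t2 above)

julia> isconcretetype(eltype(xs))      # so the container's eltype is abstract
false

julia> sum(x * x for x in xs);         # and every * is resolved by dynamic dispatch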

Of course, your approach does make it easier to do compile time operations, but generally I find it easier to lift things from runtime to compile time than vice versa.

1 Like

Thank you for your feedback @Mason. I’m aware of the issue, but I don’t think there’s a good alternative. Note that the types of e1 and e2 only differ in a single parameter that encodes the sparse basis used for the coefficient storage. It’s very important to use sparse representations with Clifford algebras, because the number of dimensions grows exponentially with the order of the algebra.

The double conformal algebra in 3 spatial dimensions requires 2^8 dimensions, for example. The number of elementary arithmetic operations for a multiplication in the algebra scales with the square of the dimension. That means that for large algebras, doing sparse operations at runtime is still expensive. I can greatly simplify all the housekeeping and product calculations at compile time, assuming that the sparse basis is stable for expressions at compile time. Guaranteeing this stability, especially in loops where variables are updated, is difficult. But even then, the dynamic dispatch overhead should be smaller than the computational overhead for sufficiently large algebras.
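To put rough numbers on that scaling (back-of-the-envelope arithmetic only, nothing package-specific):

# An algebra with n generators has 2^n basis blades, and a dense
# geometric product pairs every blade with every other one, so ~4^n terms.
for n in (3, 5, 8)
    dim = 2^n
    println("n = $n: $dim blades, dense product ~ $(dim^2) coefficient products")
end
# n = 3: 8 blades, dense product ~ 64 coefficient products
# n = 5: 32 blades, dense product ~ 1024 coefficient products
# n = 8: 256 blades, dense product ~ 65536 coefficient products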

I’m also not finished with the design. I’m planning to use the type system to build a lazy evaluation mechanism for expressions, and I would like the expression optimization to happen at compile time, too. So the types need to hold the sparse basis information, and the expressions will also be encoded by parametric types.

I would be very happy to discuss this design and possible issues with it in depth with a more experienced Julia developer. Especially if you have an alternative that is worth considering, I’d like to hear it.

Thanks,

Andreas

2 Likes

For truly large algebras, it may be the case that the dynamism is okay and inconsequential, but even then I suspect there’s a better solution: storing the multiplication table as a sparse array and filling it in on the fly as needed.

However, this approach is really disastrous for performance with small Clifford algebras, and small Clifford algebras can be just as interesting and important as large ones. In fact, perhaps one good route to representing a large Clifford algebra would be through sparse Kronecker products of small ones.

Setting aside large Clifford algebras for the moment, though: the approach to getting good performance in small Clifford algebras is basically to avoid any dynamic dispatch and fully unroll the multiplication table. You can see a conversation I had here (Encoding group elements and operations - #17 by Mason) where I demonstrated how loop unrolling made this significantly faster than the approach previously used in Grassmann.jl.
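As a toy illustration of what “fully unrolled” means (hand-written for Cl(2,0,0) from the multiplication table printed above; not code from either package):

# Coefficients stored in the blade order (1, e1, e2, e1e2).
struct Cl2{T}
    c::NTuple{4,T}
end

# The geometric product written out as straight-line code: the compiler
# sees no loops, no table lookups and no dispatch, just 16 multiply-adds.
function Base.:*(a::Cl2, b::Cl2)
    (a1, a2, a3, a4) = a.c
    (b1, b2, b3, b4) = b.c
    Cl2((
        a1*b1 + a2*b2 + a3*b3 - a4*b4,   # scalar
        a1*b2 + a2*b1 - a3*b4 + a4*b3,   # e1
        a1*b3 + a3*b1 + a2*b4 - a4*b2,   # e2
        a1*b4 + a4*b1 + a2*b3 - a3*b2,   # e1e2
    ))
end

Cl2((0, 1, 0, 0)) * Cl2((0, 0, 1, 0))   # e1 * e2 == e1e2, i.e. Cl2((0, 0, 0, 1))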

I have an idea for making this scale more gracefully to intermediate sizes though, let me try and whip up a quick prototype.

1 Like

I am offering a method extend(::MultiVector) that creates a non-sparse but type-stable encoding of a multivector. Together with the @generated products, the multiplication table is already fully unrolled in this case, and there should be no dynamic dispatch happening. But yes, you have to call extend manually when appropriate. You can also get back to a sparse representation with prune(::MultiVector).
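For example (outputs suppressed):

julia> using CliffordAlgebras

julia> cl3 = CliffordAlgebra(3);

julia> x = extend(cl3.e1 + 2 * cl3.e2);   # dense but type-stable coefficient storage

julia> prune(x);                          # back to a sparse representation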

1 Like

Here’s what I see for a small multivector that gets extended:

julia> begin
       f(A, B) = A*(A*A + B*B)*B

       using GeometricAlgebras
       s, γ1, γ2, γ3 = basis_vectors(3)

       A = s + γ1 + 2γ2 - γ3
       B = γ1*γ2 - 3s

       using CliffordAlgebras
       cl3 = CliffordAlgebra(3)

       C = extend(cl3.𝟏 + cl3.e1 + 2*cl3.e2 - cl3.e3)
       D = extend(cl3.e1 * cl3.e2 - 3*cl3.𝟏)
       end;

julia> @btime f($A, $B)
  65.386 ns (0 allocations: 0 bytes)
-75.0 + -115.0γ₁ + -55.0γ₂ + 45.0γ₃ + 45.0γ₁γ₂ + -35.0γ₁γ₂γ₃

julia> @btime f($C, $D)
  980.636 ns (31 allocations: 2.81 KiB)
-75-115×e1-55×e2+45×e3+45×e1e2-35×e1e2e3 ∈ Cl(3, 0, 0)

so there’s over an order of magnitude of performance being left on the table here.

I haven’t looked in depth, but given the amount of allocations, I wonder if there’s still some dynamic dispatch here?

2 Likes

Mason is probably right: your users will compare your implementation with others. That is what we do with Julia and other languages all the time ;)

This clearly shouldn’t be happening, and I will look into it. Also, this is the first public version and it will take a while to grow fully mature, so issues are expected. Have you tried running it a second time to see if there’s compiler overhead?

I’m not doubting that in the slightest.

1 Like

Have you tried running it a second time to see if there’s compiler overhead?

Yes, I’ve run it multiple times and the result was the same, though I shouldn’t have to anyway, since @btime takes care of this. I’m only measuring the runtime cost.

This clearly shouldn’t be happening and I will look into it. Also, this is the first public version and will take a while to grow fully mature, so issues are expected

Yes of course, no problem. I hope I didn’t sound critical of your work here, I’m very glad to see someone putting some thought into this! Let me know if there’s anything I can do to help out.

Thanks Mason. I know from our previous chat on Discourse that you are vastly more experienced with Julia than I am. So if you could help me diagnose this issue, which may well stem from my own misconceptions about Julia’s inner mechanisms, I would be grateful. I’ve looked at the source code again, and I don’t understand where the dynamic dispatch is coming from. Unfortunately, I don’t have my Julia environment with me, so I cannot test it right now, despite being curious about what’s going on.

3 Likes

I cannot verify it right now, but I guess extend is not type-stable. Even though it always returns the same result type for a given algebra, the compiler is probably not able to deduce that. So I assume that being more explicit about the result type would restore type stability and fix the issue.
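Once I can run code again, a quick check would be something like:

julia> @code_warntype extend(cl3.e1 + cl3.e2)   # red Any/Union entries would confirm the guess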

No I don’t think that’s the issue since I extend the multivector elements before the benchmark loop.

It’s not the only problem, but it’s definitely a problem, even if it doesn’t have any impact on this particular benchmark. And it appears the same problem arises in the @generated geometric product: the output type parameters are calculated from the input type parameters, but the compiler does not produce type-stable code because it apparently cannot deduce that the output type only depends on the input types. I’m not sure how to fix this. Maybe making the return type a @pure function of the input types would inform the compiler sufficiently?
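Roughly the pattern I have in mind (a toy sketch, not the actual package code):

# Inside a @generated function the argument *types* are available as values,
# so the result type is a fixed function of the input types and the returned
# expression is fully inferable per method instance.
@generated function widen_tuple(x::NTuple{N,T}) where {N,T}
    return :( (x..., zero($T)) )   # always returns an NTuple{N+1,T}
end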

I’ve actually located the issues that hindered optimal performance. There was a type stability problem with extend that I’ve fixed by using a @generated function. Then there was an expensive assertion inside the MultiVector constructor, and finally a problem with generated functions that take a ::Val{x} argument. Strangely, Julia doesn’t eliminate this code when the generated expression is called, but rather allocates an instance of Val(x) on the heap. This makes no sense at all to me, because all instances of Val should be singletons and never be allocated anywhere.
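For reference, Val{x} really is a zero-size singleton, which is why that allocation is so surprising:

julia> isbitstype(Val{3}), sizeof(Val(3))   # nothing here should ever touch the heap
(true, 0)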

The fixes will be pushed as soon as I’ve added some proper testing code. I’ll keep you updated.

julia> using Revise, CliffordAlgebras, BenchmarkTools

julia> begin
              f(A, B) = A*(A*A + B*B)*B

              using CliffordAlgebras
              cl3 = CliffordAlgebra(3)

              C = extend(cl3.𝟏 + cl3.e1 + 2*cl3.e2 - cl3.e3)
              D = extend(cl3.e1 * cl3.e2 - 3*cl3.𝟏)
          end;

julia> @btime f($C,$D)
  0.001 ns (0 allocations: 0 bytes)
-75-115×e1-55×e2+45×e3+45×e1e2-35×e1e2e3 ∈ Cl(3, 0, 0)

julia> @btime f($(cl3.e1),$(cl3.e2))
  0.001 ns (0 allocations: 0 bytes)
+2×e1e2 ∈ Cl(3, 0, 0)

3 Likes

I’ve just pushed and registered CliffordAlgebras 0.1.1 with many fixes and improvements.

3 Likes