[ANN] CliffordNumbers.jl: fast, static geometric algebra with multivectors as Number subtypes

CliffordNumbers.jl is a new package for working with Clifford algebras (or geometric algebras) and their elements.

Design philosophy

The AbstractCliffordNumber{Q,T<:Union{Real,Complex}} type subtypes Number, just as Complex and Quaternion from Quaternions.jl do. Just as we consider the complex numbers and quaternions to be extensions of the real numbers, I have taken the stance that Clifford algebras also comprise extensions of the real or complex numbers.

The algebra is specified by a type parameter Q, which can be anything that behaves like a CliffordNumbers.Metrics.AbstractSignature. You can use one of the signature types that the CliffordNumbers.Metrics submodule provides (such as VGA, PGA, CGA, or the generic Metrics.Signature), or define your own subtype of Metrics.AbstractSignature.
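
For instance (a quick sketch: VGA(3) and PGA(2) appear later in this post, while the CGA call is my assumption that its constructor follows the same pattern):

VGA(3)   # "vanilla" geometric algebra of ℝ³: three positive-squaring basis vectors
PGA(2)   # projective GA of the plane: two positive-squaring dimensions plus one degenerate (zero-squaring) one
CGA(3)   # conformal GA of ℝ³: adds one extra positive-squaring and one negative-squaring dimension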

The concrete subtypes provided by this package are:

  • CliffordNumber{Q,T,L}, which explicitly includes the coefficients of basis blades of all grades.
  • CliffordNumbers.Z2CliffordNumber{P,Q,T,L}, which represents multivectors with the only nonzero basis blades being even (if P === false) or odd (if P === true), referencing the Z₂ grading of the algebra.
    • EvenCliffordNumber{Q,T,L} = CliffordNumbers.Z2CliffordNumber{false,Q,T,L}
    • OddCliffordNumber{Q,T,L} = CliffordNumbers.Z2CliffordNumber{true,Q,T,L}
  • KVector{K,Q,T,L}, which includes basis blades of grade K only.
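
As a rough sketch of how these are constructed (the CliffordNumber and KVector calls mirror examples shown elsewhere in this thread; the EvenCliffordNumber call and the coefficient-ordering comments are my assumptions):

CliffordNumber{VGA(3)}(1, 2, 3, 4, 5, 6, 7, 8)  # coefficients for all 8 basis blades of the 3D algebra
KVector{2,VGA(3)}(1, 2, 3)                      # a bivector: coefficients of e₁e₂, e₁e₃, e₂e₃
EvenCliffordNumber{VGA(3)}(1, 2, 3, 4)          # scalar plus bivector part (4 coefficients)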

These types have promotion rules defined between themselves and other Real and Complex types, allowing for seamless interoperability for all operations they share, such as addition, subtraction, and multiplication.
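
For example (a sketch; the result types in the comments are my expectation based on the promotion rules described above):

x = KVector{1,VGA(3)}(1, 2, 3)        # Int coefficients
y = KVector{1,VGA(3)}(0.5, 0.5, 0.5)  # Float64 coefficients
x + y   # promotes the scalar type, giving a KVector{1,VGA(3),Float64}
2 * x   # scalar multiplication preserves the grade
x + 1   # adding a Real widens to a type that can hold both grades 0 and 1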

AbstractCliffordNumber instances behave as scalars for purposes of broadcasting, just like other Number subtypes. For this reason, eltype of an AbstractCliffordNumber{Q,T} is the multivector type itself rather than T, so we provide the function scalar_type to extract T, scalar_promote to promote all arguments to a shared scalar type without converting the entire representations, and scalar_convert to convert only the scalar type of an AbstractCliffordNumber.
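
A sketch of these helpers (assuming scalar_promote accepts any number of arguments and scalar_convert takes the target scalar type first, as the description above suggests):

a = KVector{1,VGA(3)}(1, 2, 3)
scalar_type(a)              # Int64, whereas eltype(a) is the full KVector type
scalar_convert(Float32, a)  # the same multivector with Float32 coefficients
scalar_promote(a, 1.0)      # both arguments brought to a common Float64 scalar type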

However, AbstractCliffordNumber{Q} instances can be indexed using BitIndex{Q} objects, tuples thereof, or BitIndices{Q,C<:AbstractCliffordNumber{Q}} objects, which represent all BitIndex{Q} objects associated with C. Many common operations, such as reversal and conversion, are implemented in terms of indexing operations, either by using a lazily evaluated function associated with BitIndices{Q,C} (described by TransformedBitIndices{Q,C}) or by using the BitIndices object of an AbstractCliffordNumber type different from that of the object being indexed.

Dependencies

This package has no dependencies besides Julia Base. The products of the geometric algebra are implemented in a representation-free manner that does not require matrix multiplication. I am not opposed to taking dependencies on other packages or spinning off functionality (such as the Hamming weight tools used for blade indexing) into different packages; it just hasn’t been necessary yet.

Performance

The whole goal of this package is to provide the maximum possible performance for geometric products and the other products found in a geometric algebra. This package leverages a generated function, CliffordNumbers.mul(x::AbstractCliffordNumber{Q,T}, y::AbstractCliffordNumber{Q,T}, F::CliffordNumbers.GradeFilter), which generates extremely fast multiplication kernels for any algebra represented by Q. The CliffordNumbers.GradeFilter object determines which multiplication operations need to be filtered out: for instance, the wedge product drops all products of blades that share a 1-blade factor, so CliffordNumbers.GradeFilter{:∧}() conveys this in a way that can be seamlessly incorporated into the final expression output by CliffordNumbers.mul.
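
As a small illustration of the grade filtering, using the * and ∧ operators that appear in the benchmarks below:

u = KVector{1,VGA(3)}(1, 2, 3)
u * u   # geometric product: the scalar 14 (the bivector cross terms cancel by antisymmetry)
u ∧ u   # wedge product: products of blades sharing a 1-blade factor are filtered out, leaving the zero bivector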

For reference, here is the BenchmarkTools.jl output of the geometric product of two CliffordNumber{VGA(3),Float32,8} instances on a Surface Pro 7 (Core i7-1065G7):

julia> a = CliffordNumber{VGA(3),Float32}(1, 2, 3, 4, 5, 6, 7, 8)
8-element CliffordNumber{VGA(3), Float32}:
1.0 + 2.0e₁ + 3.0e₂ + 5.0e₃ + 4.0e₁e₂ + 6.0e₁e₃ + 7.0e₂e₃ + 8.0e₁e₂e₃

julia> @benchmark $a * $a
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  4.907 ns … 97.845 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     5.061 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   5.665 ns ±  4.068 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  █▇▃      ▁                                                 ▁
  ███▅▃▁▃▁▁█▆▃▅▆▄▃▄▃▅▄▅▆▅▁▄▅▁▄▃▁▃▁▃▁▁▃▁▁▁▃▄▃▄▁▁▃▁▄▄▆▁▃▃▃▅▁▁▄ █
  4.91 ns      Histogram: log(frequency) by time     24.2 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

@code_native shows that CliffordNumbers.mul is not inlined in the definition of *:

julia> @code_native debuginfo=:none a*a
        .text
        .file   "*"
        .globl  "julia_*_2313"                  # -- Begin function julia_*_2313
        .p2align        4, 0x90
        .type   "julia_*_2313",@function
"julia_*_2313":                         # @"julia_*_2313"
# %bb.0:                                # %top
        push    rbp
        mov     rbp, rsp
        push    rbx
        sub     rsp, 40
        mov     rbx, rdi
        movabs  rax, offset j_mul_2315
        lea     rdi, [rbp - 40]
        call    rax
        vmovups ymm0, ymmword ptr [rbp - 40]
        vmovups ymmword ptr [rbx], ymm0
        mov     rax, rbx
        add     rsp, 40
        pop     rbx
        pop     rbp
        vzeroupper
        ret
.Lfunc_end0:
        .size   "julia_*_2313", .Lfunc_end0-"julia_*_2313"
                                        # -- End function
        .section        ".note.GNU-stack","",@progbits

But it does show that CliffordNumbers.mul consists of vectorized operations:

julia> @code_native debuginfo=:none CliffordNumbers.mul(a, a, CliffordNumbers.GradeFilter{:*}())
        .text
        .file   "mul"
        .section        .rodata.cst32,"aM",@progbits,32
        .p2align        5                               # -- Begin function julia_mul_2408
.LCPI0_0:
        .long   5                               # 0x5
        .long   4                               # 0x4
        .long   7                               # 0x7
        .long   6                               # 0x6
        .long   1                               # 0x1
        .long   0                               # 0x0
        .long   3                               # 0x3
        .long   2                               # 0x2
.LCPI0_1:
        .long   7                               # 0x7
        .long   6                               # 0x6
        .long   5                               # 0x5
        .long   4                               # 0x4
        .long   3                               # 0x3
        .long   2                               # 0x2
        .long   1                               # 0x1
        .long   0                               # 0x0
        .text
        .globl  julia_mul_2408
        .p2align        4, 0x90
        .type   julia_mul_2408,@function
julia_mul_2408:                         # @julia_mul_2408
# %bb.0:                                # %top
        push    rbp
        mov     rbp, rsp
        mov     rax, rdi
        vmovups ymm0, ymmword ptr [rdx]
        vxorps  xmm1, xmm1, xmm1
        vfmadd231ps     ymm1, ymm0, dword ptr [rsi]{1to8} # ymm1 = (ymm0 * mem) + ymm1
        vpermilps       ymm2, ymm0, 177         # ymm2 = ymm0[1,0,3,2,5,4,7,6]
        vmulps  ymm2, ymm2, dword ptr [rsi + 4]{1to8}
        vaddps  ymm1, ymm2, ymm1
        vpermilpd       ymm2, ymm0, 5           # ymm2 = ymm0[1,0,3,2]
        vmulps  ymm2, ymm2, dword ptr [rsi + 8]{1to8}
        vaddps  ymm3, ymm1, ymm2
        vsubps  ymm1, ymm1, ymm2
        vpermilps       ymm2, ymm0, 27          # ymm2 = ymm0[3,2,1,0,7,6,5,4]
        vmulps  ymm2, ymm2, dword ptr [rsi + 12]{1to8}
        vsubps  ymm3, ymm3, ymm2
        vaddps  ymm1, ymm1, ymm2
        vblendps        ymm1, ymm3, ymm1, 170           # ymm1 = ymm3[0],ymm1[1],ymm3[2],ymm1[3],ymm3[4],ymm1[5],ymm3[6],ymm1[7]
        vpermpd ymm2, ymm0, 78                  # ymm2 = ymm0[2,3,0,1]
        vmulps  ymm2, ymm2, dword ptr [rsi + 16]{1to8}
        vaddps  ymm3, ymm1, ymm2
        vmulps  ymm4, ymm0, dword ptr [rsi + 20]{1to8}
        vsubps  ymm1, ymm1, ymm2
        movabs  rcx, offset .LCPI0_0
        vmovaps ymm2, ymmword ptr [rcx]
        vpermps ymm2, ymm2, ymm4
        vsubps  ymm3, ymm3, ymm2
        vaddps  ymm1, ymm1, ymm2
        vblendps        ymm1, ymm3, ymm1, 102           # ymm1 = ymm3[0],ymm1[1,2],ymm3[3,4],ymm1[5,6],ymm3[7]
        vpermpd ymm2, ymm0, 27                  # ymm2 = ymm0[3,2,1,0]
        vmulps  ymm2, ymm2, dword ptr [rsi + 24]{1to8}
        vsubps  ymm3, ymm1, ymm2
        vaddps  ymm1, ymm1, ymm2
        vmulps  ymm0, ymm0, dword ptr [rsi + 28]{1to8}
        movabs  rcx, offset .LCPI0_1
        vmovaps ymm2, ymmword ptr [rcx]
        vpermps ymm0, ymm2, ymm0
        vsubps  ymm2, ymm3, ymm0
        vaddps  ymm0, ymm1, ymm0
        vblendps        ymm0, ymm2, ymm0, 204           # ymm0 = ymm2[0,1],ymm0[2,3],ymm2[4,5],ymm0[6,7]
        vmovups ymmword ptr [rdi], ymm0
        pop     rbp
        vzeroupper
        ret
.Lfunc_end0:
        .size   julia_mul_2408, .Lfunc_end0-julia_mul_2408
                                        # -- End function
        .section        ".note.GNU-stack","",@progbits

Grade filtering does increase runtime slightly for CliffordNumber, but it’s not terrible:

julia> @benchmark $a ∧ $a
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  6.706 ns … 96.302 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     7.232 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   7.920 ns ±  4.646 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  █▄                                                          
  ██▄▂▂▁▁▁▁▂▂▂▂▂▁▂▂▂▂▁▂▂▂▂▂▂▂▂▂▁▁▁▂▂▂▁▁▁▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▂▂▂▂ ▂
  6.71 ns        Histogram: frequency by time        41.9 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

The best part is that all of the types provided by this package are of fixed size and can be stored inline in another struct or array. This is part of my motivation for writing this package: I’d like to work with arrays of multivectors in the most performant manner possible.
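
For example, a plain Vector of multivectors stores its elements inline (a sketch; the isbits claim is my expectation given the fixed-size representation):

vs = [KVector{1,VGA(3)}(rand(), rand(), rand()) for _ in 1:1000]
eltype(vs)              # KVector{1, VGA(3), Float64, 3}
isbitstype(eltype(vs))  # true, so the coefficients are stored contiguously with no boxing or pointers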

Future goals

There are a few details I want to work out before registering this package, though that will be happening soon™️. Primarily, I want to take care of as many correctness issues and documentation gaps as I can and increase test coverage to handle more cases (especially with large, complicated algebras). Also, some functions, such as dual and undual, either do not work or do not provide correct results at the time of writing (2024-05-09). I’ll update this section once the package is registered.

My long-term goal with this package is to build a fast implementation of the Clifford-Fourier transform: the generalization of the Fourier transforms to all Clifford algebras. This is still a long way away, but I’d like to leverage it for multidimensional problems in wave mechanics and quantum mechanics.

Outstanding issues

If you want to contribute to this library, here are some ideas for where to start that would significantly improve this package’s utility.

  • Numerical stability: I will admit, this is something I don’t know very much about, but I do want to keep a close eye on the various operations provided by the package and how errors propagate.
  • Interactive usability: This is a little difficult with any data type that isn’t KVector. This is because of the way coefficients are ordered in the data types: the blade associated with a Tuple index is determined by interpreting the index (minus 1) as a binary vector describing which basis 1-vectors construct that basis blade. This means that you have to explicitly specify all coefficients (including zero coefficients) when calling a constructor; see the sketch after this list. Perhaps this is where macros can be used to provide nicer syntax.
  • Inverses and division: I don’t have any way of handling inverses for objects that aren’t versors. I’d like to have a working / operator for cases not involving scalars.
  • Performance symmetry of CliffordNumbers.mul: The way I’ve defined the multiplication kernel may create significant performance differences depending on the order of multiplication if the arguments are of different lengths.
  • Faster and more accurate sandwich products: Perhaps simply implementing them as a' * x * a is good enough, but since this is an extremely common pattern, it might be good to optimize this particular operation further.
  • Operations in a non-orthonormal basis: The package assumes an orthonormal basis, but for various reasons I’d like to make this possible.
  • Interoperability with StaticArrays and LinearAlgebra: CliffordNumbers.jl provides CliffordNumbers.similar_type, which constructs types, and CliffordNumbers.dot, which calculates the dot product of multivectors. I don’t want to take dependencies on StaticArrays.jl or LinearAlgebra.jl, but the functions defined in CliffordNumbers.jl will conflict with those of the other two packages, even though the semantics are compatible.
  • Products of AbstractCliffordNumber{Q,Bool} instances: Some mathematical objects, such as the Fano plane, can be modeled in terms of a Clifford algebra over Z/2Z, all of whose elements fit into CliffordNumber{PGA(2),Bool}. However, CliffordNumbers.mul uses Int8 instances to track signs in some cases, meaning that the Bool elements get promoted to Int8, and there may be negative signs present that should not be there. I don’t think this is reasonable behavior.
  • Exponentiation: I leverage the benefits of Base.sinpi and Base.cospi in the package’s exppi and exptau functions for negative-squaring blades, but for positive-squaring blades, I wonder if it’s possible to also have this extra accuracy.
  • Behavior of AbstractCliffordNumber{Q,<:Complex} instances with grade automorphisms: I don’t use complex Clifford algebras in my work, so I cannot guarantee any of this behavior is correct, particularly regarding the adjoint (') operator, which does not perform complex conjugation, but the multivector reverse!
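
To illustrate the coefficient ordering mentioned in the interactive-usability item above, here is a sketch based on the CliffordNumber{VGA(3)} example from the benchmark section:

# The i-th coefficient belongs to the blade whose constituent 1-vectors are the set bits of i - 1:
#   i = 1 -> 0b000 -> scalar    i = 5 -> 0b100 -> e₃
#   i = 2 -> 0b001 -> e₁        i = 6 -> 0b101 -> e₁e₃
#   i = 3 -> 0b010 -> e₂        i = 7 -> 0b110 -> e₂e₃
#   i = 4 -> 0b011 -> e₁e₂      i = 8 -> 0b111 -> e₁e₂e₃
CliffordNumber{VGA(3)}(0, 1, 2, 0, 3, 0, 0, 0)  # 1e₁ + 2e₂ + 3e₃; the zero coefficients must be written out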

Aside from internal package issues, I’m interested in how this package interoperates with other numeric representations besides Integer and AbstractFloat subtypes (posits, for instance), how it interoperates with packages that provide units, and how it interoperates with automatic differentiation tools.

I have been writing this code entirely on my own so I haven’t leveraged GitHub’s issue tracking yet, but expect to see these and other issues up in the future.

23 Likes

This looks really cool. I’ve been reading about GA lately and have looked for a good Julia package that implements it well. I also appreciate the lack of additional upstream dependencies since this is something I could see using in my own packages.

Looking over the docs, and without being a GA expert, it isn’t super clear to me how to actually construct Clifford numbers and k-vectors. I naively expected that I could do something like KVector{1}(1, 2, 3) or KVector{1}([1, 2, 3]) to construct the value 1e₁ + 2e₂ + 3e₃, but no obvious variation on this seems to work. The only real example I’m seeing in the docs is here, but only for a CliffordNumber{VGA(3)} and the correspondence of the input arguments to their resultant coefficients is a bit puzzling to me.

1 Like

I’m super excited to see awesome progress on this! The design looks very well thought-out.

@mike.ingold This could be clearer in the docs, but the key is that the type takes a quadratic form parameter:

julia> a = KVector{1, VGA(3)}(1, 2, 3)
3-element KVector{1, VGA(3), Int64}:
1e₁ + 2e₂ + 3e₃

I’m interested in the design to make CliffordNumber <: Number, because of course multivectors are both number-like and array-like (I chose the opposite in my GA package). This leads to funny semantics like length(a) == 3, but length(collect(a)) == 1, and you can get self-contradicting errors like “BoundsError: attempt to access 3-element KVector at index [2]”. I would suggest making CliffordNumbers completely scalar-like, even though that means the length is always one, and providing a clear way to obtain the array-like components (something like components(a::CliffordNumber) = a.data maybe).

Also, is there an operation for grade projection, apart from using a constructor (like KVector{k}(a) for \langle a\rangle_k)?

BTW, I noticed that 1/a::KVector computes a component-wise reciprocal k-vector, not the multivector inverse which one would expect so that a*(1/a) == 1.

7 Likes

Thanks for the feedback! I’ve primarily focused on performance over usability at the moment, but I’m definitely planning on including a KVector{K,Q}(::AbstractArray{T,K}) method since I figure this will be a pretty common workflow for people: this includes turning vectors into KVector{1,Q,T} instances, but perhaps also turning antisymmetric matrices into KVector{2,Q,T} instances, etc.

I think I should also include some fallback methods for cases where valid types are being used as constructors but for some reason or another they shouldn’t be used to construct objects. Either way, the documentation definitely needs some work as well.

Thanks for pointing this out. I think my initial goal was to treat AbstractCliffordNumber like String in that it’s treated as a scalar for the purposes of broadcasting, but I honestly haven’t thought about it in a while.

The BitIndex{Q} type serves as the indexing type for AbstractCliffordNumber instead of ordinary numeric indexing, and the static representations can be turned into Tuple instances with Tuple(::AbstractCliffordNumber).

I’ve included a blade_count function which may serve this purpose better than Base.length could. In general, I’ve had to double up on some common functions to get the desired behavior: for instance, I use scalar_type instead of Base.eltype (because Base.eltype(::Type{T}) where T<:Number = T).

The intent is to use constructors for grade projection so that you can use the smallest types available, but I can see some utility in retaining the same type when performing a grade projection (for instance, you may want to project an arbitrary set of grades of a CliffordNumber, which would generate another CliffordNumber).

I should also add: conversion does check if the result is fully representable by the target type, and will throw an error if this is not the case.

Great catch! I’ll fix this up (and encourage you to file more issues on the repo if you see any other problems - I’ll be doing that myself as I come across them)

1 Like

Oh… Julia is getting more and more suitable for computer graphics and game devs. Perhaps one day we’ll have a breakthrough?

1 Like

Any feeling on how soon we might expect to see the package registered in General?

I’ve got a particular application in mind where GA provides a really elegant solution; I can generalize it to simply abs(foldl(wedge, vectors)) instead of having to conditionally hard-code the cross product, vector triple product, etc.
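
Roughly, the idea looks like this, using the ∧ operator shown earlier in the thread (a sketch; I haven’t verified that abs is defined for the resulting KVector, so treat it as pseudocode):

# Volume of the parallelepiped spanned by three vectors as |v₁ ∧ v₂ ∧ v₃|
vectors = (KVector{1,VGA(3)}(1.0, 0.0, 0.0),
           KVector{1,VGA(3)}(0.0, 2.0, 0.0),
           KVector{1,VGA(3)}(0.0, 0.0, 3.0))
abs(foldl(∧, vectors))  # expected: 6.0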

1 Like

Sometime this week! I’ll bump the thread and ping you once it’s available.

2 Likes

This looks really awesome! gonna bookmark this for future work. Thank you!

1 Like

I’ve seen you posting about your development on the Bivector discord, it’s fantastic to see this announcement!

2 Likes

I just submitted the package for registration, so expect it to be available in the General registry in 3 days if all goes well!

1 Like

Update: the package is now available from the General registry.

6 Likes

Just want to bump this thread again to note that as of the 0.1.4 release, there are some serious changes due to bugs I found (mostly in handling non-positive-definite signatures).

Additionally, there are new features of interest, like support for the left and right complements and the regressive product, support for Base.float and Base.big, and a Base.literal_pow implementation that allows narrower types to be inferred in some cases.

In the future (for the 0.2.0 release), here are some of the breaking changes I anticipate:

  • Dropping Julia 1.8 support so that package extensions can be used without having to handle a significant amount of backwards compatibility.
  • BitIndex will be renamed to BladeIndex for clarity.
  • grade_involution may be renamed to main_involution.

1 Like