Can an overloaded vector operator be broadcast?

I’m porting some projective geometric algebra (PGA) applications from ganja.js (JavaScript) to Julia+Makie. In the applications I’m porting (and I suspect in many other PGA applications) it would be convenient to be able to broadcast the overloaded vector operators to each column of a matrix (e.g., to rotate multiple polygon vertices, each stored in its own column, by multiplying the whole matrix by a PGA rotor vector).

However, when I try to do that, the operation reverts to an element-by-element operation instead of the column-by-column overloaded operation I’m seeking. Any suggestions on how to tell Julia to perform the overloaded vector operation column by column?

julia> include("ripga2d.jl")
utest (generic function with 4 methods)

julia> X = rand(Float32,8,3)
8×3 Matrix{Float32}:
 0.116556  0.36722    0.517178
 0.659318  0.469833   0.464139
 0.4231    0.0838702  0.530423
 0.213568  0.119756   0.326653
 0.71594   0.199458   0.411434
 0.944089  0.799287   0.350504
 0.935295  0.461366   0.207585
 0.104116  0.171554   0.507419

julia> Y = ones(Float32,8);

julia> X[:,1] * Y # correct geometric product
8-element Vector{Float32}:
 -0.18207103
 -0.7012192
  1.261383
 -0.18207103
  1.3951917
  1.3614659
  1.2613833
  4.1119814

julia> X .* Y # no longer a geometric product
8×3 Matrix{Float32}:
 0.116556  0.36722    0.517178
 0.659318  0.469833   0.464139
 0.4231    0.0838702  0.530423
 0.213568  0.119756   0.326653
 0.71594   0.199458   0.411434
 0.944089  0.799287   0.350504
 0.935295  0.461366   0.207585
 0.104116  0.171554   0.507419

julia>

Just in case it is relevant, I’ve implemented the following overloaded geometric product operator, following the bivector.net C++ reference implementation of PGA. (Although the code may look dreadful, it is a straightforward and fast implementation of the geometric product’s Cayley table, which is trivial to derive.)

# Geometric product for 2D PGA (8-component multivectors): one line per output component of the Cayley table
function Base.:*(a::Vector{Float32},b::Vector{Float32})::Vector{Float32}
	res = Vector{Float32}(undef,length(a))
	res[1]=a[1]*b[1]+a[3]*b[3]+a[4]*b[4]-a[7]*b[7]
	res[2]=a[1]*b[2]+a[2]*b[1]+a[5]*b[3]-a[3]*b[5]+a[4]*b[6]-a[6]*b[4]-a[7]*b[8]-a[8]*b[7]
	res[3]=a[1]*b[3]+a[3]*b[1]-a[4]*b[7]+a[7]*b[4]
	res[4]=a[1]*b[4]+a[4]*b[1]+a[3]*b[7]-a[7]*b[3]
	res[5]=a[1]*b[5]+a[5]*b[1]+a[2]*b[3]-a[3]*b[2]+a[4]*b[8]+a[8]*b[4]+a[6]*b[7]-a[7]*b[6]
	res[6]=a[1]*b[6]+a[6]*b[1]-a[2]*b[4]+a[4]*b[2]+a[3]*b[8]+a[8]*b[3]-a[5]*b[7]+a[7]*b[5]
	res[7]=a[1]*b[7]+a[7]*b[1]+a[3]*b[4]-a[4]*b[3]
	res[8]=a[1]*b[8]+a[8]*b[1]+a[2]*b[7]+a[7]*b[2]+a[3]*b[6]+a[6]*b[3]+a[4]*b[5]+a[5]*b[4]
	return res
end

This is some rather egregious type piracy. I would urge you to use some other symbol to denote the geometric product.

When you write

X .* Y

you are telling the compiler to operate elementwise, leading to a sequence of Float32 * Float32 multiplications (which you have fortunately not overloaded). You can only get out of this by piling more type piracy on top of what you already have, or by calling something else, like maybe

eachcol(X) .* Ref(Y) 

but then you would have to pirate * for AbstractVector anyway, instead of just Vector.

Don’t do that. Come up with a different, available symbol, and define the operations that you need on that.

Alternatively, you can define your own vector types that PGA applies to. That seems reasonable, since your geometric product appears to require vectors of length 8, while Vector can have any length. It seems like the length requirement should be encoded in the type itself, like in StaticArrays.jl.

Subtyping something from StaticArrays.jl, like FieldVector, might also yield a significant performance boost.
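For illustration, here is a minimal sketch of that approach, assuming StaticArrays.jl is installed. The type name PGA2 and the field names c1…c8 are made up for this example; the product coefficients are copied verbatim from the method above.

using StaticArrays

# A fixed-length 2D PGA multivector: the length-8 requirement now lives in the
# type itself, and indexing comes for free from FieldVector.
struct PGA2 <: FieldVector{8,Float32}
    c1::Float32; c2::Float32; c3::Float32; c4::Float32
    c5::Float32; c6::Float32; c7::Float32; c8::Float32
end

# Geometric product on the new type (no piracy: PGA2 is our own type).
# The coefficients are identical to the Vector{Float32} method above.
function Base.:*(a::PGA2, b::PGA2)
    PGA2(
        a[1]*b[1]+a[3]*b[3]+a[4]*b[4]-a[7]*b[7],
        a[1]*b[2]+a[2]*b[1]+a[5]*b[3]-a[3]*b[5]+a[4]*b[6]-a[6]*b[4]-a[7]*b[8]-a[8]*b[7],
        a[1]*b[3]+a[3]*b[1]-a[4]*b[7]+a[7]*b[4],
        a[1]*b[4]+a[4]*b[1]+a[3]*b[7]-a[7]*b[3],
        a[1]*b[5]+a[5]*b[1]+a[2]*b[3]-a[3]*b[2]+a[4]*b[8]+a[8]*b[4]+a[6]*b[7]-a[7]*b[6],
        a[1]*b[6]+a[6]*b[1]-a[2]*b[4]+a[4]*b[2]+a[3]*b[8]+a[8]*b[3]-a[5]*b[7]+a[7]*b[5],
        a[1]*b[7]+a[7]*b[1]+a[3]*b[4]-a[4]*b[3],
        a[1]*b[8]+a[8]*b[1]+a[2]*b[7]+a[7]*b[2]+a[3]*b[6]+a[6]*b[3]+a[4]*b[5]+a[5]*b[4],
    )
end

# Broadcasting then works per PGA vector rather than per element:
# verts = [PGA2(rand(Float32, 8)...) for _ in 1:3]
# rotor = PGA2(ones(Float32, 8)...)
# verts .* Ref(rotor)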

5 Likes

I believe the OP is already aware, but for any other passers-by interested in implementations that follow @DNF’s suggestions, there’s jollywatt/GeometricAlgebra.jl (shameless self-promotion) as well as serenity4/GeometricAlgebra.jl and Grassmann.jl. Also, velexi-research/GeometricAlgebra.jl deserves mention, though it uses a multiplicative rather than additive representation for multivectors.

1 Like

I should have mentioned in my posted question that ripga2d.jl is a reference implementation. The filename ripga2d.jl is an acronym for Reference Implementation of Projective Geometric Algebra for a 2-Dimensional problem space. I’m very grateful to Steven De Keninck for posting PGA reference implementations for C++, C#, Python, and Rust at bivector.net. However, he hasn’t yet posted a PGA reference implementation for Julia, so I wrote ripga2d.jl and ripga3d.jl.

As a reference implementation, ripga2d.jl is intended to be part of an educational environment. The type-piracy issue was discussed a couple of months ago here, and the operator-symbol issue was also discussed a couple of months ago here, but I think those discussions can be summarized, at a high level, as being about setting good constraints. And I think good constraints in an educational environment are very different from good constraints in a production software environment.

In practice, I have been using ripga2d.jl or ripga3d.jl with GLMakie for a couple of months. Although GLMakie brings a lot of dependencies with it, I have not yet noticed a single problem due to type piracy. Concerning the choice of operator symbols, I’m using the same operator symbols that Steven De Keninck uses in his extensive (as far as I know, the world’s largest) collection of geometric algebra example applications. There are already necessary discrepancies between the operator symbols used in those example applications and the operator symbols used in math equations (e.g., in math equations the geometric product symbol is whitespace, which gives uncluttered equations but is a parsing problem in programs). In my opinion, in an educational environment, adding yet another set of operator symbols would be unnecessary mental friction.

In projective geometric algebra, the number of multivector components (i.e., the length of the PGA vector) is 2^(n+1), where n is the dimension of the problem space. So the PGA vector length is 8 for 2D, 16 for 3D, 32 for 4D, and so on. (I like to think of the +1 in that n+1 as being what lets PGA stay “above the fray” of the problem space’s special cases, such as gimbal lock.)

In my opinion (obviously), in an educational environment you should never demonstrate techniques as tremendously dangerous and bad-practice as type piracy, except for the express purpose of teaching about those dangers. You should be on your best behavior and demonstrate good coding practice, or else not use code at all. Your piracy here hasn’t broken any code you’ve run into because vector-vector multiplication isn’t normally defined, but a student will follow the example you’ve set in another context and get into trouble.

You should choose a different operator here and/or wrap your vectors in your own type. I’ll suggest the latter. After all, what you’re using aren’t really vectors. You can’t concatenate PGA vectors into a “PGA matrix” (or so I assume) the way normal vectors concatenate into a normal matrix. Vectors cannot be “multiplied” except for cross products in a certain number of dimensions, and these aren’t cross products, nor does Julia use * for that product anyway.

I’ll propose you make types expressly suited to your setting and operations. PGAVector{T,D} for a D-dimensional PGA vector with eltype T, for example. Then you’re free to overload * on that type for your purposes. Take a look at Quaternions.jl for an example of a relatively well-done package implementing a new domain of math. Better yet, see if one of the above-linked packages already does what you need.
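As a sketch of what that could look like (every name here is hypothetical and illustrative, not taken from any existing package), a thin wrapper can encode the problem-space dimension D in the type and check the 2^(D+1) component count at construction:

# Hypothetical sketch only; names are illustrative.
struct PGAVector{T<:Real,D} <: AbstractVector{T}
    coeffs::Vector{T}
    function PGAVector{T,D}(coeffs::Vector{T}) where {T<:Real,D}
        length(coeffs) == 2^(D + 1) ||
            throw(ArgumentError("a $(D)D PGA multivector needs 2^(D+1) = $(2^(D + 1)) components"))
        new{T,D}(coeffs)
    end
end

# Minimal AbstractVector interface so the wrapper still indexes like a vector.
Base.size(v::PGAVector) = size(v.coeffs)
Base.getindex(v::PGAVector, i::Int) = v.coeffs[i]

# The geometric product from ripga2d.jl could then be attached to the 2D case,
#   Base.:*(a::PGAVector{T,2}, b::PGAVector{T,2}) where {T} = ...
# which is an overload on your own type, not piracy.

# Usage:
# v = PGAVector{Float32,2}(ones(Float32, 8))    # OK
# w = PGAVector{Float32,3}(ones(Float32, 8))    # throws: 3D needs 16 components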

Why can’t PGA vectors be stored in a matrix? The reason I posted this question is that I am porting a polygon intersection test application from JavaScript to Julia. The vertices of the polygons are stored as PGA vectors in the columns of a matrix, and I was wondering whether Julia offers a concise notation for rotating (i.e., multiplying by another PGA vector) all the vertices of a polygon at once.

Concerning your intuition that vectors cannot be multiplied except for cross products, I hope you are seated when you venture into geometric algebra, where there are geometric products, inner products, outer products, and regressive products. (Sandwich products are also common, but they are just a sequence of geometric products.)

As you can tell (and hence my “or so I assume”) I know next-to-nothing about geometric algebra. Does a PGA matrix have real semantics or are you just making a pun on a stacked collection of PGA vectors? In either case, you will definitely risk serious problems if you pirate Base.:* to do PGA operations for Array types (including Vector and Matrix) representing those. A student inspired by your lesson would follow your example and suffer that chaos.

What I do know is that geometric algebra is not linear algebra. For better or worse, Julia has decided that an Array is either a simple collection or a linear-algebraic object. The fact that we conflate even those two uses is something we occasionally have reason to regret and, were we still pre-v1.0, I would advocate dividing them. Giving Array a geometric-algebraic hat to wear as well is really starting to push it toward the breaking point.


Piracy aside, restricting your method to Vector{Float32} is fragile and non-idiomatic. That operation is totally feasible on any AbstractVector{<:Real} of suitable length (which, incidentally, is something you never check for). On that note, you’re leaving a lot of performance on the table by not using StaticArrays.jl’s SVector for your storage (and I will still recommend you wrap that in your own type).

1 Like

It’s more than a pun. For example, in the port of the inverse kinematics example application, the end points of the robot’s links are stored as PGA vector columns in a matrix, and the algorithm has to sweep back and forth over those vectors to converge (four cycles are enough). Traversing back and forth through the matrix with a column index is more convenient than assigning a unique variable name to the PGA vector for each end point.

[ikv: visualization from the inverse kinematics example]

I think most people using geometric algebra have an alternative perspective about conflation, although they usually call it unification instead of conflation. In projective geometric algebra, rotation and translation are the same thing (i.e., translation is rotation around an “ideal” point an infinite distance away). Force and torque are the same thing. Maxwell’s four equations can be unified into a single equation using geometric algebra. Pushing that theme to an extreme, I would not be surprised if some sort of geometric algebra operation turned out to be the fundamental calculation of each of the approximately 150,000 cortical columns in the neocortex of the human brain.

“Mathematics is an artifact of the human mind and how it represents spatial reality. The properties of math are direct evidence for the principles of operation of the mind and brain.” — Steven Lehar’s psycho-mathematical hypothesis, from the first part of his three-part visual introduction to Clifford algebra (2014)

For what it’s worth, the primary alternative to a matrix representation would be a vector (representing a list) of PGA vectors, not individually named PGA vectors.
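As a rough illustration of that representation (geoprod below is a placeholder name standing in for whichever geometric product function or method is actually in scope, e.g. the one from ripga2d.jl):

# One PGA vector per column, as in the original post.
X = rand(Float32, 8, 3)      # three polygon vertices
Y = ones(Float32, 8)         # e.g. a rotor

# The list-of-vectors alternative: each element is its own length-8 vector.
verts = collect(eachcol(X))             # Vector of column views
# verts = [X[:, j] for j in axes(X, 2)] # or copies, if views are not wanted

# Placeholder product so this snippet runs on its own; substitute the real
# geometric product here.
geoprod(a, b) = a .+ b

rotated = geoprod.(verts, Ref(Y))   # Ref(Y) keeps Y whole during broadcasting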

4 Likes

I’ve made my protest. Here’s your answer:

Change the first two lines of your pirate method to

function Base.:*(a::AbstractVector{T},b::AbstractVector{T}) where T<:Real
	res = similar(a)

and then call

julia> eachcol(X) .* Ref(Y)

If you instead follow the earlier suggestion and store X as a Vector of Vectors, just call

julia> X .* Ref(Y)

1 Like

Would you be interested in writing a stern warning message about the dangers of type piracy for me to include in the header comment of ripga2d.jl?

1 Like

And it’s objectively as objectionable now as it was back then, by an objectivist notion of objectivity, unless the objective is to corrupt the minds of the youth. :smiling_imp:

Why don’t you just define this * operator in your own module’s namespace, and then provide a macro to mark expressions that should use your algebra’s custom operators?

julia> module RIPga2d
           export @RIP
           custom_ops = (:*, ) # all operators that are specific to this algebra
           macro RIP(ex) :( let $((Expr(:(=), esc(op), op) for op in custom_ops)...); $(esc(ex)) end ) end
           *(args...) = Base.:*(args...)
           *(a::AbstractVector{E1}, b::AbstractVector{E2}) where {E1<:Real,E2<:Real} = let res = similar(a, promote_type(E1,E2))
               length(a)==length(b)==8 || error("In this algebra, multiplication is only defined for vectors of length 8.")
               res[1]=a[1]*b[1]+a[3]*b[3]+a[4]*b[4]-a[7]*b[7]
               res[2]=a[1]*b[2]+a[2]*b[1]+a[5]*b[3]-a[3]*b[5]+a[4]*b[6]-a[6]*b[4]-a[7]*b[8]-a[8]*b[7]
               res[3]=a[1]*b[3]+a[3]*b[1]-a[4]*b[7]+a[7]*b[4]
               res[4]=a[1]*b[4]+a[4]*b[1]+a[3]*b[7]-a[7]*b[3]
               res[5]=a[1]*b[5]+a[5]*b[1]+a[2]*b[3]-a[3]*b[2]+a[4]*b[8]+a[8]*b[4]+a[6]*b[7]-a[7]*b[6]
               res[6]=a[1]*b[6]+a[6]*b[1]-a[2]*b[4]+a[4]*b[2]+a[3]*b[8]+a[8]*b[3]-a[5]*b[7]+a[7]*b[5]
               res[7]=a[1]*b[7]+a[7]*b[1]+a[3]*b[4]-a[4]*b[3]
               res[8]=a[1]*b[8]+a[8]*b[1]+a[2]*b[7]+a[7]*b[2]+a[3]*b[6]+a[6]*b[3]+a[4]*b[5]+a[5]*b[4]
               res
           end
           *(A::AbstractMatrix{<:Real}, b::AbstractVector{<:Real}) = let res=similar(A)
               size(A,1)==length(b)==8 || error("In this algebra, yadda yadda ya.")
               for (j,col) in enumerate(eachcol(A));  res[:,j] = col * b  end
               res
           end
       end
Main.RIPga2d

works like so:

julia> using .RIPga2d

julia> X = [ 0.116556  0.36722    0.517178
             0.659318  0.469833   0.464139
             0.4231    0.0838702  0.530423
             0.213568  0.119756   0.326653
             0.71594   0.199458   0.411434
             0.944089  0.799287   0.350504
             0.935295  0.461366   0.207585
             0.104116  0.171554   0.5074190 ];

julia> y = ones(8);

julia> @RIP X[:,1] * y
8-element Vector{Float64}:
 -0.18207099999999998
 -0.701218
  1.261383
 -0.18207099999999998
  1.3951919999999998
  1.361466
  1.2613830000000001
  4.111981999999999

julia> @RIP X * y
8×3 Matrix{Float64}:
 -0.182071   0.10948  1.16667
 -0.701218  -0.35981  0.123473
  1.26138    0.7927   0.928533
 -0.182071   0.10948  1.16667
  1.39519    1.58187  1.83932
  1.36147    1.33376  1.56419
  1.26138    0.7927   0.928533
  4.11198    2.67234  3.31533

3 Likes

The quick answer is that I didn’t know how, so thanks for sharing your example. ripga2d.jl already defines a string macro to convert projective geometric algebra expressions from “math syntax” (e.g., the geometric product operator symbol is \thinspace) to “programming syntax” (e.g., the geometric product operator symbol is ‘*’). However, I haven’t yet mastered the Julia Metaprogramming documentation enough to write a macro like your example.

Is that macro what the Julia community wants? Would adding complexity to solve a problem that doesn’t exist get Elon Musk’s approval??

Within the context of a macro call, it’s well understood that the rules are different from those of the base language; operators can mean different things, and code can be inserted, removed, or rearranged. Macros are basically perfect for signifying a “domain-specific language.”

A good example of this is Tullio.jl, which allows you to express matrix operations in Einstein notation (and generates optimized code to execute it).
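For instance, a minimal sketch (assuming Tullio.jl is installed):

using Tullio

A = rand(3, 4); B = rand(4, 5)

# Matrix product written in index notation; the repeated index k is summed over.
@tullio C[i, j] := A[i, k] * B[k, j]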

For projective geometric algebra, which is arguably a rather specific domain, decorating expressions with a macro to mark a region where the rules of math are a bit different is accepted and even expected.

3 Likes

Is that the guy who put touchscreen-controlled windscreen wipers in his cars?

1 Like

I’m currently using the broadcasted pirated operator as the solution while porting a polygon intersection testing example application from JavaScript to Julia. However, I then plan to convert all my ported example applications to your macro solution. Given that Discourse is used as an archive, I’ll mark the macro solution as the accepted answer for the long term, even though I like the speed and concise code offered by the broadcasted pirated operator.

As an aside, in the last couple of months of porting projective geometric algebra example applications from JavaScript to Julia+Makie, I’ve become convinced that within five years Julia will overtake JavaScript and become the leading language for geometric algebra because of its speed, because of Makie, because of its metaprogramming, because of its REPL, and because of its community. That timeline would be even shorter if Julia offered some built-in support for the vector-vector multiplications fundamental to geometric algebra. Currently, that valuable vector-vector multiplication operator space lies fallow in Julia.