Grassmann.jl A\b 3x faster than Julia's StaticArrays.jl

So, there is a strange thing happening: if you only run `using Grassmann`, the `Base.:\` method is slower than if you additionally paste the `Base.:\` method definition into the REPL after loading the package.

# only `using Grassmann` without pasting `Base.:\` code into REPL
julia> @btime $A\$(v1+2v2+3v3+4v4+5v5)
  181.196 ns (4 allocations: 160 bytes)

# after pasting the `Base.:\` code into the REPL the performance is faster
julia> @btime $A\$(v1+2v2+3v3+4v4+5v5)
  72.708 ns (0 allocations: 0 bytes)

This is the reason why you get a slightly different result in your post.

It’s not clear to me why that might be the case; does anyone know anything about this?

Note that this algorithm is NOT Cramer’s rule. The symbols are analogous to Cramer determinants, but they are in the language of affine Grassmann algebra and it’s the original Grassmann algorithm. So it’s not exactly the same computation as Cramer’s rule, although analogous in terms of algebra.

Can you elaborate on what the potential issues are? If you can’t provide any examples or justification, then I am not necessarily inclined to believe your claim. The main issue I have been told about is that the internal definition of @pure might have breaking changes in the future, but that’s not an issue yet.

I am not aware of any bug reports for Grassmann.jl outside of my repositories (any references?).

This would be one of them as an example:


If you don’t understand what the annotation means, then you should probably not be using it at all. I don’t really care what you believe; I implemented @pure, so I’m generally rather familiar with how it can go wrong. I created it as a crutch for the compiler internally, and as we’ve improved the compiler, we’ve been attempting to remove its usage, since it generally only causes problems even when applied correctly.


Yes, I’m aware you implemented it, but you’ve not yet convinced me to not use it here. This Grassmann program is my playground for experiments with mathematics and programming. It works fine for me, and I am always willing to consider changing its application in any use of Base.@pure if it’s a mistake, but am not currently convinced it is a demonstrable issue here.

willing to consider changing its application in any use of Base.@pure if it’s a mistake

You’re clearly not actually willing to, since it’s been pointed out before that the bug report mentioned above demonstrates that you are using it incorrectly and that it causes issues.


That bug was fixed already, and the fix didn’t involve changing anything with @pure as it was a bug with the type system in a conversion method.

I’m only guessing, but I think that the reason why "return value allocation is definitely not pure" is that it depends on where heap memory is available, and this is global state.


f() = []
f() === f() # returns false

The function f is not pure, since calling it with the same (empty) set of immutable arguments does not return the same result.
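To make the contrast concrete, here is a minimal sketch (plain Julia, no Grassmann code): a function over immutable bitstypes does satisfy the `===` guarantee, while the allocating `f` above does not.

```julia
# Pure: Ints are immutable bitstypes, so equal results are `===`-identical.
g(x, y) = x + y
@assert g(1, 2) === g(1, 2)   # same immutable value every call

# Impure: each call heap-allocates a fresh, distinct mutable array.
f() = []
@assert f() !== f()           # two different objects on the heap
```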


This doesn’t actually matter for my use case though. I don’t need the output to be ===. It only generates some symbols which get pasted into a function.
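If the goal is only to generate symbols that get spliced into a function body, a `@generated` function already covers that pattern without any purity guarantee; a minimal sketch (the helper `make_terms` and function `sumsq` are hypothetical, not Grassmann.jl code):

```julia
# Hypothetical sketch: the expression-building helper runs at compile
# time inside the generator, so it does not need to be `@pure`.
make_terms(N) = [:(x[$i] * x[$i]) for i in 1:N]

@generated function sumsq(x::NTuple{N,Int}) where {N}
    terms = make_terms(N)      # allocates, and that is fine here
    return :(+($(terms...)))   # splice the generated expressions
end
```

Here `sumsq((1, 2, 3))` expands to `x[1]*x[1] + x[2]*x[2] + x[3]*x[3]` once per type signature, so the allocation in the helper never shows up in the runtime path.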

The @pure annotation tells the compiler that you guarantee the output to be ===. If you don’t have or need that guarantee, then you should let the compiler know by removing the annotation.

Alright then, I will remove that annotation in this particular case, but there are many valid use cases of it in other places in my package.


Julia has gotten significantly better at constant propagation since you first wrote this, so I would be surprised if many of them actually give you a performance benefit anymore in any case.
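A quick way to check whether an annotation still pays off is to compare the generated code with and without it; a sketch (function names hypothetical) where modern constant propagation folds the call with no `@pure` at all:

```julia
# Hypothetical helper: compute a size from a compile-time constant.
dimension(n) = 2^n

# With a literal argument, constant propagation folds the call away;
# on recent Julia versions `@code_typed h()` shows a bare `return 16`.
h() = dimension(4)
```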


At the risk of sounding like a broken record, this is probably not true, mostly because there isn’t really a way to use @pure correctly, ever. It doesn’t have well-defined semantics. At some point we need a version of it that’s well-defined. The point people are trying to make here is that it’s best to avoid @pure because the compiler makes zero guarantees about what it means, so if you don’t 100% need it, you’re better off without it. Even if nothing breaks right now, future versions of Julia can and will break and crash your code. You might not care, but that’s the reason people try to get you to stop using it.


Yes, I’m well aware of all that. Having @pure does actually improve my code in a lot of places; it’s not something I can automatically remove everywhere. I agree it is worth looking into how much the constant propagation has improved and checking whether some of the pure annotations can be removed, but it’s not currently a high priority.

That may be a relevant factor, but I think a much bigger and harder-to-avoid problem is that the method table itself is global mutable state. Hence any function which relies on the state of the method table is itself impure.

That means no generic, multiple dispatch functions should be called inside a @pure function body, which is almost impossible in Julia.
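This is easy to demonstrate without `@pure` at all; a minimal sketch of how the method table makes results time-dependent:

```julia
# The value returned by `measure` depends on the current method table.
area(r) = r^2                # first definition
measure(r) = area(r)         # a "pure-looking" wrapper
@assert measure(3) == 9

area(r) = pi * r^2           # redefining a method mutates global state
@assert measure(3) ≈ 9pi     # the "same" call now gives a new answer
```

If `measure` had been annotated @pure, the compiler would have been entitled to keep returning the cached `9` even after the redefinition.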

This mutability in the method table may not affect you if you’re not hot-modifying it; however, it seems liable to cause weird, hard-to-reproduce bugs if other people were to try to build tools on top of Grassmann.jl, adding their own methods and overloading your functions (or if someone pirated the methods you use, or if you pirated someone else’s methods).

This seems especially worrisome since you do have a tendency to advertise Grassmann.jl as something people should adapt their projects to make use of, so if someone did act on that it could result in an unfortunate situation.


It is clearly stated in my repository that there is no warranty. Those who are serious about using Grassmann.jl should consider consulting with me.

Hum, too bad. I was really excited by your post, because I have to solve systems like A\b on two occasions: orbit interpolation and inside the Kalman filter (which applies to the attitude and orbit estimation inside the satellite). This performance gain would be very handy. However, given all the comments, maybe I will need to wait until those concerns are resolved.

Anyway, great work! It is very interesting.


@Ronis_BR, I don’t think you will encounter a major issue with the A\b method; it would be interesting to see what you try out! Another interesting aspect would be using bivectors and quaternions for that as well. As @Mason mentioned, what you probably don’t want to do is overload internal Grassmann methods.


Could somebody please help me figure out what’s going on with this performance variance?

So what’s exactly going on here? Why is it faster when I paste it into the REPL?

This doesn’t happen with the ∈ method, which is fast without pasting anything into the REPL:

julia> @btime $(barycenter(A)) ∈ $A
  61.499 ns (0 allocations: 0 bytes)

So, for the linear solve method there is a performance gain from pasting into the REPL, but not for ∈.

Can anyone help me figure out why the linear solve method changes in performance like this?

That sure seems like the sort of strangeness you might see from Base.@pure abuse; order of definition suddenly matters.


Whether or not @pure is used doesn’t affect this, you need to actually back up your claim with proof.