So, there is a strange thing happening: if you only run `using Grassmann`, the `Base.:\` method is slower than if you additionally paste the `Base.:\` method definition into the REPL after loading the package.
```julia
# only `using Grassmann`, without pasting the `Base.:\` code into the REPL
julia> @btime $A\$(v1+2v2+3v3+4v4+5v5)
  181.196 ns (4 allocations: 160 bytes)
```

```julia
# after pasting the `Base.:\` code into the REPL, the performance is faster
julia> @btime $A\$(v1+2v2+3v3+4v4+5v5)
  72.708 ns (0 allocations: 0 bytes)
```
This is the reason why you get a slightly different result in your post.
It’s not clear to me why that might be the case. Does anyone know anything about that?
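To make the comparison concrete, here is a minimal toy sketch (no Grassmann.jl involved; `Toy`, `solve`, `A`, and `b` are hypothetical names) of what “pasting the method into the REPL” amounts to: the same method body evaluated at top level, which gives the same result as the package-defined method, with only the timing differing:

```julia
# Toy module standing in for a package that defines a solve method.
module Toy
    solve(A, b) = A \ b   # stand-in for the package's `Base.:\` method
end

A = [2.0 0.0; 0.0 4.0]
b = [2.0, 8.0]

x_pkg = Toy.solve(A, b)   # result via the "package" definition

# "Pasting the code into the REPL": the same body, evaluated at top level.
solve(A, b) = A \ b
x_repl = solve(A, b)

# Both definitions agree on the answer; in the real case only the
# benchmark timings differed.
@assert x_pkg == x_repl == [1.0, 2.0]
```

From there, `@code_typed Toy.solve(A, b)` versus `@code_typed solve(A, b)` (from `InteractiveUtils`) is one way to check whether the compiler produced different code for the two call sites.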
Note that this algorithm is NOT Cramer’s rule. The symbols are analogous to Cramer determinants, but they are expressed in the language of affine Grassmann algebra, and it is Grassmann’s original algorithm. So it is not exactly the same computation as Cramer’s rule, although analogous in terms of algebra.
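For contrast, classical Cramer’s rule computes `x[i] = det(A_i)/det(A)`, where `A_i` is `A` with column `i` replaced by `b`. A minimal sketch of that classical computation (using `LinearAlgebra`, not the Grassmann algorithm; `cramer` is a hypothetical helper name):

```julia
using LinearAlgebra

# Classical Cramer's rule: x[i] = det(A with column i replaced by b) / det(A).
# Shown only for contrast; the Grassmann.jl solve is not this computation.
function cramer(A::AbstractMatrix, b::AbstractVector)
    d = det(A)
    map(1:length(b)) do i
        Ai = copy(A)
        Ai[:, i] = b   # replace column i with the right-hand side
        det(Ai) / d
    end
end

A = [2.0 1.0; 1.0 3.0]
b = [3.0, 5.0]
@assert cramer(A, b) ≈ A \ b   # both give [0.8, 1.4]
```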
Can you elaborate on what the potential issues are? If you can’t provide any examples or justification, then I am not necessarily inclined to believe your claim. The main issue I have been told about is that the internal definition of `@pure` might have breaking changes in the future, but that’s not an issue yet.
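For context, `Base.@pure` is an internal, undocumented compiler annotation: it asserts that a method’s result depends only on its arguments, with no side effects and no dependence on methods that might later be redefined, which lets the compiler constant-fold calls aggressively. A minimal sketch (the name `pure_add` is hypothetical; misusing `@pure` on methods that don’t satisfy these assumptions can cause incorrect constant-folding):

```julia
# `Base.@pure` is internal and may change between Julia versions.
# It should only be applied to genuinely pure, non-overloadable methods.
Base.@pure pure_add(x::Int, y::Int) = x + y

@assert pure_add(2, 3) == 5
```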
I am not aware of any bug reports for Grassmann.jl outside of my repositories (any references?).