Except that it appears to be faster for small matrices.
What you’re missing (if anything) is only that I’m not claiming it’s better or worse in terms of performance for small matrices, just that it’s interesting that Cramer’s rule is not a total loss numerically.
The actual speed of small matrix kernels depends so much on the microarchitectural detail of your CPU (ability to use SIMD, detail of data dependencies and processor pipelining etc etc) that I’d not hazard a guess one way or another about performance without looking rather closely and benchmarking.
```julia
Base.@pure function Grassmann.Cramer(N::Int)
```
I hate to always be that guy, but the `xy` return value allocation is definitely not pure, and that can introduce issues into your program. Try to avoid ever marking functions as `@pure`, as that can introduce serious issues if you’re wrong; functions generally get marked pure automatically if you’re correct. (Notably, we also now immediately reject any bug reports that include Grassmann.jl in the dependency list due to this abuse.)
So, there is a strange thing happening: if you only run `using Grassmann`, the `Base.:\` method is slower than if you additionally paste the `Base.:\` method into the REPL after loading the package.

```julia
# only `using Grassmann` without pasting `Base.:\` code into the REPL
julia> @btime $A\$(v1+2v2+3v3+4v4+5v5)
  181.196 ns (4 allocations: 160 bytes)

# after pasting the `Base.:\` code into the REPL the performance is faster
julia> @btime $A\$(v1+2v2+3v3+4v4+5v5)
  72.708 ns (0 allocations: 0 bytes)
```
This is the reason why you get a slightly different result in your post.
It’s not clear to me why that might be the case; does anyone know anything about it?
Note that this algorithm is NOT Cramer’s rule. The symbols are analogous to Cramer determinants, but they are in the language of affine Grassmann algebra and it’s the original Grassmann algorithm. So it’s not exactly the same computation as Cramer’s rule, although analogous in terms of algebra.
Can you elaborate on what the potential issues are? If you can’t provide any examples or justification, then I am not necessarily inclined to believe your claim. The main issue I have been told about is that the internal definition of `@pure` might have breaking changes in the future, but that’s not an issue yet.
I am not aware of any bug reports for Grassmann.jl outside of my repositories (any references?).
This would be one of them as an example: https://github.com/JuliaLang/julia/issues/31185
If you don’t understand what the annotation means, then you should probably not be using it at all. I don’t really care what you believe; I implemented `@pure`, so I’m generally rather familiar with how it can go wrong. I created it as a crutch for the compiler internally, and as we’ve improved the compiler, we’ve been attempting to remove usage of it, since it can cause problems even when applied correctly.
Yes, I’m aware you implemented it, but you’ve not yet convinced me not to use it here. This Grassmann program is my playground for experiments with mathematics and programming. It works fine for me, and I am always willing to consider changing its application in any use of `Base.@pure` if it’s a mistake, but am not currently convinced it is a demonstrable issue here.
> willing to consider changing its application in any use of `Base.@pure` if it’s a mistake
You’re clearly not actually willing to, since it’s been pointed out before that the bug report mentioned above (https://github.com/JuliaLang/julia/issues/31185) demonstrates that you are using it incorrectly and that it causes issues.
That bug was fixed already, and the fix didn’t involve changing anything about `@pure`; it was a bug with the type system in a conversion method.
I’m only guessing, but I think that the reason why the "return value allocation is definitely not pure" is that it depends on where heap memory is available, and this is global state.
```julia
f() = []       # illustrative body: allocates a fresh array on every call
f() === f()    # returns false
```
`f` is not pure, since calling it with the same (empty) set of immutable arguments does not return the same result.
This doesn’t actually matter for my use case, though. I don’t need the output to be `===`; it only generates some symbols which get pasted into a function.
The `@pure` annotation tells the compiler that you guarantee the output to be `===`. If you don’t have or need that guarantee, then you should let the compiler know by removing the annotation.
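To illustrate the point, here is a minimal sketch (with hypothetical names, loosely modeled on the symbol-generating use case) of why a `@pure` annotation on an allocating function is a problem: the compiler is permitted to constant-fold repeated calls into one cached object, changing `===` behavior.

```julia
# Hypothetical sketch: a @pure function returning a freshly allocated
# (mutable) value. `make_syms` is an illustrative name, not Grassmann API.
Base.@pure make_syms(N::Int) = [Symbol(:x, i) for i in 1:N]

a = make_syms(3)
b = make_syms(3)
# Without @pure, `a === b` is false: each call heap-allocates a new array.
# With @pure, the compiler is free to fold both calls to one cached object,
# so `a === b` can silently become true, and mutating `a` would then also
# mutate `b`. Whether folding happens depends on the compiler version.
```

Since the annotation only *permits* folding, the breakage is non-deterministic across Julia versions, which is what makes such bugs hard to reproduce.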
Alright then, I will remove the annotation in this particular case, but there are many valid use cases of it in other places in my package.
Julia has gotten significantly better at constant propagation since you first wrote this, so I would be surprised if many of them actually give you a performance benefit anymore in any case.
At the risk of sounding like a broken record, this is probably not true, mostly because there isn’t really a way to use `@pure` correctly, ever. It doesn’t have well-defined semantics. At some point we need a version of it that is well defined. The point people are trying to make here is that it’s best to avoid `@pure` because the compiler makes zero guarantees about what it means, so if you don’t 100% need it, you’re better off without it. Even if nothing breaks right now, future versions of Julia can and will break and crash your code. You might not care, but that’s the reason people try to get you to stop using it.
Yes, I’m well aware of all that. Having `@pure` does actually improve my code in a lot of places; it’s not something I can automatically remove from everywhere. I agree it is worth looking into how much constant propagation has improved and checking whether some of the pure annotations can be removed, but it’s not currently a high priority.
That may be a relevant factor, but I think a much bigger and harder-to-avoid problem is that the method table itself is global mutable state. Hence any function which relies on the state of the method table is itself impure.
That means no generic, multiple-dispatch functions should be called inside a `@pure` function body, which is almost impossible in Julia.
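A minimal sketch of the method-table hazard described above (all names here are hypothetical, chosen for illustration): a `@pure` function calls a generic function, and a later redefinition of that generic function may not be reflected in already-folded results.

```julia
# `area` is an ordinary generic function: its method is an entry in the
# global, mutable method table.
area(r) = r + 1

# Annotating `cached_area` as @pure tells the compiler it may
# constant-fold calls to it, baking in whatever `area` meant at the time.
Base.@pure cached_area(r) = area(r)

g() = cached_area(2)   # the compiler may fold this to the constant 3
g()                    # likely 3

# Redefining `area` mutates the method table...
area(r) = 2r
g()                    # ...but g() may STILL return the stale 3, not 4,
                       # because the folded constant bypasses dispatch.
```

Whether the stale value is actually observed depends on the compiler version and optimization decisions, which is exactly why such misuse tends to surface as weird, hard-to-reproduce bugs rather than immediate errors.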
This mutability in the method table may not affect you if you’re not hot-modifying it; however, it seems liable to cause weird, hard-to-reproduce bugs if other people were to build tools on top of Grassmann.jl, adding their own methods and overloading your functions (or if someone pirated the methods you use, or if you pirated someone else’s methods).
This seems especially worrisome since you do have a tendency to advertise Grassmann.jl as something people should adapt their projects to make use of, so if someone did act on that, it could result in an unfortunate situation.
It is clearly stated in my repository that there is no warranty. Those who are serious about using Grassmann.jl should consider consulting with me.
Hmm, too bad. I was really excited by your post, because I have to solve systems like `A\b` on two occasions: when interpolating orbits and inside the Kalman filter (which applies to the attitude and orbit estimation inside the satellite). This performance gain would be very handy. However, given all the comments, maybe I will need to wait until those concerns are resolved.
Anyway, great work! It is very interesting.
@Ronis_BR, I don’t think you will encounter a major issue with the `A\b` method; it would be interesting to see what you try out! Another interesting aspect would be using bivectors and quaternions for that as well. As @Mason mentioned, what you probably don’t want to do is overload internal