I want to generate a random matrix of binary numbers (elements of the field GF(2), i.e. {0, 1}), check whether it is invertible by computing its determinant, and then get its inverse.
From a stylistic point of view (and for elegance of types), I was hoping that the following would work:
using LinearAlgebra
bin_matrix = rand(Bool, 4, 4)      # random 4×4 matrix over {true, false}
determinant = det(bin_matrix)
if determinant != false            # i.e. the determinant is nonzero
    inverted = inv(bin_matrix)
end
I expected determinant to be a Bool, but it was a Float64. I also expected inverted to contain only {true, false}, but it actually contained {0.0, 1.0, -1.0}.
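For concreteness, the same behaviour shows up with a fixed (non-random) matrix, for example:
using LinearAlgebra
B = Bool[1 1; 1 0]           # an invertible Bool matrix
det(B)                       # -1.0  -- a Float64, not a Bool
inv(B)                       # 2×2 Matrix{Float64} with entries 0.0, 1.0 and -1.0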
I know the algorithm for correct determinant and inversion functions in principle; that is not my question.
Rather, I was wondering, from the point of view of “Julia’s philosophy”, why are Bool matrices forced to behave like Float64 matrices?
It is especially surprising to me because true+false and true*false in Julia do behave exactly as I would have expected (and I will be using them in my implementation of det), yet it seems they are not the operations actually used in the computation of LinearAlgebra.det. Isn’t the whole point of the elegant multiple-dispatch paradigm that functions like LinearAlgebra.det would actually use the special + and * methods that Bool provides?
Julia’s LinearAlgebra standard library doesn’t have built-in functions for exact determinants of integer matrices; that functionality is more appropriate for a number-theory package. Instead, its det function works by first computing the LU factorization (i.e. doing Gaussian elimination), which requires converting to floating point (and is therefore only accurate up to roundoff errors).
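The promotion is visible in the factorization itself; this is just a quick illustration with an arbitrary small matrix:
using LinearAlgebra
F = lu(Bool[1 1; 1 0])
eltype(F.L)                  # Float64 -- the elimination is carried out in floating point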
Thanks! I understand the constraint that different fields have very different efficient algorithms and that the standard library should not necessarily include such special case algorithms.
I still have two questions:
Am I correct in assuming that an “ideologically pure” implementation of det would look like this (something like the sketch after this list)?
- It would have a general-purpose method that works on any AbstractMatrix by using the * and + operators provided by the element type’s field;
- it would have a specialized fast implementation for AbstractMatrix{<:AbstractFloat} that uses LU factorization;
- it would have a specialized fast implementation for AbstractMatrix{Bool} that uses something from a number-theory package;
- et cetera.
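Concretely, I imagine something like the following toy sketch. The name mydet and its methods are made up (this is not how LinearAlgebra organizes det), and the generic fallback deliberately uses the naive O(n!) cofactor expansion just to show a method that needs nothing beyond zero, one, +, - and *:
using LinearAlgebra

# Generic fallback: Laplace (cofactor) expansion along the first row,
# using only the element type's zero, one, +, - and *.
function mydet(A::AbstractMatrix{T}) where {T}
    n = size(A, 1)
    n == 1 && return A[1, 1]
    d = zero(T)
    for j in 1:n
        minor = A[2:end, [1:j-1; j+1:n]]
        d += (isodd(j) ? one(T) : -one(T)) * A[1, j] * mydet(minor)
    end
    return d
end

# Specialized fast method for floating-point matrices: reuse the LU-based det.
mydet(A::AbstractMatrix{<:AbstractFloat}) = det(A)

# A specialized exact method for AbstractMatrix{Bool} (e.g. elimination over GF(2))
# would be a third method here.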
My second question is: Is there a “blessed by the community” package for number theory in Julia?
Most Julia programmers try to implement generic algorithms when possible, then provide specialized methods when applicable.
I would not call this an ideology, it’s just a practical approach.
Note that an implementation using * and + is tricky because, as discussed in the other thread, overflow becomes an issue. Int and friends are not \mathbb{Z} but a bounded subset of it, so (technically) they do not give you the exact ring \mathbb{Z}, let alone a field.
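For example, machine-integer arithmetic silently wraps around instead of growing:
typemax(Int) + 1 == typemin(Int)   # true -- Int addition wraps on overflow
2^63                               # -9223372036854775808 on a 64-bit machine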
Actually, the overflow is exactly what I desire. For instance, working with the finite field GF(n) (i.e. Z_n with some extra structure) requires overflow after the n-th element.
I would not call this overflow, it’s just the property of the operator for that field.
The term overflow is usually used for situations where the “correct” value (according to the definitions people usually use for the field that is approximated) is outside what you can represent.
If you define the + and * operators for your (finite) field, and they don’t overflow, there is no reason a generic det should not work. If you package the code for this, just include a method for LinearAlgebra.det and you should be fine.
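Here is a rough sketch of that idea; GF2 and gf_det are names I made up (not from any package). It defines a GF(2) scalar type, a generic elimination-based determinant that needs only the field operations, and a LinearAlgebra.det method wired to it:
using LinearAlgebra

# A GF(2) scalar: + is XOR (addition mod 2), * is AND (multiplication mod 2).
struct GF2 <: Number
    val::Bool
end

Base.:+(a::GF2, b::GF2) = GF2(a.val ⊻ b.val)
Base.:-(a::GF2, b::GF2) = a + b               # subtraction equals addition in GF(2)
Base.:-(a::GF2) = a
Base.:*(a::GF2, b::GF2) = GF2(a.val & b.val)
Base.:/(a::GF2, b::GF2) = b.val ? a : throw(DivideError())
Base.zero(::Type{GF2}) = GF2(false)
Base.one(::Type{GF2}) = GF2(true)
Base.iszero(a::GF2) = !a.val

# Generic determinant over any field, by Gaussian elimination with
# "first nonzero pivot" pivoting, so it needs only +, -, *, / and iszero.
function gf_det(A::AbstractMatrix{T}) where {T}
    n = size(A, 1)
    M = copy(A)                                  # mutable working copy
    d = one(T)
    for j in 1:n
        p = findfirst(i -> !iszero(M[i, j]), j:n)
        p === nothing && return zero(T)          # column has no pivot: singular
        p += j - 1
        if p != j
            M[j, :], M[p, :] = M[p, :], M[j, :]  # swap rows
            d = -d
        end
        d *= M[j, j]
        for i in j+1:n
            f = M[i, j] / M[j, j]
            M[i, :] .-= f .* M[j, :]             # eliminate below the pivot
        end
    end
    return d
end

# Hook the generic routine into the standard API for our own type.
LinearAlgebra.det(A::AbstractMatrix{GF2}) = gf_det(A)

det(GF2.(rand(Bool, 4, 4)))   # GF2(true) exactly when the matrix is invertible over GF(2)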
It just so happens that for the usual sets people work with (in particular, floats), overflow and precision loss is a major issue, so linear algebra code includes specialized methods for this (and nearly everything else — very few calculations will match the formulas usually presented in numerically-naive textbooks).
Nemo.jl contains what you need. Outside of Julia, SageMath has pretty good finite-field BLAS; I never got around to benchmarking the BLAS performance of Nemo.
The issue is that it is extremely difficult to implement an efficient det algorithm that does not use division. With division (e.g. via Gaussian elimination), you can do it in O(n^3) operations. The naive combinatorial determinant formula is O(n!), which is worse than exponential. There are apparently division-free algorithms that require O(n^3.2) operations and the like, but they seem to be of the theoretical genre, mainly intended for proving complexity bounds rather than for practical implementation (i.e., their constant factors are probably huge, and implementing them looks like it would require a small computer-algebra system).
As a consequence, the det function in the LinearAlgebra standard library implements only a simple and efficient algorithm that requires division.
Fascinating! I was only aware of O(n^4) algorithms. These look implementable, but would of course have horrible cancellation properties for anything non-exact.