I want to generate a random matrix of binary numbers (the GF(2) field, i.e. `{0,1}`), then check whether it is invertible by computing its determinant and obtaining its inverse.

From a stylistic point of view (and for elegance of types), I was hoping that the following would work:

```
using LinearAlgebra
bin_matrix = rand(Bool, 4, 4)
determinant = det(bin_matrix)
if determinant != false
    inverted = inv(bin_matrix)
end
```
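For concreteness, here is a minimal reproduction of the types I am describing (using a hard-coded 2×2 `Bool` matrix so invertibility is guaranteed; the variable names are just for illustration):

```julia
using LinearAlgebra

# A Bool matrix that is certainly invertible over the reals
bin_matrix = [true false; false true]

d = det(bin_matrix)
@show typeof(d)                 # Float64, not Bool
@show eltype(inv(bin_matrix))   # Float64, not Bool
```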

I expected `determinant` to be a `Bool`, but it was a `Float64`. I also expected `inverted` to contain only `{true, false}`, but it actually contained `{0.0, 1.0, -1.0}`.

I know the algorithm for correct determinant and inversion functions over GF(2) in principle; that is not my question.

Rather, I was wondering, from the point of view of “Julia’s philosophy”, why are `Bool` matrices forced to behave as `Float64` matrices?

It is especially surprising to me, because `true+false` and `true*false` in Julia do behave exactly as I would expect (and I will be using them in my implementation of `det`), but it seems they are not the operations actually used in the computation of `LinearAlgebra.det`. Isn’t the whole point of the elegant multiple-dispatch paradigm that functions like `LinearAlgebra.det` would actually use the special `+` and `*` methods that `Bool` provides?
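Since I mention implementing `det` myself, here is one possible sketch of what I have in mind (the name `gf2_det` and the elimination scheme are my own illustration, not the `LinearAlgebra` implementation): a GF(2) determinant via Gaussian elimination, where addition is `xor`:

```julia
# Sketch: determinant over GF(2), where row addition is elementwise xor.
# The determinant of a square GF(2) matrix is 1 iff the matrix has full rank.
function gf2_det(M::AbstractMatrix{Bool})
    A = Matrix(copy(M))                  # work on a mutable copy
    n = size(A, 1)
    @assert size(A, 2) == n "matrix must be square"
    for j in 1:n
        # find a pivot (a `true` entry) at or below row j in column j
        p = 0
        for i in j:n
            if A[i, j]
                p = i
                break
            end
        end
        p == 0 && return false           # no pivot: singular, det == 0
        if p != j                        # swap rows (sign is irrelevant in GF(2))
            A[j, :], A[p, :] = A[p, :], A[j, :]
        end
        for i in (j+1):n
            if A[i, j]                   # eliminate below the pivot
                A[i, :] .= xor.(A[i, :], A[j, :])
            end
        end
    end
    return true                          # full rank: det == 1
end
```

This returns a `Bool`, which is the type elegance I was hoping `det` itself would preserve.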