# Feature or Bug? - Divide by zeros: 1/0 vs [1]/[0]

Before I open an Issue, I would like to discuss this question.

The core of the question is the following difference:

julia> 1 / 0
Inf
julia> [1] / [0]
1×1 Array{Float64,2}:
 0.0


In my method this led to a problem, as discovered here, because I use left division with a non-square matrix, which can be full of zeros in some special situations.

I supposed that, as a tends to zero, [1] / [a] tends to infinity:

julia> [1] / [0.01]
1×1 Array{Float64,2}:
 100.0
julia> [1] / [0.001]
1×1 Array{Float64,2}:
 1000.0000000000001


So I supposed that the limit case would give Inf when a is 0.0.
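
The jump at zero follows from how the Moore-Penrose pseudoinverse treats a zero singular value: it is dropped, not inverted. A minimal sketch of the 1×1 case in Python (the helper name `pinv_scalar` is mine, not a Julia or NumPy function):

```python
def pinv_scalar(a):
    # Moore-Penrose pseudoinverse of the 1x1 matrix [a]:
    # a nonzero singular value |a| is inverted, but a zero singular
    # value is dropped, so a == 0.0 maps to 0.0 rather than Inf.
    return 1.0 / a if a != 0.0 else 0.0

print(pinv_scalar(0.01))   # 100.0 -- matches [1] / [0.01]
print(pinv_scalar(0.001))  # ~1000.0 -- matches [1] / [0.001] up to rounding
print(pinv_scalar(0.0))    # 0.0 -- the limit case jumps to 0.0, not Inf
```

So the pseudoinverse is discontinuous at a = 0: it approaches infinity as a → 0 but equals exactly zero at a = 0.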

Furthermore, if a square matrix is used in the division:

julia> [1 0;0 1]/[1 0; 0 0]
ERROR: SingularException(2)


it leads to an error, and I think [0] is a square matrix as well (it is 1×1).

I accept (though I am not happy about it) that pinv provides zeros for non-square matrices:

julia> [1 0;0 1; 0 0]/[1 0; 0 0; 0 0]
3×3 Array{Float64,2}:
 1.0  0.0  0.0
 0.0  0.0  0.0
 0.0  0.0  0.0


pinv fulfils all of the definition, and the help clarifies:

For matrices M with floating point elements, it is convenient to compute the pseudoinverse by inverting only singular values greater than rtol * maximum(svdvals(M)).

I would prefer Inf (or an error) if the number of nonzero singular values is smaller than the smaller dimension of the matrix (e.g. if det(A * A') = 0).
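
For reference, the rtol rule quoted from the help above can be sketched for a diagonal matrix, whose singular values are just the absolute diagonal entries (pure-Python sketch; `pinv_diag` is a hypothetical helper name, not part of Julia or any library):

```python
def pinv_diag(d, rtol=1e-12):
    # Pseudoinverse of a diagonal matrix, given its diagonal entries d.
    # Singular values with magnitude <= rtol * max(singular values)
    # are treated as zero: they are dropped, not inverted.
    s_max = max(abs(x) for x in d)
    if s_max == 0.0:
        return [0.0 for _ in d]  # the zero matrix maps to the zero matrix
    return [1.0 / x if abs(x) > rtol * s_max else 0.0 for x in d]

print(pinv_diag([1.0, 0.0]))  # [1.0, 0.0] -- like pinv([1 0; 0 0])
print(pinv_diag([0.0, 0.0]))  # [0.0, 0.0] -- zeros, no Inf and no error
```

This is why a rank-deficient matrix gets zeros in its pseudoinverse instead of Inf or an error: the small singular values are silently zeroed out.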

Note that the same problem can be found for both left and right division (/, \). The help shows that

help?>  \
...
\(A, B)
Matrix division using a polyalgorithm. For input matrices A and B, the result X is such that A*X == B when A is square.


but

julia> [0.0] \ [1.0]
0.0
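
One way to read that 0.0: for a rank-deficient A, the pseudoinverse-based A \ B returns the minimum-norm least-squares solution, not an exact one. A plain-Python sketch of the 1×1 case (the variable names are mine, for illustration only):

```python
# For A = [0.0] and B = [1.0], every x gives the same residual
# |A*x - B| = 1.0, so the least-squares problem has infinitely many
# minimizers; the pseudoinverse selects the minimum-norm one, x = 0.0.
A, B = 0.0, 1.0
candidates = [-2.0, -1.0, 0.0, 1.0, 2.0]
residuals = [abs(A * x - B) for x in candidates]
print(residuals)  # [1.0, 1.0, 1.0, 1.0, 1.0] -- all candidates tie
x = min(candidates, key=abs)  # minimum-norm choice among the tied minimizers
print(x)  # 0.0 -- matches [0.0] \ [1.0]
```

Note that no choice of x satisfies A*x == B here, which is what makes the docstring's promise misleading for this case.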


I think you mean that [0.0] is a square matrix, but it isn't. It's a vector. For square matrices,

julia> ones(1, 1) / zeros(1, 1)
ERROR: LinearAlgebra.SingularException(1)
Stacktrace:
 [1] ldiv!(::LinearAlgebra.Diagonal{Float64,Array{Float64,1}}, ::Array{Float64,2}) at /home/tamas/src/julia-git/usr/share/julia/stdlib/v1.2/LinearAlgebra/src/diagonal.jl:482
 [2] /(::Array{Float64,2}, ::Array{Float64,2}) at /home/tamas/src/julia-git/usr/share/julia/stdlib/v1.2/LinearAlgebra/src/generic.jl:978
 [3] top-level scope at REPL:1


It’s a feature, and it’s why I hate so much the practice of assigning linear-algebra meaning to cases that could be ambiguous with element-wise operations.


And for previous discussions/complaints about related issues:

Thank you for the fast reply. Now I see.
I have realized that
pinv(A)
is not the same as
inv(A'*A)*A'
for matrices A with rank(A) < minimum(size(A)).
In my code I now use the latter, which behaves as I need (and it seems to be faster in my case).
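
The two formulas agree exactly when A has full column rank; when rank(A) < minimum(size(A)), A'A is singular, so inv(A'*A)*A' is undefined, while pinv quietly returns zeros. A 1×1 sketch in plain Python (helper names are mine, for illustration):

```python
def pinv_1x1(a):
    # Moore-Penrose pseudoinverse of the 1x1 matrix [a]
    return 1.0 / a if a != 0.0 else 0.0

def normal_eq_inv(a):
    # inv(A' * A) * A' specialized to the 1x1 case: (1 / a^2) * a
    return (1.0 / (a * a)) * a  # raises ZeroDivisionError when a == 0

print(pinv_1x1(2.0), normal_eq_inv(2.0))  # 0.5 0.5 -- full rank: they agree
print(pinv_1x1(0.0))                      # 0.0 -- pinv drops the zero singular value
try:
    normal_eq_inv(0.0)
except ZeroDivisionError:
    print("inv(A'A) fails: A'A is singular")  # rank-deficient case errors out
```

So the normal-equations formula errors loudly in exactly the rank-deficient cases where pinv silently returns zeros, which is presumably why it behaves "as needed" here.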


Isn’t that still a bug, since neither ([1] / [0]) * [0] nor [0] * ([0] \ [1]) equals [1]?
Seems to contradict what the docs promise, though they don’t say anything explicit about vectors.

I am not sure I understand what the issue is. The Moore-Penrose inverse of [0.0] is correctly calculated to be [0.0], and the rest follows from that.

Is there a notion of a Moore-Penrose pseudo-inverse for vectors (i.e. elements of $\mathbb{R}^n$, as opposed to matrices of $\mathcal{M}_{n,1}(\mathbb{R})$)? I would be interested to see any reference explaining how such a thing would be defined.

Even if so, it could be argued that it is inconsistent for the / and \ operators to compute (Moore-Penrose) pseudo-inverses for vectors, but fail with a SingularException for singular matrices.

I agree. I actually used to like this tendency to give many array operators a linear algebra meaning, but a few recent discussions gradually led me to like it less and less.

Not that I am aware of. Here Julia is treating that vector as a single-column matrix, consistently with some other operations.

But as you noted yourself earlier in this thread, if [0.0] were to be treated as a matrix, it would be of size $1\times 1$ (therefore square), and the very same operation would fail with a SingularException.

Everything can be considered consistent if [0.0] is considered to be a non-square matrix:

• matrix -> inversion has a linear algebra meaning
• non-square -> no real inverse can possibly be defined, hence defaulting to a pseudo-inverse

I can understand why vectors could be considered to be non-square matrices in general (i.e. when their size is not 1); I don’t actually like it very much anymore, and it is debatable (and has been debated, at length). But in any case, I think that treating a vector of size 1 as a non-square matrix leads to results that are very difficult to interpret. Wouldn’t you agree?


See the discussion of

for various angles.

Personally, I think that using \ and / for anything but division (i.e. the inverse of *) is a bad pun, and I would prefer it if pinv and least squares happened only when the user explicitly asks for them.


The docs of \ and / do not mention the Moore-Penrose inverse, but instead say that A\B is an X such that A*X == B, which is not generally the same as pinv(A)*B. So at the very least it’s a documentation issue.
(I also agree that

)