I’m not sure if or where that is explained in the documentation. It’s a side effect of calling the underlying LAPACK routines: when A is rectangular, \ performs a (pivoted) QR factorization. Then:

- If A has more columns than rows, you receive the minimum-norm solution (in the Euclidean norm).
- If A has more rows than columns and has full column rank, you receive the least-squares solution.
- If A has more rows than columns but is column-rank deficient, you receive the minimum-norm least-squares solution.

That is often the desired behavior in numerical computations.
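A quick illustration of the underdetermined case (the matrix here is just a made-up example; the exact floating-point digits may differ on your machine):

```julia
using LinearAlgebra

# Underdetermined system: 2 equations, 3 unknowns.
A = [1.0 0.0 1.0;
     0.0 1.0 1.0]
b = [2.0, 3.0]

x = A \ b   # ≈ [1/3, 4/3, 5/3]

# Every solution has the form x + t*[1, 1, -1] (a null-space direction of A);
# `\` returns the one of smallest Euclidean norm, which is orthogonal to
# the null space:
dot(x, [1.0, 1.0, -1.0])   # ≈ 0
```

You can check the same thing against the pseudoinverse: `pinv(A) * b` should agree with `A \ b` here, since both give the minimum-norm solution.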
However, that behavior goes away when you work with sparse matrices, which is an unfortunate inconsistency:
```julia
julia> using SparseArrays

julia> A = [1. -3 1; 1 1 2]; u = [1, 14]; # A must have real elements

julia> B = sparse(A);

julia> B \ u
3-element Array{Float64,1}:
 10.749999999999998
  3.2499999999999996
  0.0
```
In this case, you receive a basic solution, i.e., one with as many zeros as the column-rank deficiency of A. It’s not the minimum-norm solution.
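For comparison, here is a sketch of the same system solved both ways (the approximate values in the comments are what I compute by hand from the normal equations; digits may vary):

```julia
using LinearAlgebra, SparseArrays

A = [1.0 -3.0 1.0;
     1.0  1.0 2.0]
u = [1.0, 14.0]

x_dense  = A \ u          # minimum-norm solution, ≈ [2.42, 2.06, 4.76]
x_sparse = sparse(A) \ u  # basic solution,        ≈ [10.75, 3.25, 0.0]

# Both satisfy A*x ≈ u, but the dense result has the smaller norm:
norm(A * x_dense - u)   # ≈ 0
norm(A * x_sparse - u)  # ≈ 0
norm(x_dense) < norm(x_sparse)  # true
```

If you want the minimum-norm solution from the sparse side in this full-row-rank case, one workaround is the explicit formula `A' * ((A * A') \ u)`, though that is only advisable when A*A' is small and well conditioned.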
@dpo thanks for the pointer to LAPACK. The main thing I am taking away from this is that what I am seeing is “expected behavior” and the details are best left as an exercise. I am ok with that.