This might be a naive question, and the title basically says it already: I am trying to use LinearSolve.jl but get an error for non-square matrices. Assume I have a matrix A of dimension n×p and a vector b of dimension n; I want to find x of dimension p such that Ax = b.

Indeed, calling A \ b works and immediately returns the correct x. But when I do:

using LinearSolve
# for example, a tall matrix with n = 4 rows and p = 2 columns:
A = randn(4, 2); b = randn(4)
lin_prob = LinearProblem(A, b)
solution = LinearSolve.solve(lin_prob)

I get the error

ERROR: DimensionMismatch: matrix is not square: ...

In the documentation of LinearSolve.jl, I also did not find any examples with non-square matrices. This surprises me, because LinearSolve.jl is advertised as a replacement for the \ operator, which works fine in my example.

Hence, I suspect that I am simply not using LinearSolve correctly? Or is it really not possible to use it for non-square matrices? If so, why is that, and are there plans to add this feature in the future? Thanks a lot!

I think the issue may be that solve chooses a default algorithm that requires a square matrix. Try specifying an algorithm explicitly; it's the second positional argument to solve.

If this helps, maybe open an issue regarding the default algorithm choice.

A \ b does a little bit of magic for non-square matrices. For a "tall" matrix, it automatically switches to finding the least-squares solution (by pivoted QR). For a "wide" matrix, it switches to finding the minimum-norm solution.
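A small self-contained sketch of that behavior (the matrices here are my own, chosen only for illustration):

```julia
using LinearAlgebra

# Tall (overdetermined) system: `\` returns the least-squares solution
A_tall = [1.0 0.0; 0.0 1.0; 1.0 1.0]
b_tall = [1.0, 2.0, 3.0]
x_ls = A_tall \ b_tall    # → [1.0, 2.0] (up to roundoff), minimizing ‖A_tall*x - b_tall‖

# Wide (underdetermined) system: `\` returns the minimum-norm solution
A_wide = [1.0 0.0 1.0; 0.0 1.0 1.0]
b_wide = [1.0, 2.0]
x_mn = A_wide \ b_wide    # → [0.0, 1.0, 1.0] (up to roundoff), same as pinv(A_wide) * b_wide
```

The second result is the smallest-norm x among the infinitely many solutions of the wide system, which is what the pseudoinverse would give.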

I think with LinearSolve.jl you may have to give an appropriate algorithm more explicitly in these cases?

Thanks for the replies. I tried to follow your advice and checked the list of solver methods for least-squares and minimum-norm approaches. I found KrylovJL_LSMR (for least squares) and KrylovJL_CRAIGMR (for minimum norm), and indeed the following code works:

using LinearSolve
lin_prob = LinearProblem(A, b)
x_least_squares = LinearSolve.solve(lin_prob, LinearSolve.KrylovJL_LSMR())
x_least_norm = LinearSolve.solve(lin_prob, LinearSolve.KrylovJL_CRAIGMR())

All other methods that I tried returned errors, though I did not try all of them.

I did not find a method that does what @stevengj described for A \ b (first checking whether the matrix is tall or wide and then acting accordingly).
One could of course implement this oneself, but someone uninitiated might not know that this is the best approach, so I will open an issue requesting something like this as the default for non-square matrices.
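In the meantime, such a shape-based dispatch can be written by hand. The helper below is hypothetical (`solve_rectangular` is not part of LinearSolve.jl), just a sketch of the idea:

```julia
using LinearSolve

# Hypothetical helper: pick a Krylov method based on the matrix shape,
# mimicking what `\` does for rectangular matrices.
function solve_rectangular(A, b)
    prob = LinearProblem(A, b)
    # tall (or square): least squares via LSMR; wide: minimum norm via CRAIGMR
    alg = size(A, 1) >= size(A, 2) ? LinearSolve.KrylovJL_LSMR() :
                                     LinearSolve.KrylovJL_CRAIGMR()
    return LinearSolve.solve(prob, alg)
end
```

For a full-column-rank tall matrix, `solve_rectangular(A, b).u` should agree with `A \ b` up to the Krylov solver's tolerance.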

Certain Krylov methods for rectangular problems, including LSQR and LSMR, work on both over- and underdetermined systems. Whenever A does not have full column rank (which is true, in particular, when A is underdetermined), they return (an approximation of) the minimum-norm least-squares solution.
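To illustrate that last point with Krylov.jl directly (the example matrix is mine, with a deliberately repeated column so that A lacks full column rank):

```julia
using Krylov, LinearAlgebra

# Rank-deficient wide system: columns 2 and 3 are identical,
# so A does not have full column rank.
A = [1.0 1.0 1.0; 0.0 2.0 2.0]
b = [1.0, 2.0]

x, stats = lsmr(A, b)

# LSMR converges to (an approximation of) the minimum-norm
# least-squares solution, i.e. the pseudoinverse solution:
@assert isapprox(x, pinv(A) * b; atol=1e-6)
```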

Above, you called CRAIGMR, which is another implementation of LSMR. If b is in the range of A, you can also call CRAIG.

See the documentation of Krylov.jl for more details.