Solving Ax=b for large matrix dimensions efficiently in Julia

Hi all,

I have the linear system of equations Ax=b, where A can be as large as 20000 x 20000 and is invertible. What is the most computationally efficient way of solving this problem in Julia?

I have looked into the LinearAlgebra package, and it has been reported that one can use A\b to get the solution. Is that a computationally efficient way to solve this problem? What about inv(A)*b? Is there any specific package, apart from LinearAlgebra, that can handle this problem?

Looking forward to your feedback.


x = A \ b is the general formulation that should be efficient in most cases.

If you know that A is Hermitian and positive definite, then x = cholesky(A) \ b is probably better.
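For instance, a small SPD system can be built and solved like this (the matrix B and the sizes here are just illustration values):

```julia
using LinearAlgebra

# Build a Hermitian positive-definite matrix for illustration:
# B'B is positive semidefinite, and adding I makes it positive definite.
B = rand(100, 100)
A = B' * B + I
b = rand(100)

# Cholesky needs about half the flops of LU and exploits symmetry.
x = cholesky(A) \ b
```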

If you have a fast way of computing the matrix-vector product A*x, then have a look at IterativeSolvers.jl.
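To illustrate why a fast A*x is all you need: below is a minimal conjugate-gradient sketch that only ever touches A through a matrix-vector product (IterativeSolvers.jl provides polished, production-quality versions of this and other Krylov methods — the tridiagonal operator here is just a stand-in):

```julia
using LinearAlgebra

# Minimal conjugate gradient for SPD operators: only needs matvec(v) = A*v,
# so A never has to be stored explicitly.
function cg_solve(matvec, b; tol = 1e-10, maxiter = length(b))
    x = zeros(length(b))
    r = copy(b)              # residual b - A*x, with x = 0 initially
    p = copy(r)
    rs = dot(r, r)
    for _ in 1:maxiter
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)
        x .+= alpha .* p
        r .-= alpha .* Ap
        rs_new = dot(r, r)
        sqrt(rs_new) < tol && break
        p .= r .+ (rs_new / rs) .* p
        rs = rs_new
    end
    return x
end

# Example: an SPD tridiagonal operator applied without storing a matrix
n = 100
matvec(v) = [2v[i] - (i > 1 ? v[i-1] : 0.0) - (i < n ? v[i+1] : 0.0) for i in 1:n]
b = ones(n)
x = cg_solve(matvec, b)
```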

You should almost never use x = inv(A) * b. If you need to re-solve the problem for different right-hand sides, then it is better to compute a factorization of A than its inverse.


@Per Thanks for the reply. So, one naive question: when we use A\b, which algorithm/technique does Julia use to solve it?

Yes, my equation is in a loop and I have to solve Ax=b several times for an updated b until the criterion is met. So, in such a case, it is better to use A\b rather than inv. Right?

That depends on the matrix you put in there: \ is a polyalgorithm. For square matrices, Julia will first perform an LU decomposition A=LU and use this decomposition to solve the problem. For rectangular A, the result is the minimum-norm least-squares solution, computed by a pivoted QR factorization.
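A small illustration of both branches of the polyalgorithm:

```julia
using LinearAlgebra

# Square system: \ performs an LU factorization internally
A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
x_square = A \ b
@assert x_square ≈ lu(A) \ b

# Tall (overdetermined) system: \ returns the least-squares solution
# via a pivoted QR factorization — here fitting a line through the
# points (1,1), (2,2), (3,2)
C = [1.0 1.0; 1.0 2.0; 1.0 3.0]
d = [1.0, 2.0, 2.0]
x_ls = C \ d    # [intercept, slope] minimizing norm(C*x - d)
```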


The main point being: if you just want to solve a single linear system of equations, using \(A, b) should be fine in your case. If you have multiple right-hand sides b, then it is a good idea to precompute the factorization Afact = factorize(A) and use it to solve your problem with \(Afact, b).
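A sketch of that pattern (the sizes here are arbitrary):

```julia
using LinearAlgebra

A = rand(100, 100) + 100I    # well-conditioned example matrix
b1, b2 = rand(100), rand(100)

# Pay the O(n^3) factorization cost once...
Afact = factorize(A)         # picks LU for a general square matrix

# ...then each additional right-hand side is only an O(n^2) solve.
x1 = Afact \ b1
x2 = Afact \ b2
```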

So, my problem is that once x is calculated (let’s say through A\b), both A and b are updated, and then x is updated again via A\b. The stopping criterion is when the difference between the x values in two consecutive iterations falls below a specified threshold.

Since A and b are updated during each iteration, do you think I can still use the factorize function?
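Schematically, the loop I have in mind looks like this (update_A and update_b are placeholders standing in for my actual update rules):

```julia
using LinearAlgebra

# Placeholder updates, just to make the sketch runnable:
update_A(x) = Diagonal(2.0 .+ 0.1 .* abs.(x))
update_b(x) = 1.0 .+ 0.01 .* x

function iterate_to_fixed_point(x0; tol = 1e-10, maxiter = 1000)
    x = copy(x0)
    for _ in 1:maxiter
        A = update_A(x)
        b = update_b(x)
        x_new = A \ b                   # fresh solve every iteration
        norm(x_new - x) < tol && return x_new
        x = x_new
    end
    return x
end

x = iterate_to_fixed_point(zeros(5))
```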

Factorization is quite expensive to calculate, and you would need to recalculate it in each iteration step. In this case an iterative solver, as suggested by @Per, would probably be faster. It is in general less accurate, but that is the trade-off one has to make.

The only exception would be if your update to A analytically translates to an update of the factorization, which is usually extremely fast. An example would be A = M'M + aI with identity matrix I, where a change of the scalar a affects only the (squared) singular values in a singular value decomposition of M, not the singular vectors.
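A sketch of that idea, assuming we are solving (M'M + aI) x = b for many values of a (the sizes and the helper name solve_shifted are illustration choices):

```julia
using LinearAlgebra

M = rand(50, 20)
b = rand(20)

# One SVD of M up front: M = U * Diagonal(s) * V'
F = svd(M)

# Then A = M'M + a*I = V * Diagonal(s.^2 .+ a) * V', so every new
# shift a costs only a couple of matrix-vector products, not a new
# factorization:
solve_shifted(a) = F.V * ((F.S .^ 2 .+ a) .\ (F.V' * b))
```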


You mean a dense 20000 x 20000 matrix? Does it even fit in memory?

That’s about 3 gigabytes (20000² entries at 8 bytes per Float64 ≈ 3.2 GB). Not that much by modern standards, but one should take care to avoid making unnecessary copies.

Where do these matrices come from? Often, large matrices in practice have some structure that you can exploit.


@stevengj The matrices come from power systems. I forgot to mention that matrix A is largely sparse. In fact, it is the Jacobian matrix and has to be updated during each iteration of a while loop until a certain criterion is met.

Is it symmetric? Positive definite? Indefinite?

Then use the sparse-matrix data structure. A \ b will then use a sparse-direct solver and should be fast and memory-efficient (if A is sparse enough).
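For example (a diagonally dominant tridiagonal matrix as a stand-in for your sparse Jacobian):

```julia
using LinearAlgebra, SparseArrays

n = 10_000
# Sparse, symmetric, diagonally dominant example matrix
A = spdiagm(-1 => fill(-1.0, n - 1),
             0 => fill(4.0, n),
             1 => fill(-1.0, n - 1))
b = ones(n)

# \ on a SparseMatrixCSC dispatches to a sparse-direct solver
# (SuiteSparse), never to dense LU:
x = A \ b
```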

As mentioned above, you can save the factorization object if you are doing repeated solves with the same matrix, or use a more specialized factorization like cholesky(A) if your matrix has special properties like SPD.

It sounds like you might be solving a fixed-point equation A(x) \ b(x) == x? You might consider using Anderson acceleration, e.g. via the FixedPointAcceleration package or the NLsolve package.