I am implementing several finite element procedures in Julia and am trying to switch the whole implementation over to LinearSolve. I would like to know whether it is possible to reuse the symbolic factorization of the coefficient matrix when the right-hand side also changes, and I was hoping someone could help me out.
I have a fairly complex example in which I slice a large sparse matrix (using @view) and solve the LinearProblem for given right-hand side vectors. For simplicity, this is an equivalent MWE:
```julia
using LinearAlgebra
using LinearSolve
using SparseArrays

A = sparse([1. 0 0 2 0 0;
            0 3 1 0 0 0;
            0 1 1 0 0 1;
            2 0 0 5 0 0;
            0 0 0 0 1 0;
            0 0 1 0 0 2])
id_dof = BitVector([1, 0, 1, 1, 0, 1])
A_11 = @view A[id_dof, id_dof]
b_1 = [3., 2., 6., 1.]

problem = LinearProblem(A_11, b_1)
solution = solve(problem)
```
If anyone is curious: we generally construct a large, sparse stiffness matrix (usually preconditioned) and then solve the problem over a sub-domain (the free nodes). Up to here, everything works great. After each iteration, however, we re-compute the right-hand side, update A (which does not change its sparsity pattern), and perform the next iteration/step.
Reading the LinearSolve documentation and watching various tutorials, I could only find examples that reset either A or b using the cache of the previously solved step. I tried updating both in series, but obviously it doesn't work:
```julia
A[1, 1] = 12  # Update A
b_1 = [6., 3., 1.5, 0.5]
my_problem = LinearSolve.set_A(solution.cache, A_11)  # It works up to here
my_problem = LinearSolve.set_b(solution.cache, b_1)   # As expected, this reverts A to the previous instance
```
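If I read the caching-interface part of the docs correctly, mutating the cache fields directly (rather than calling `set_A`/`set_b` on the same original cache twice) might be the intended pattern. A sketch of what I mean, re-using my MWE above (I am not certain this is the recommended API, and I slice with `A[id_dof, id_dof]` instead of `@view` here to be safe):

```julia
using LinearAlgebra
using LinearSolve
using SparseArrays

A = sparse([1. 0 0 2 0 0; 0 3 1 0 0 0; 0 1 1 0 0 1;
            2 0 0 5 0 0; 0 0 0 0 1 0; 0 0 1 0 0 2])
id_dof = BitVector([1, 0, 1, 1, 0, 1])
b_1 = [3., 2., 6., 1.]

# init returns a mutable cache; solve! reuses it across iterations
cache = init(LinearProblem(A[id_dof, id_dof], b_1))
sol = solve!(cache)

# next iteration: update both A and b through the cache properties
A[1, 1] = 12
cache.A = A[id_dof, id_dof]   # assigning A should mark the factorization stale
cache.b = [6., 3., 1.5, 0.5]  # assigning b should not trigger refactorization
sol = solve!(cache)
```

That keeps one cache alive across the whole loop, but I don't know whether assigning `cache.A` reuses the symbolic analysis or redoes the factorization from scratch.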
Since the sparsity pattern of A does not change, I was wondering whether it would be worthwhile (in terms of computational speed) to reuse that information in the next step, and whether that is possible with LinearSolve.
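For reference, what I am imagining is something like KLU's symbolic-analysis reuse. My reading of the solver docs is that `KLUFactorization` exposes a `reuse_symbolic` keyword for exactly this case (values change, pattern fixed), but I may be misunderstanding it. The matrix here is a stand-in, not my actual stiffness matrix:

```julia
using LinearAlgebra
using LinearSolve
using SparseArrays

# hypothetical well-conditioned sparse system standing in for the FE matrix
A = sprand(100, 100, 0.05) + 100I
b = rand(100)

# KLU should keep the symbolic analysis when only numerical values change
cache = init(LinearProblem(A, b), KLUFactorization(reuse_symbolic = true))
solve!(cache)

# change the values in place; the sparsity pattern stays the same
nonzeros(A) .*= 2.0
cache.A = A          # ideally a numeric refactorization only
b2 = rand(100)
cache.b = b2
sol2 = solve!(cache)
```

Is this the right knob, or is there a more general mechanism that works across factorization backends?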