# Dropping the symbolic phase of a factorization?

How can I reuse the symbolic phase?

(Posted and answered here: https://github.com/JuliaSparse/KLU.jl/issues/6)


Note that if you use LinearSolve.jl in its caching mode, it will cache the symbolic phase automatically.

http://linearsolve.sciml.ai/dev/tutorials/caching_interface/


Thanks for your note.
Can I use KLU in the caching mode of `LinearSolve.jl`?
If so, could you please guide me using my previous MWE?

Just do exactly what it shows in the tutorial I linked.

http://linearsolve.sciml.ai/dev/tutorials/caching_interface/

```julia
using LinearSolve, SparseArrays

n = 100
A = sprand(n, n, 0.1)
b1 = rand(n); b2 = rand(n)
prob = LinearProblem(A, b1)

linsolve = init(prob, KLUFactorization())

sol1 = solve(linsolve) # symbolic factorization, then numeric factorization, then backsolve
sol1.u

linsolve = LinearSolve.set_b(sol1.cache, b2)
sol2 = solve(linsolve) # only does the backsolve
sol2.u

A.nzval .= rand(length(A.nzval)) # new values, same sparsity pattern
linsolve = LinearSolve.set_A(sol2.cache, A)
sol3 = solve(linsolve) # new numeric factorization and backsolve, no new symbolic factorization
```

Note that as described in the documentation:

http://linearsolve.sciml.ai/dev/solvers/solvers/#SuiteSparse.jl

`KLUFactorization(reuse_symbolic=true)` is the default, so changing `A` reuses the symbolic factorization (which assumes the sparsity pattern is unchanged); pass `KLUFactorization(; reuse_symbolic=false)` to make it redo the symbolic factorization whenever `A` changes.

So the behavior you want is the default (for clear reasons).
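For example, a minimal sketch of the two modes (following the linked docs; the shifted diagonal is just my way of keeping the random matrix most likely nonsingular):

```julia
using LinearSolve, SparseArrays

n = 100
A = sprand(n, n, 0.1) + spdiagm(0 => ones(n))  # shift diagonal so A is (most likely) nonsingular
b = rand(n)
prob = LinearProblem(A, b)

# Default: the symbolic factorization is cached and reused when A changes
# (valid only while the sparsity pattern stays the same).
cache = init(prob, KLUFactorization())

# Opt out: redo the symbolic factorization whenever A changes.
cache_nosym = init(prob, KLUFactorization(; reuse_symbolic = false))

sol = solve(cache)
```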


Run it with a different example. I updated it so it’s a sparse matrix which is (most likely) nonsingular (though there’s a chance any random sparse matrix is singular).


Drop the `if t == 1` thing; I’m not sure what you’re trying to do with that. It’s just: if you make a new `LinearProblem`, i.e. everything is brand new, then it does a new symbolic factorization (of course, it has to), a numeric factorization, and a backsolve. If you only change `A`, it does a new numeric factorization and backsolve (provided the sparsity pattern is the same). If you only change `b`, it only does the backsolve. I.e., it only does the computations it has to do.


Yes


Thank you for the confirmation.

However, following this procedure with the caching interface, I didn’t get the same results as without the caching interface.

Post an MWE

@ChrisRackauckas Here it is:

```julia
using LinearSolve, SparseArrays, LinearAlgebra

function s1()
    B = ones(10)
    A = spzeros(10, 10)
    A[diagind(A)] .= 1
    t_start = 1
    t_end = 10
    global R_Record1 = zeros(length(B), size(t_start:t_end, 1))

    result = []
    linsolve = []
    A_old = deepcopy(A)
    iter = 0

    for t in t_start:t_end
        iter += 1

        B[t] = t
        if iseven(t)
            A[t, t] = t
        else
            A[end, t] = 10
        end

        if A_old != A
            if t == 1
                prob = LinearProblem(A, B)
                linsolve = init(prob, KLUFactorization())
                result = solve(linsolve)
            end

            if iseven(t)
                linsolve = LinearSolve.set_A(result.cache, A)
            else
                prob = LinearProblem(A, B)
                linsolve = init(prob, KLUFactorization())
                result = solve(linsolve)
            end

            A_old = deepcopy(A)
        end

        linsolve = LinearSolve.set_b(result.cache, B)
        result = solve(linsolve)
        R_Record1[:, iter] = result.u
    end # for t in t_start:t_end
end # function s1()

function s2()
    B = ones(10)
    A = spzeros(10, 10)
    A[diagind(A)] .= 1
    t_start = 1
    t_end = 10
    global R_Record2 = zeros(length(B), size(t_start:t_end, 1))

    result = []
    linsolve = []
    iter = 0

    for t in t_start:t_end
        iter += 1

        B[t] = t
        if iseven(t)
            A[t, t] = t
        else
            A[end, t] = 10
        end

        prob = LinearProblem(A, B)
        linsolve = init(prob, KLUFactorization())
        #linsolve = LinearSolve.set_A(result.cache, A)
        #linsolve = LinearSolve.set_b(result.cache, B)
        result = solve(linsolve)
        R_Record2[:, iter] = result.u
    end # for t in t_start:t_end
end # function s2()

s1()
s2()

julia> R_Record1 == R_Record2
false
```

Let me take a look at this tonight. It’s possible there’s an error, or that I need to check the matrices more on input. The refactor capability was finicky at first.


Thanks, any feedback on this issue? One more question: given that the results are correct, using `klu` and `klu!` directly rather than the caching mode was faster in my case; is that always true?

@Wimmerer Could you kindly help check the above MWE, and here if possible:

Wait, you’re modifying the structure of `A`, right? This invalidates the symbolic factorization of `A`, for obvious reasons.

I am struggling to figure out how this is allowed, since I check that the index vectors are equal. I will ensure that the error is properly thrown in the next release of KLU. Oh, never mind, I see what you’re doing now.

PS: Sorry I lost track of this, I don’t check Discourse as much as I should.

E2: Can you get me an example of this with just two matrices `A`? I don’t think that script is doing what is intended. If I do a much simpler loop where I compare `klu!(K, A) \ B == klu(A) \ B` after changing `A`, I get correct results.
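For reference, the kind of simpler check I mean is roughly the following (a sketch under my own assumptions, where only the numeric values of `A` change, not its pattern):

```julia
using KLU, SparseArrays

n = 10
A = spdiagm(0 => ones(n)) + sprand(n, n, 0.2)  # (most likely) nonsingular
B = rand(n)

K = klu(A)                        # symbolic + numeric factorization
x1 = K \ B

A.nzval .= rand(length(A.nzval))  # new values, identical sparsity pattern
klu!(K, A)                        # numeric refactorization only
x2 = K \ B

x2 ≈ klu(A) \ B                   # agrees with a from-scratch factorization
```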

Thanks for your reply. Actually, I really do need this kind of loop in my work. Maybe the following comments on function `s1` will clarify my aim directly.
At each time step, I want to check which of two cases occurs:
- Case 1: `A` needs a full refactorization (when `!iseven(t)`, since the sparsity pattern changes)
- Case 2: only the values of `A` change (when `iseven(t)`)
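With plain KLU.jl, that two-case loop can be sketched like this (my own sketch of the idea, not the original code: a full `klu` when the pattern changes, `klu!` when only values change):

```julia
using KLU, SparseArrays

function two_case_loop(n = 10)
    A = spdiagm(0 => ones(n))  # diagonal entries are stored from the start
    B = ones(n)
    K = klu(A)                 # initial symbolic + numeric factorization
    X = zeros(n, n)
    for t in 1:n
        B[t] = t
        if iseven(t)
            A[t, t] = t        # Case 2: value change only (entry already stored)
            klu!(K, A)         # numeric refactorization, symbolic phase reused
        else
            A[end, t] = 10.0   # Case 1: new structural nonzero
            K = klu(A)         # full refactorization, including symbolic phase
        end
        X[:, t] = K \ B
    end
    return X
end
```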
