What is the correct order to use factorization matrices in multiplication?

I tried to compute the result of `adj_factor*V` but it gave an error. So, I tried `adj_factor.L*adj_factor.U*adj_factor.F*V`, but it gave a different result from `adj*V`.
Could you please tell me the correct order of `L`, `U`, and `P` to get the same result as `adj*V`?

using SparseArrays, LinearAlgebra, KLU

adj = sparse([1.0 0 0 0 0; 0 1 0 -1 0; 0 -1 1 0 0; 1 0 0 0 -1; 1 -1 0 0 0]);
V = [4,5,6,7,8];
adj_factor = klu(adj);

julia> adj*V
5-element Vector{Float64}:
  4.0
 -2.0
  1.0
 -4.0
 -1.0

julia> adj_factor*V
ERROR: MethodError: no method matching *(::KLU.KLUFactorization{Float64, Int64}, ::Vector{Int64})

julia> adj_factor.L*adj_factor.U*adj_factor.F*V
5-element Vector{Float64}:

What operation are you trying to perform here?

You normally use LU factors to solve systems, i.e. you do x = adj_factor \ V to find the solution x of adj * x = V.
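To make that concrete, here is a minimal sketch using the matrices from the question (assuming KLU.jl is installed): the factorization object is meant to be used with `\`, not `*`.

```julia
using SparseArrays, LinearAlgebra, KLU

adj = sparse([1.0 0 0 0 0; 0 1 0 -1 0; 0 -1 1 0 0; 1 0 0 0 -1; 1 -1 0 0 0])
V = [4.0, 5, 6, 7, 8]

adj_factor = klu(adj)
x = adj_factor \ V   # solves adj * x = V using the factorization
adj * x ≈ V          # true
```

The factorization pays off when you solve against many right-hand sides with the same matrix, since `klu(adj)` is computed once and each `\` is then cheap.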

If it helps, the following holds

(adj_factor.L * adj_factor.U + adj_factor.F)==diagm(adj_factor.Rs)\adj[adj_factor.p,adj_factor.q] # true
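Taking that identity at face value (i.e. `diagm(Rs)\adj[p,q] == L*U + F`), you can reconstruct `adj*V` from the factors by permuting `V` with `q`, applying the factors, rescaling by `Rs`, and undoing the row permutation `p`. This is purely illustrative; it is slower than computing `adj*V` directly.

```julia
using SparseArrays, LinearAlgebra, KLU

adj = sparse([1.0 0 0 0 0; 0 1 0 -1 0; 0 -1 1 0 0; 1 0 0 0 -1; 1 -1 0 0 0])
V = [4.0, 5, 6, 7, 8]
K = klu(adj)

# From adj[p, q] == Diagonal(Rs) * (L*U + F):
w = V[K.q]                                       # apply the column permutation to V
t = Diagonal(K.Rs) * (K.L * (K.U * w) + K.F * w) # adj*V in row-permuted order
y = similar(t)
y[K.p] = t                                       # undo the row permutation

y ≈ adj * V   # true
```

Note that the factors combine as `L*U + F` (a sum), not `L*U*F` (a product), which is why the expression in the original post gives a different result.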

Edit: as stevengj mentions, you normally decompose to solve a system, not to multiply the matrix by something (in that case you could just use your matrix directly).

My main goal is to find a way to accelerate the multiplication of two matrices (one or both being sparse). So, I thought that factorizing one and multiplying it by the other could be faster.

That will be significantly slower.

Ok, is there a package in Julia for fast multiplication?

Julia’s built-in sparse matrix multiplication should already be pretty good. Is there a specific reason you think faster should be possible?
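One way to check whether the multiplication itself is the bottleneck is to time it in isolation. A minimal sketch using only stdlib `@time` (the size and density here are placeholders; substitute your real matrix):

```julia
using SparseArrays, LinearAlgebra

n = 2_000
A = sprand(n, n, 0.001)   # placeholder size/density -- use your actual matrix
x = rand(n)
y = similar(x)

A * x                     # warm up (compile) before timing
@time A * x               # allocating sparse matrix-vector product
mul!(y, A, x)             # warm up
@time mul!(y, A, x)       # in-place product, reuses y, no allocation
```

Comparing the two timings also shows how much of the cost is allocation versus the arithmetic itself.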

I am profiling my code (takes 5 sec) against another program (coded in Fortran, which takes 2.13 sec). It is a time-loop simulation over 1:25e-6:0.6.
I found my code spends 1.5 sec on the multiplication line (as in my first post). So I started to wonder whether there is a faster way to do the multiplication. I tried MKLSparse.jl but saw no gain; I am not sure whether it is activated correctly in my Julia.

julia> BLAS.lbt_get_config()
└ [ILP64] mkl_rt.2.dll

How long is the Fortran spending in each part? The Fortran is likely using the same libraries as Julia for the multiplication, so I would think the time difference would be in a different part.


I cannot access the source code in Fortran. I can only see the execution time.

Other than the matrix multiplication, what are the other 3.5 seconds spent on in Julia?

The remaining time is spent on the solution x = A\b, multiplication by other (simpler) matrices, addition, and storing results.

I would suggest describing the problem in detail. It is possible to run into cases of poor performance when multiplying matrices, especially when mixing dense and sparse matrices.
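For example, if the multiplication sits inside the time loop, a common pattern is to preallocate the output and use the in-place `mul!` so each iteration avoids allocating a fresh vector. A minimal sketch (the matrix, sizes, step count, and update rule are all placeholders):

```julia
using SparseArrays, LinearAlgebra

A = sprand(1_000, 1_000, 0.01)   # placeholder for the simulation matrix
x = rand(1_000)
y = zeros(1_000)                 # preallocated output, reused every step

nsteps = 1_000                   # placeholder for the number of time steps
for _ in 1:nsteps
    mul!(y, A, x)                # y .= A * x without allocating
    @. x = x + 1e-6 * y          # placeholder update; substitute your actual step
end
```

Over hundreds of thousands of time steps, avoiding a fresh allocation per multiplication can noticeably reduce GC pressure and total runtime.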


Thanks for your tip. Actually, I just want to find a way to accelerate the multiplication, like new libraries or methods.