# What is the correct order to use factorization matrices in multiplication?

I tried to compute `adj_factor*V` (where `adj_factor = klu(adj)`), but it gave an error. So I tried `adj_factor.L*adj_factor.U*adj_factor.F*V`, but it gave a different result from `adj*V`.
Could you please tell me the correct way to combine the factors `L`, `U`, `F` (and the permutations) so that I get the same result as `adj*V`?

```julia
using SparseArrays, LinearAlgebra, KLU

adj = sparse([1.0 0 0 0 0; 0 1 0 -1 0; 0 -1 1 0 0; 1 0 0 0 -1; 1 -1 0 0 0]);
V = [4,5,6,7,8];

julia> adj*V
5-element Vector{Float64}:
  4.0
 -2.0
  1.0
 -4.0
 -1.0

julia> adj_factor = klu(adj);

julia> adj_factor*V
ERROR: MethodError: no method matching *(::KLU.KLUFactorization{Float64, Int64}, ::Vector{Int64})

julia> adj_factor.L*adj_factor.U*adj_factor.F*V
5-element Vector{Float64}:
 -8.0
 -7.0
 -7.0
 -8.0
  0.0
```

What operation are you trying to perform here?

You normally use LU factors to solve systems, i.e. you do `x = adj_factor \ V` to find the solution `x` of `adj * x = V`.
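Using the matrices from the question, a minimal sketch of that workflow looks like this (the values in the last comment are just the solution of this particular system):

```julia
using SparseArrays, LinearAlgebra, KLU

adj = sparse([1.0 0 0 0 0; 0 1 0 -1 0; 0 -1 1 0 0; 1 0 0 0 -1; 1 -1 0 0 0])
V = [4.0, 5, 6, 7, 8]

adj_factor = klu(adj)   # factor once
x = adj_factor \ V      # then solve adj * x = V
adj * x ≈ V             # true: x really solves the system
# x == [4.0, -4.0, -9.0, 2.0, -3.0]
```

The point of factoring is that you pay the factorization cost once and can then solve against many right-hand sides cheaply.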

If it helps, the following holds

```julia
(adj_factor.L * adj_factor.U + adj_factor.F) == diagm(adj_factor.Rs) \ adj[adj_factor.p, adj_factor.q] # true
```
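That identity can be checked directly on the matrix from the question (a sketch; `F.F` holds the off-block-diagonal entries that KLU keeps out of `L` and `U`):

```julia
using SparseArrays, LinearAlgebra, KLU

adj = sparse([1.0 0 0 0 0; 0 1 0 -1 0; 0 -1 1 0 0; 1 0 0 0 -1; 1 -1 0 0 0])
F = klu(adj)

lhs = Matrix(F.L * F.U + F.F)              # reassembled factorization
rhs = diagm(F.Rs) \ Matrix(adj[F.p, F.q])  # row-scaled, permuted original
lhs ≈ rhs                                  # true, up to round-off
```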

Edit: as stevengj mentions, normally you decompose a matrix to solve a system, not to multiply it with something (for multiplication you can use the matrix directly).

My main goal is to find a way to accelerate the multiplication of two matrices (one or both of which are sparse). So I thought that factorizing one and multiplying it by the other could be faster.

That will be significantly slower.

Ok, is there a package in Julia for fast multiplication?

Julia’s built-in sparse matrix multiplication should already be pretty good. Is there a specific reason you think faster is possible?
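One thing worth checking regardless of library: if the product is computed inside the time loop, preallocating the output and using the in-place `mul!` from LinearAlgebra avoids allocating a fresh result vector on every iteration (a sketch with made-up sizes, not your actual model):

```julia
using SparseArrays, LinearAlgebra

A = sprand(1000, 1000, 0.01)  # stand-in sparse matrix
x = rand(1000)
y = similar(x)                # preallocated output buffer

mul!(y, A, x)                 # in-place: overwrites y with A * x
y ≈ A * x                     # same result as the allocating version
```

Inside a loop, reusing `y` like this every step removes the per-iteration allocation and the associated GC pressure.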

I am profiling my code (which takes 5 s) against another program (coded in Fortran, which takes 2.13 s). It is a time-loop simulation over `1:25e-6:0.6`.
I found that my code spends 1.5 s on the multiplication line (as in my first post). So I started to wonder whether there is a faster way to do the multiplication. I tried MKLSparse.jl but saw no gain; I am not sure it is activated correctly in my Julia.

```julia
julia> BLAS.lbt_get_config()
LinearAlgebra.BLAS.LBTConfig
Libraries:
└ [ILP64] mkl_rt.2.dll
```

How long is the Fortran spending in each part? The Fortran is likely using the same libraries as Julia for the multiplication, so I would think the time difference would be in a different part.


I cannot access the source code in Fortran. I can only see the execution time.

Other than the matrix multiplication, what are the other 3.5 seconds spent on in Julia?

The remaining time goes to the solve `x = A\b`, multiplications with other simple matrices, additions, and storing results.

I would suggest describing the problem in detail. It is possible to find examples of poor performance when multiplying matrices, especially when mixing dense and sparse ones.


Thanks for your tip. Actually, I just want to find a way to accelerate the multiplication, e.g. a new library or method.

For solving `Ax=b` using KLU: I do not know how it works after this point (I could not isolate `x`):

```julia
A_factor = klu(A);
(A_factor.L * A_factor.U + A_factor.F)*x = diagm(A_factor.Rs)\b[A_factor.p, A_factor.q]
```

So it may be using a completely different algorithm? Hard to compare with, then…


Just do `x = A_factor \ b`. (This is the first example in the KLU.jl README.)

Factoring matrices loses sparsity, so it will slow down multiplication. The reason to factor matrices is to do other things, like solving Ax=b.
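The fill-in is easy to see by counting nonzeros before and after factoring (a sketch with a random matrix, so the exact counts vary from run to run):

```julia
using SparseArrays, LinearAlgebra

A = sprand(1000, 1000, 0.005) + I  # random sparse matrix, kept nonsingular
F = lu(A)                          # sparse LU (UMFPACK)
nnz(A) < nnz(F.L) + nnz(F.U)       # true in practice: the factors are denser
```

So multiplying by `L` and `U` separately touches more nonzeros than multiplying by `A` itself, on top of doing two products instead of one.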

Before you can optimize your code, you’re really going to have to learn a bit about how sparse-matrix algorithms work, in order to get some sense of what is fast and what is slow.


For the LU factorization, you should also generally do `F \ b` with `F = lu(A)` in Julia. This is mathematically equivalent to `U \ (L \ (P*b))` but is easier and probably more efficient (it uses specialized lower-level routines and works more in-place, with fewer temporary vectors).
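For a small dense matrix you can check that equivalence directly (`F.P` is the permutation matrix of the dense LU; the numbers here are made up for illustration):

```julia
using LinearAlgebra

A = [4.0 3.0; 6.0 3.0]
b = [10.0, 12.0]

F = lu(A)
x1 = F \ b                    # preferred: specialized in-place solves
x2 = F.U \ (F.L \ (F.P * b))  # the same solve spelled out factor by factor
x1 ≈ x2                       # true; both solve A * x = b
```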