Ok, thanks!
Is there a package to solve klu(A)x=b in parallel similar to MKLPardisoSolver?
You mean like Pardiso.jl or MUMPS.jl? Most parallel solvers will want to do their own matrix factorization for you; you won't just hand them a factorization you got elsewhere. Also, usually the expensive part is computing the factorization itself, so that's the main target for parallelization.
Yes
What's wrong with Pardiso.jl or MUMPS.jl?
When I tried to use Pardisco.jl, it seems it does not accept the KLU factorization. For example, as below:
using Pardiso, SparseArrays, LinearAlgebra, KLU
ps = MKLPardisoSolver()
A_Mat = sprand(100, 100, 0.1);
I_Mat = rand(100);    # right-hand side
V_Mat = zeros(100);   # solution vector
julia> solve!(ps, V_Mat, A_Mat, I_Mat);
julia> factor = klu(A_Mat);
julia> solve!(ps, V_Mat, factor, I_Mat);
ERROR: MethodError: no method matching solve!(::MKLPardisoSolver, ::Vector{Float64}, ::KLU.KLUFactorization{Float64, Int64}, ::Vector{Float64})
Pardiso (Pardisco became abandonware after the 1970s) provides its own LU factorization; similarly for MUMPS. Why do you need to use KLU.jl?
I clicked on the link but it seems not relevant?
I have designed a circuit-based simulation tool in Julia which uses KLU for the solution (it's fast). So I want to continue from this point and parallelize the solution.
In order to effectively parallelize a direct solver, the factorization algorithm has to change. That's why e.g. PARDISO and MUMPS have their own sparse factorization algorithms. You should try different algorithms to see which work well for you.
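To be concrete: instead of passing a KLU factorization into `solve!`, you let Pardiso compute and store its own factorization, then reuse it for each new right-hand side. A rough, untested sketch of the phased workflow (the phase constants are from Pardiso.jl; check its README for the full setup, e.g. matrix-type and iparm options, which I've omitted here):

```julia
using Pardiso, SparseArrays, LinearAlgebra

ps = MKLPardisoSolver()
A = sprand(100, 100, 0.1) + I   # hypothetical test matrix (shifted to avoid singularity)
b = rand(100)
x = zeros(100)

# Phase 1: Pardiso performs its own (parallel) analysis + numerical factorization of A
set_phase!(ps, Pardiso.ANALYSIS_NUM_FACT)
pardiso(ps, x, A, b)

# Phase 2: reuse the stored factorization for each new right-hand side
set_phase!(ps, Pardiso.SOLVE_ITERATIVE_REFINE)
pardiso(ps, x, A, b)
```

The point is that the factorization lives inside the solver object, so there is no external factor object to hand around, which is why the `solve!(ps, V_Mat, factor, I_Mat)` call above has no matching method.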
Note also that parallelizing the sparse solve is only going to work well if your matrices are huge. Unless your circuits are enormous, you should probably look elsewhere for ways to parallelize or optimize.
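If the matrices aren't huge, one common optimization in circuit simulation is to exploit the fixed sparsity pattern across timesteps: reuse KLU's symbolic analysis and only redo the numeric factorization when the values change. A sketch, assuming I'm remembering the KLU.jl refactorization API (`klu!`) correctly:

```julia
using KLU, SparseArrays, LinearAlgebra

A = sprand(100, 100, 0.1) + I   # hypothetical system matrix
F = klu(A)                      # full symbolic + numeric factorization

for step in 1:10
    # new values, same sparsity pattern (typical in transient simulation);
    # scaling keeps this toy example nonsingular
    nonzeros(A) .*= 1.1
    klu!(F, A)                  # numeric refactorization only; reuses symbolic analysis
    b = rand(100)
    x = F \ b                   # solve with the updated factorization
end
```

This is often a bigger win than parallelism for moderate-size systems, since the symbolic analysis is done once for the whole simulation.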
Thanks for your helpful comment.