Hello! I am using Julia to write a finite volume CFD code, which requires solving large (40,000 x 40,000) block-sparse linear systems. Currently, I am using the PETSc interface provided by PETSc.jl, since its GMRES routine notably outperforms the IterativeSolvers.jl GMRES routine (at least with my implementation of the latter). However, there appears to be an issue with how I am constructing the sparse block matrices: sometimes, but not always, I get errors when adjusting the values of my matrix and creating a PETSc solver for it.
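For context, the kind of IterativeSolvers.jl comparison I mean is roughly the following (a simplified sketch with made-up sizes, not my actual code):

using SparseArrays, IterativeSolvers

# Build a block-diagonal sparse matrix from dense nEq x nEq blocks
nEq, nBlocks = 4, 100
A = blockdiag([sparse(randn(nEq, nEq)) for _ in 1:nBlocks]...)
b = randn(nEq * nBlocks)

# Unpreconditioned GMRES from IterativeSolvers.jl
x = gmres(A, b)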
Here is a trimmed version of the error message (the full one is many, many lines long):
[0]PETSC ERROR: #5998 PetscCommDuplicate() at /workspace/srcdir/petsc-3.16.6/src/sys/objects/tagm.c:117
[0]PETSC ERROR: #5999 PetscHeaderCreate_Private() at /workspace/srcdir/petsc-3.16.6/src/sys/objects/inherit.c:63
[0]PETSC ERROR: #6000 KSPCreate() at /workspace/srcdir/petsc-3.16.6/src/ksp/ksp/interface/itcreate.c:697
ERROR: PETSc.PetscError(1)
Stacktrace:
[1] #174
@ C:\Users\caleb\.julia\packages\PETSc\LSdRz\src\ksp.jl:78 [inlined]
[2] with(f::PETSc.var"#174#183"{MPI.Comm, PETSc.KSP{Float64, PETSc.PetscLibType{Float64, Int64, String}}}, opts::PETSc.Options{PETSc.PetscLibType{Float64, Int64, String}})
@ PETSc C:\Users\caleb\.julia\packages\PETSc\LSdRz\src\options.jl:191
[3] ([0]PETSC ERROR: #6001 PetscCommDestroy() at /workspace/srcdir/petsc-3.16.6/src/sys/objects/tagm.c:193
[0]PETSC ERROR: #6002 PetscHeaderDestroy_Private() at /workspace/srcdir/petsc-3.16.6/src/sys/objects/inherit.c:121
[0]PETSC ERROR: #6003 MatDestroy() at /workspace/srcdir/petsc-3.16.6/src/mat/interface/matrix.c:1307
PETSc.KSP{Float64})(comm::MPI.Comm; kwargs::Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:pc_type, :ksp_type, :ksp_monitor), Tuple{String, String, Bool}}})
@ PETSc C:\Users\caleb\.julia\packages\PETSc\LSdRz\src\ksp.jl:77
[4] PETSc.KSP(A::PETSc.MatSeqAIJ{Float64}, P::PETSc.MatSeqAIJ{Float64}; kwargs::Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:pc_type, :ksp_type, :ksp_monitor), Tuple{String, String, Bool}}})
@ PETSc C:\Users\caleb\.julia\packages\PETSc\LSdRz\src\ksp.jl:268
[5] SolveArray(A::PETSc.MatSeqAIJ{Float64}, nEq::Int64, nBlocks::Int64)
@ Main c:\Users\caleb\Documents\Classes_Grad.5\FVM\gitlab\FVM_US2D\temp_tests\petsc_parallel.jl:31
[6] Test()
@ Main c:\Users\caleb\Documents\Classes_Grad.5\FVM\gitlab\FVM_US2D\temp_tests\petsc_parallel.jl:61
[7] top-level scope
@ c:\Users\caleb\Documents\Classes_Grad.5\FVM\gitlab\FVM_US2D\temp_tests\petsc_parallel.jl:73
I believe these errors occur because PETSc expects a row-sequential adjustment of the values, whereas I am treating the matrix in a block-sparse fashion. I could reformulate the routines to operate row-sequentially and see if the issue resolves, but as the systems I am solving get larger I hope to parallelize the adjustment of the matrix values, which is inherently non-sequential. Since the failures seem to be random, I am struggling to debug the issue properly, and I suspect the root cause is a gap in my understanding of how PETSc manages matrix memory on the backend. A toy example following the same procedure as my CFD code is provided at the bottom of this post. I am hoping somebody can help me understand what is causing this issue, or suggest a better approach for creating and solving the system. This is my first larger Julia project and my first venture into PETSc, so I appreciate any assistance people can provide! Please let me know if there is any more information that would be helpful.
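To clarify what I mean by a row-sequential adjustment: I am picturing something like this hypothetical variant of the PopulateArray! routine in the code below (the name PopulateArrayByRow! and the structure are just for illustration; I have not confirmed this avoids the error):

# Hypothetical row-by-row version of PopulateArray! (illustration only)
function PopulateArrayByRow!(A, nEq, nBlocks)
    for i in 1:nBlocks
        # Global row/column indices of this diagonal block
        block_ind = ((i-1)*nEq + 1):(i*nEq)
        vals = randn(Float64, nEq, nEq)
        for k in 1:nEq
            row = block_ind[k]
            # Write a single row of the block; the global row index
            # increases monotonically across the whole loop nest
            A[row:row, block_ind] = vals[k:k, :]
        end
    end
end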
Here is the code. I usually have to run it multiple times before it errors. I was initially adjusting A in parallel (hence the FLoops dependency), and I originally thought the threading was causing the issues I am seeing; however, the problems still occur in serial. (A rough sketch of the threaded variant is included after the code for reference.)
using FLoops
using PETSc
function PopulateArray!(A, nEq, nBlocks)
    # Adjusts A entries along the block diagonal
    for i in 1:nBlocks
        # Get global indices of this block in the array
        block_ind = ((i-1)*nEq + 1):(i*nEq)
        # Assign random values to the sub-matrix
        A[block_ind, block_ind] = randn(Float64, nEq, nEq)
    end
end
function SolveArray(A, nEq, nBlocks)
    # Adjust values of the A matrix
    PopulateArray!(A, nEq, nBlocks)
    # Assemble the A matrix
    PETSc.assemble(A)
    # Create right-hand-side vector (A*x = rhs)
    rhs = randn(Float64, nEq*nBlocks)
    # Create a solver
    ksp = PETSc.KSP(A; pc_type="ilu", ksp_type="gmres", ksp_monitor=false)
    # Solve the system
    x = ksp \ rhs
    # Destroy the solver to free up memory
    PETSc.destroy(ksp)
    return x
end
function Test()
    # Creates a block-diagonal system of equations and solves it with PETSc
    # ---------------------------------------------------------------------
    # Number of solver iterations to run
    nit = 1000
    # System-of-equations parameters
    nEq = 4       # Size of sub-matrices
    nBlocks = 100 # Number of sub-matrices
    # Initialize PETSc and the block-diagonal sparse array
    petsclib = PETSc.petsclibs[1]
    PETSc.initialize(petsclib)
    A = PETSc.MatSeqAIJ{Float64}(nEq*nBlocks, nEq*nBlocks, ones(Int64, nEq*nBlocks) .* nEq)
    println("Starting Iterations")
    for i in 1:nit
        # Pass the array to the solver
        sol = SolveArray(A, nEq, nBlocks)
    end
    println("Iterations Concluded")
    # Finalize PETSc
    PETSc.destroy(A)
    PETSc.finalize(petsclib)
end

Test()
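For completeness, the threaded adjustment I mentioned above looked roughly like this (a sketch, not my exact code; the name PopulateArrayThreaded! is just for illustration, and it relies on the `using FLoops` at the top of the listing):

# Rough sketch of the FLoops-threaded version of PopulateArray!
function PopulateArrayThreaded!(A, nEq, nBlocks)
    @floop ThreadedEx() for i in 1:nBlocks
        block_ind = ((i-1)*nEq + 1):(i*nEq)
        # Each task writes a disjoint diagonal block of A
        A[block_ind, block_ind] = randn(Float64, nEq, nEq)
    end
end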