From profiling my code in Matlab and Julia, roughly 97% of the time is spent in polyeig for Julia, compared to ~85% for Matlab. I think it is due to a type inference problem.
MWE:
using NonlinearEigenproblems
using Traceur
A0 = [1 0 ; 0 1]
A1 = [im 0 ; 0 im]
pep = PEP([A0, A1])           # polynomial eigenvalue problem A0 + λ*A1
λ, v = polyeig(pep)
@code_warntype polyeig(pep)   # inspect inferred types
@trace polyeig(pep)           # look for dynamic dispatch
@code_warntype returns:
MethodInstance for NonlinearEigenproblems.NEPSolver.polyeig(::PEP)
from polyeig(pep::PEP, vargs...) in NonlinearEigenproblems.NEPSolver at C:\Users\hanevyn\.julia\packages\NonlinearEigenproblems\S9WLC\src\method_companion.jl:86
Arguments
#self#::Core.Const(NonlinearEigenproblems.NEPSolver.polyeig)
pep::PEP
vargs::Tuple{}
Body::Tuple{Any, Any}
1 ─ %1 = Core.tuple(NonlinearEigenproblems.NEPSolver.ComplexF64, pep)::Core.PartialStruct(Tuple{DataType, PEP}, Any[Type{ComplexF64}, PEP])
│ %2 = Core._apply_iterate(Base.iterate, NonlinearEigenproblems.NEPSolver.polyeig, %1, vargs)::Tuple{Any, Any}
└── return %2
Meanwhile, the first few lines of @trace give lots of dynamic dispatch warnings.
I have two questions:
Is this the reason for the slower performance?
Is it something I am doing wrong or should I raise the issue with the NEP-PACK developers?
I’m a new Julia user and I apologize if I am asking this in the wrong place.
Cheers.
Is this a representative benchmark for your actual use case? If you are allocating zillions of 2x2 matrices, it will have a lot of overhead! You are better off writing in terms of StaticArrays in that case.
(I’m not sure if NEP-PACK has good support for StaticArrays, but it is trivial to rewrite a 2x2 polynomial eigenproblem as an ordinary linear eigenproblem; see the sketch below. Indeed, isn’t your problem already linear? Why do you need NEP-PACK for the first-order case? A 2x2 linear eigenproblem will be blazingly fast with StaticArrays.)
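To make that concrete, here is a minimal sketch of the linearization using the matrices from the MWE above (plain arrays for clarity; with StaticArrays you would build them as SMatrix instead, assuming the rest of your code accepts them):

using LinearAlgebra

A0 = [1.0 0; 0 1.0]
A1 = [1.0im 0; 0 1.0im]

# (A0 + λ*A1) v = 0  is the generalized linear eigenproblem
# A0 v = λ (-A1) v, which LAPACK solves directly:
λ = eigvals(A0, -A1)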
My code is finding points on a neutral curve for the Blasius flow. So I have a quadratic eigenvalue problem where the matrices are redefined in a loop for different temporal wavenumbers and Reynolds numbers.
The code above was just the simplest example I could come up with that gave the same dynamic dispatch warnings.
How big are the matrices? If they are tiny, StaticArrays is applicable. If they are huge, dispatch is probably not a significant factor.
For a quadratic eigenvalue problem, you might just want to use the companion linearization directly (a sketch follows below). Pre-allocating your matrices and updating them in-place might also be a good idea.
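Roughly like this: for (A0 + λ*A1 + λ^2*A2) v = 0, the first companion form yields a generalized linear eigenproblem of twice the size. A minimal sketch (the function name quadeig and the particular companion form chosen here are illustrative, not NEP-PACK API):

using LinearAlgebra

# Linearize (A0 + λ*A1 + λ^2*A2) v = 0 with z = [v; λ*v]:
#   [ 0    I  ]         [ I  0  ]
#   [ -A0  -A1 ] z = λ  [ 0  A2 ] z
function quadeig(A0, A1, A2)
    n = size(A0, 1)
    T = promote_type(eltype(A0), eltype(A1), eltype(A2))
    Z = zeros(T, n, n)
    Id = Matrix{T}(I, n, n)
    Lmat = [Z Id; -A0 -A1]
    Mmat = [Id Z; Z A2]
    eigen(Lmat, Mmat)   # 2n eigenvalues; first n rows of each vector give v
end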
If you are finding points on a curve, you might consider whether your problem can be formulated as numerical continuation.
Cheers mate, I’ll have a look.
My matrices range from 50 x 50 to 100 x 100 (for other related problems), so I’m not sure static arrays will be that helpful.
The NEP-PACK code works, which is the main thing. I’m just curious to see if I can attain some of the high performance people talk about in Julia.
Thanks for your help.
If most of your time is spent in generic dense linear algebra (e.g. eigenproblems) for reasonably large matrices, you’re generally not going to see much speedup: essentially all languages (Matlab, Python, R, Julia) call the same library (LAPACK) by default, so they are going to be the same speed if you link the same version of LAPACK/BLAS (e.g. you might try MKL.jl) and use the same number of threads.
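For instance (a sketch; MKL.jl swaps the BLAS/LAPACK backend on Julia 1.7+, and the thread count here is arbitrary):

using MKL                 # replace OpenBLAS with MKL
using LinearAlgebra
BLAS.set_num_threads(4)   # match the thread count Matlab is using
BLAS.get_config()         # confirm which backend is active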
For relatively small but not tiny matrices (50x50 to 100x100 qualifies), you can sometimes eke out some speed gains by working in-place, which Julia allows you to do at the price of calling lower-level routines: e.g. updating your matrices in-place, or better yet updating the companion matrix directly, and then calling eigen! (see the sketch below).
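Something along these lines (a sketch only: fill_pencil! is a hypothetical placeholder for however you assemble the companion pencil at each parameter value, and note that eigen! overwrites its arguments, so they must be refilled every iteration):

using LinearAlgebra

n = 50
T = ComplexF64
Lmat = Matrix{T}(undef, 2n, 2n)   # companion pencil, allocated once
Mmat = Matrix{T}(undef, 2n, 2n)
for Re in (520.0, 540.0, 560.0)   # hypothetical Reynolds numbers
    fill_pencil!(Lmat, Mmat, Re)  # update in-place (placeholder)
    F = eigen!(Lmat, Mmat)        # destructive solve: reuses the same storage
    # ... extract the eigenvalue of interest ...
end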
(You’re right that this is too big to benefit from StaticArrays.)