Hi, I have seen the package Yeppp mentioned here (Fast logsumexp). This might be an ignorant question, but when I add Yeppp it won't install; I get errors telling me to run Pkg.build instead, which also fails, and I wonder why.
Is the package no longer preferred? Or is its performance now matched by LinearAlgebra?
I'm new to Julia (using 1.8.3) and just want to check whether Yeppp makes a difference in speeding up parts of a computation ported from numpy/scipy, specifically the step sum(log.(eigvals(Symmetric(A)))), where A is a Matrix of Float64 whose size varies per run from 2x2 to 400x400.
The repo for that package has been archived and has had no commits for four years, which is quite long in Julia time. I don’t know if it’s been superseded by something or just stopped being developed.
By the way, a simple improvement is to use sum(log, x) rather than sum(log.(x)), which allocates an intermediate array for the broadcast before summing. It should make only a tiny difference compared to all those eigenvalue computations, but a difference nonetheless.
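A quick way to see the difference yourself (x here is just a stand-in vector):

using BenchmarkTools

x = rand(1000);
@btime sum(log.($x));  # allocates a temporary array for the broadcast
@btime sum(log, $x);   # no temporary; streams through x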
I see sum(log, ex) is indeed better; at least it allocates less.
Can I ask a follow-up question regarding eigvals? I wonder if there is an obvious optimization I am missing. Would LoopVectorization.jl be something to look into for this operation too?
The basic idea is to express the function of A that you want to trace over (in this case log) as an expansion in Chebyshev polynomials P_n(x), which end up giving you the sum you want in terms of psi'*P_n(A)*psi, where psi is a random vector. These methods are crazy efficient for very large sparse matrices.
Probably for 400x400 dense matrices this doesn’t make much sense though.
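To give a flavor, here is a rough sketch of what such a stochastic Chebyshev estimator for tr(log(A)) = logdet(A) could look like. All the names (cheb_logdet, order, nvec) and the keyword bounds lmin/lmax are made up for this sketch; in practice you would get the spectral bounds from Gershgorin discs or a few Lanczos steps, and the method only pays off when multiplying by the shifted matrix B is a cheap sparse matvec:

using LinearAlgebra

function cheb_logdet(A; lmin, lmax, order = 50, nvec = 20)
    # requires 0 < lmin <= eigmin(A) and eigmax(A) <= lmax
    n = size(A, 1)
    # Chebyshev coefficients of log on [lmin, lmax], pulled back to [-1, 1]
    θ = π .* ((0:order) .+ 0.5) ./ (order + 1)
    g = log.((lmax - lmin) / 2 .* cos.(θ) .+ (lmax + lmin) / 2)
    c = [2 / (order + 1) * sum(g .* cos.(k .* θ)) for k in 0:order]
    c[1] /= 2                        # the k = 0 coefficient is halved
    # shift/scale A so its spectrum lies in [-1, 1]
    B = (2A - (lmax + lmin) * I) / (lmax - lmin)
    est = 0.0
    for _ in 1:nvec
        ψ = rand([-1.0, 1.0], n)     # Rademacher probe vector
        T0, T1 = ψ, B * ψ            # T_0(B)ψ and T_1(B)ψ
        acc = c[1] * dot(ψ, T0) + c[2] * dot(ψ, T1)
        for k in 3:(order + 1)       # three-term Chebyshev recurrence
            T0, T1 = T1, 2 .* (B * T1) .- T0
            acc += c[k] * dot(ψ, T1)
        end
        est += acc                   # ψ' * log(A) * ψ ≈ Σ_k c_k ψ' T_k(B) ψ
    end
    return est / nvec                # Hutchinson average ≈ tr(log(A))
end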
@pablosanjose @Elrod Thank you, I just tried vmapreduce() and it helps in this calculation; it seems awesome (I have pasted roughly what I ended up with below). As you mentioned earlier, it makes only a tiny difference compared to the eigvals!(Hermitian(A)) step. Do you know if it is possible to improve the eigvals computation itself?
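This is the vmapreduce version, with a 100x100 stand-in matrix (my real sizes go up to 400x400):

using LinearAlgebra, LoopVectorization

A = randn(100, 100); A = A'A;   # stand-in SPD matrix
λ = eigvals!(Hermitian(A))      # in-place variant, skips one copy of A
s = vmapreduce(log, +, λ)       # SIMD-fused log + sum, same result as sum(log, λ)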
I forgot to mention the matrices are dense, otherwise I would try out the O(N) spectral method you mentioned above. Thanks a lot for the suggestion.
Note that this is the same thing as the log-determinant, logdet(A), which is much faster than computing the eigenvalues. Since it seems like you know that your matrix A is positive definite (as otherwise log(λ) would throw a domain error for negative eigenvalues), you can often do even better still by using logdet(cholesky(Symmetric(A))):
julia> using LinearAlgebra, BenchmarkTools
julia> BLAS.set_num_threads(1) # just a single core for benchmarking
julia> A = randn(1000,1000); A = A'A; # random SPD matrix;
julia> @btime sum(log, eigvals(Symmetric($A)))
68.893 ms (11 allocations: 7.99 MiB)
5902.6715053545195
julia> @btime logdet($A)
18.700 ms (3 allocations: 7.64 MiB)
5902.671505354289
julia> @btime logdet(cholesky(Symmetric($A)))
11.704 ms (2 allocations: 7.63 MiB)
5902.671505354335
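The Cholesky trick works because A = L*L' gives det(A) = prod(diag(L))^2, so logdet(A) = 2 * sum(log, diag(L)); you can verify the identity directly:
julia> C = cholesky(Symmetric(A));
julia> logdet(C) ≈ 2 * sum(log, diag(C.L))
true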
The log-determinant is a useful function with lots of beautiful properties!