Just a quick announcement that I’ve recently packaged up a reasonably clean implementation of the adaptive cross approximation (ACA), which is a workhorse tool for me when assembling low-rank approximations of matrices that I can’t afford to instantiate. The package is available as ACAFact.jl. I haven’t registered it yet in case somebody here has suggestions for a better name or something, but I figure I’ll register it pretty soon, so let me know if you have thoughts!
The package is pretty simple and exports a single function, aca. The algorithm itself is easy to implement, and the package has no dependencies and (I hope) pretty readable source code. Here is a brief demo ripped from the README:
using LinearAlgebra, ACAFact
# Make a low rank matrix:
pts1 = range(0.0, 1.0, length=100)
pts2 = range(0.0, 1.0, length=110)
K = [exp(-abs2(pj - pk)) for pj in pts1, pk in pts2]
@show rank(K) # 10
# rank 40 approximation K \approx U*V':
(U, V) = aca(K, 40)
# _max_ rank of 40, but stops early if opnorm(K - U2*V2') < 1e-8
(U2, V2) = aca(K, 40, tol=1e-8)
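For the curious, the core ACA iteration really is just a few lines. Here is a minimal, unoptimized sketch of my own for dense matrices, using full pivoting on an explicit residual (so not how ACAFact does it internally, since the whole point there is to avoid instantiating the matrix):

```julia
# Minimal full-pivoted ACA on a dense matrix: repeatedly pick the largest
# entry of the residual, store the scaled pivot column and the pivot row,
# and subtract the resulting rank-one cross update.
function aca_sketch(K::AbstractMatrix, maxrank::Integer; tol=1e-8)
    R = Matrix{Float64}(K)             # explicit residual -- fine for a demo
    m, n = size(K)
    U = zeros(m, maxrank)
    V = zeros(n, maxrank)
    r = 0
    for k in 1:maxrank
        (val, idx) = findmax(abs, R)   # full pivoting: largest residual entry
        val < tol && break             # residual is small everywhere: done
        (i, j) = Tuple(idx)
        U[:, k] .= R[:, j] ./ R[i, j]  # pivot column, scaled so U[i, k] == 1
        V[:, k] .= R[i, :]             # pivot row
        R .-= U[:, k] .* V[:, k]'      # rank-one update of the residual
        r = k
    end
    (U[:, 1:r], V[:, 1:r])
end
```

The "matrix-free" version of this replaces the explicit residual with on-demand row/column pulls plus partial pivoting, which is where the interface questions below come from.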
Here’s a terse summary of the package and some features:
- Low-rank approximations that do not need to instantiate an entire matrix (see the README for the interface for passing in objects where you only have methods for pulling individual rows/columns)
- Resumable factorizations (see the README again), so that you can create a cheap low-rank approximation and then (assuming you allocated enough room in U and V) “resume” the factorization should you decide you need more accuracy later
- Non-allocating options if you provide U, V, and an ACACache (again, see the README), so you can avoid triggering GC
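To make the matrix-free point concrete, here is a hedged sketch of what using that interface might look like for a lazily-evaluated Gaussian kernel matrix. LazyGaussKernel is my own illustrative type, and the exact method names and requirements (e.g. whether a size method is needed) should be checked against the README:

```julia
using ACAFact

# A hypothetical lazy kernel matrix: entries exp(-(x_j - y_k)^2), never stored.
struct LazyGaussKernel
    xs::Vector{Float64}
    ys::Vector{Float64}
end
Base.size(M::LazyGaussKernel) = (length(M.xs), length(M.ys))
Base.size(M::LazyGaussKernel, d) = size(M)[d]

# The matrix-free interface: fill buf with row j (resp. column k) of M.
ACAFact.row!(buf, M::LazyGaussKernel, j) = (buf .= exp.(-abs2.(M.xs[j] .- M.ys)))
ACAFact.col!(buf, M::LazyGaussKernel, k) = (buf .= exp.(-abs2.(M.xs .- M.ys[k])))

M = LazyGaussKernel(collect(range(0.0, 1.0, length=100)),
                    collect(range(0.0, 1.0, length=110)))
(U, V) = aca(M, 40, tol=1e-8)  # no 100x110 matrix ever materialized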
If anybody clones the repo and kicks the tires and finds any issues, I’d love to hear about it. Cheers!
Funny you should ask. I know and love that package, but I think there are some design discussions that would make sense to have before trying to merge it in and it seems worthwhile to keep this package on its own for a bit while that churn happens.
In particular, for the “matrix-free” approximation interface, I ask users to define ACAFact.{row!, col!}(buf, M::YourCoolObject, j), because to create the approximation you just need to pull individual rows and columns. I looked at ArrayInterface.jl to see if there is a community standard for methods to directly pull rows and columns, but I didn’t see anything that exactly fit the bill; I expect that as more eyes get on the code, people will have interesting thoughts on how that interface should look. Asking for Base.getindex(M::YourCoolObject, j, k) to be defined would actually be more restrictive than necessary, because sometimes individual entry access isn’t convenient or economical even when pulling whole rows or columns is. I think the LowRankApprox.jl people would look at ACAFact.{row!, col!} and (rightfully) say that the idea doesn’t yet seem fleshed out enough to merge into a mature package like that.
Just a quick update that this package is now officially registered. I have also added a cute little demo in the ./example folder that computes something like a fast Gauss transform (FGT). One caveat: the demo doesn’t throw an error if your matrix turns out to be too high-rank for the maxrank kwarg, so if you want to use it carefully you should probably look at the factorization-resuming structure in the source code and tests and add that in.
But with that said, it’s still fun to achieve a nice speedup of over 100x in a few lines of code without having to think very hard:
# Timings on my machine (intel 12th gen laptop with power saving CPU governor)
@btime aca_fgt($xv, $yv, $va); # 2 ms, 17 alloc (10 MiB)
@btime slowgausstransform($xv, $yv, $va); # 265 ms, 4 alloc (229 MiB)
@show maximum(abs, aca_fgt(xv, yv, va) - slowgausstransform(xv, yv, va)) # 6e-13
There are also a few other optimizations you could do if you were going to use that little demo code for real. But in any case, happy factorizing!
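For reference, the essence of that demo is just the following sketch (my own condensed version, not the actual ./example code, and the kernel matrix here is built densely for simplicity rather than matrix-free):

```julia
using ACAFact

# Sketch of an ACA-accelerated Gauss transform: compress the kernel once,
# then the dense m-by-n apply becomes two thin products, O((m + n)*rank).
function aca_fgt_sketch(xv, yv, va; maxrank=100, tol=1e-12)
    K = [exp(-abs2(x - y)) for x in xv, y in yv]  # or a matrix-free object
    (U, V) = aca(K, maxrank, tol=tol)
    U * (V' * va)
end
```

The speedup above comes from paying the compression cost once and then applying only the thin factors.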