Matrix Vector Operations to For Loop



This could be wishful thinking, but it would be neat if this has been or could be implemented nicely.

For my program I have lots of linear operators and maps in the form of sparse matrices. Their non-zero structure and values never change, and many of the values repeat.

Due to the repeated values and unchanging structure of these sparse matrices, multiplying a vector by these operators/maps could be described by a for-loop with minimal memory requirements. In theory, this should be much faster than the sparse matrix multiplication, since the time spent looking up values of the sparse matrix would be nearly eliminated: the few unique non-zero floats could be held in cache for the duration of the loop.
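As a concrete illustration of the idea (the operator below is a hypothetical example, not from the OP's program): a tridiagonal operator with only two distinct non-zero values, `a` on the diagonal and `b` off-diagonal, can be applied with no stored matrix at all — just a loop with `a` and `b` in registers:

```julia
# Hypothetical structured operator: symmetric tridiagonal with constant
# diagonal a and constant off-diagonal b. The loop touches no matrix
# storage; only x, y, and the two scalars.
function apply_tridiag!(y, x, a, b)
    n = length(x)
    y[1] = a * x[1] + b * x[2]
    @inbounds for i in 2:n-1
        y[i] = b * x[i-1] + a * x[i] + b * x[i+1]
    end
    y[n] = b * x[n-1] + a * x[n]
    return y
end

x = ones(5)
y = similar(x)
apply_tridiag!(y, x, -2.0, 1.0)   # matches SymTridiagonal(fill(-2.0,5), fill(1.0,4)) * x
```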

However, the sparse matrix representation of these operators/maps is much nicer to work with than hand-written for-loops. Is there a way to transform these sparse matrices into for-loops automatically?


Look at LinearMaps.jl, specifically the FunctionMaps.
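To sketch what that looks like (reusing the hypothetical constant-coefficient tridiagonal stencil as the matrix-free function — the specific operator is illustrative, not from the question):

```julia
using LinearMaps

# Matrix-free application of a fixed stencil; the constants -2 and 1
# are baked directly into the loop.
function stencil!(y, x)
    n = length(x)
    y[1] = -2x[1] + x[2]
    @inbounds for i in 2:n-1
        y[i] = x[i-1] - 2x[i] + x[i+1]
    end
    y[n] = x[n-1] - 2x[n]
    return y
end

# Wrap the function as a 100x100 linear operator; ismutating=true
# tells LinearMaps the function writes into its first argument.
A = LinearMap(stencil!, 100; ismutating=true, issymmetric=true)
v = rand(100)
w = A * v   # runs the loop; no sparse storage involved
```

The resulting `A` then composes, transposes, and multiplies like any other linear operator.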

DifferentialEquations is developing a similar FunctionMap specifically for PDE simulations. It will create these kinds of lazy operators directly from a description of the derivative operator to discretize (FDM), and this idea for constant matrices in FEM sounds like it could be within the scope of the same project. I’ll let you know when we have results here. I think this can be accomplished with a smart but almost hacky use of a generated function.


Can you elaborate how LinearMaps would assist in solving the question asked?


Sounds like he might know the structure of the matrices he’s making, in which case it doesn’t need to be a sparse matrix and could instead be a function, and a FunctionMap makes creating this along with all of the extras pretty easy. Of course, the OP didn’t give many details, but there’s a good chance that’s all that’s needed to solve the problem.


Okay, to me it sounded like actually writing the linear map was the problem and not how to simply apply an existing one to a vector. Oh well.

@mcovalt What do you mean by “lookup”?

Sparse matrix-vector multiplication doesn’t really need any “lookup”, at least not in the sense of trying to access an arbitrary element of the matrix.
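For reference, a simplified CSC mat-vec kernel looks like this (a sketch mirroring the idea behind the library routine, not the actual SparseArrays code): the stored arrays are traversed strictly sequentially, and the only indirect access is the scatter into `y`.

```julia
using SparseArrays

# Simplified CSC matrix-vector product. colptr/rowval/nzval are swept
# in order, column by column -- there is no per-element search.
function csc_matvec(A::SparseMatrixCSC, x::AbstractVector)
    y = zeros(promote_type(eltype(A), eltype(x)), size(A, 1))
    rows = rowvals(A)
    vals = nonzeros(A)
    for j in 1:size(A, 2)
        xj = x[j]
        @inbounds for k in nzrange(A, j)
            y[rows[k]] += vals[k] * xj   # the only irregular access
        end
    end
    return y
end

A = sprand(6, 6, 0.5)
x = rand(6)
csc_matvec(A, x) ≈ A * x   # true
```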


This is a great suggestion if I were hand-writing the functions, but it would be cool to have them created automatically.

That is exactly what I’m doing, too. I’ll have to check out that package.

I’m sure it was a poor choice of words on my end. I’ve been told that sparse matrix-vector products are very memory intensive. It makes intuitive sense to me that this means performance is bound by access to possibly irregular locations in memory. For large sparse matrix-vector products, I could see this being quite the bottleneck. But at this point I’m talking way over my head and am likely wrong. I’d love a formal explanation.