That's different though, and should be handled differently. When it has a pattern like a stencil, you can just implement the stencil as is done in DiffEqOperators.jl.

That just does the stencil computation during `A_mul_B!` (and thus `*`), so there's no need for it to store the non-zero elements: it directly understands the sparsity pattern (a band of size dependent on the order, so based off of type parameters).
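To make that concrete, here is a minimal matrix-free sketch of the idea (note that in modern Julia `A_mul_B!` became `mul!`). The type name, the Dirichlet-zero boundary handling, and the fixed `[1, -2, 1]` stencil are illustrative assumptions, not DiffEqOperators.jl's actual implementation:

```julia
using LinearAlgebra

# Hypothetical sketch: a second-derivative stencil operator that never stores
# its nonzeros; multiplication applies the [1, -2, 1]/dx^2 stencil directly.
struct Laplacian1D{T} <: AbstractMatrix{T}
    n::Int
    dx::T
end

Base.size(L::Laplacian1D) = (L.n, L.n)

function LinearAlgebra.mul!(y::AbstractVector, L::Laplacian1D, x::AbstractVector)
    invdx2 = inv(L.dx^2)
    @inbounds for i in 1:L.n
        left  = i > 1   ? x[i-1] : zero(eltype(x))  # Dirichlet-zero boundary (an assumption)
        right = i < L.n ? x[i+1] : zero(eltype(x))
        y[i] = (left - 2x[i] + right) * invdx2
    end
    return y
end

Base.:*(L::Laplacian1D, x::AbstractVector) = mul!(similar(x), L, x)
```

Applying `Laplacian1D(5, 1.0)` to samples of `x^2` gives approximately `2` at the interior points, as expected for a second derivative, without ever building a matrix.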

Banded matrices also have a bunch of special properties, so you wouldn't want to put them into sparse matrix algorithms even if they knew to only loop over the band.

That said, I've actually been wondering about finding an implementation for something like this myself. I have a few things in mind. For example, a static sparse vector could be a very easy way to extend abstract-array-based APIs in a way that is "selective" and concise in code, yet efficient.

As an example, let's say we are solving a large PDE via the method of lines, and there are some components that we know will be on the shallow end of a temperature gradient, so those 6 specific points should have a lower absolute tolerance. The API for these things is always: if it's an array, then it's element-wise. So what I want is, instead of creating an array `1e-6*ones(N)` and then changing a few values, to specify "a sparse array with a base of `1e-6` and specifically 6 values at indices …". Then it's easy to see what element-wise operations like `.*` should do for this kind of sparse array, but I can dramatically cut back on the number of arrays I am making if every option doesn't need to take up extra memory to accommodate element-wise settings. So a static sparse array is something I find myself wanting quite often (with the caveat of being able to declare "what zero means").
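A sketch of what such a "sparse vector with a declared base value" might look like — the type and field names here are made up for illustration, and a real implementation would specialize `.*` and friends rather than rely on generic fallbacks:

```julia
# Hypothetical sketch: a read-only sparse vector where "zero" is a base value.
# Memory is O(number of overrides), not O(n).
struct BaseSparseVector{T} <: AbstractVector{T}
    n::Int
    base::T                 # what "zero" means for this vector
    overrides::Dict{Int,T}  # the few indices that differ from the base
end

Base.size(v::BaseSparseVector) = (v.n,)
Base.getindex(v::BaseSparseVector, i::Int) = get(v.overrides, i, v.base)
```

Usage for the tolerance example would be something like `abstol = BaseSparseVector(N, 1e-6, Dict(i => 1e-9 for i in special_points))`: one small dictionary instead of a length-`N` dense array per option.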

As for doing this with matrices, I think the way to go about it is to be a little bit more advanced. Full sparsity, I think, is too much and will lead to large compile times. When I think about the applications which come up in PDEs that aren't banded (though banded is quite common!), a lot of the time there's a higher-level special structure to take into account. For example, if you think about the 2D Laplacian on a uniform grid, what you get if you use "all x, then next y" ordering is a tridiagonal matrix plus a band `N` up and a band `N` down. If there were a way to easily declare a matrix directly with this structure, that could be a massive speedup over pure sparse representations. This also comes up with systems of PDEs like the one I describe here:
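To illustrate the band structure being described, here it is built explicitly with `spdiagm` (a structured type would of course avoid materializing this; the sparse construction is only to make the pattern visible):

```julia
using SparseArrays

# 2D Laplacian on an N×N grid, "all x, then next y" ordering:
# a tridiagonal part plus constant bands N up and N down.
N = 4                        # grid is N×N, so the matrix is N^2×N^2
main = fill(-4.0, N^2)
off1 = fill(1.0, N^2 - 1)
off1[N:N:end] .= 0.0         # no x-coupling across grid-row boundaries
offN = fill(1.0, N^2 - N)
A = spdiagm(0 => main, 1 => off1, -1 => off1, N => offN, -N => offN)
```

Note that only the `±1` bands need per-entry data (to punch holes at row boundaries); the `0` and `±N` bands are constant, which is exactly the kind of structure a dedicated type could exploit.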

where the sparsity pattern is really easy (and cheap) to describe in blocks (and the Jacobian is just constant bands in many of those blocks!). In some cases, if you're doing a spatial model of a more complex reaction network like HIRES

http://nbviewer.jupyter.org/github/JuliaDiffEq/DiffEqBenchmarks.jl/blob/master/StiffODE/Hires.ipynb

it can be very easy to state the local sparsity pattern, which would then give the locations of the bands in the full PDE's sparsity pattern. Having an easy way to put these together would be a nice low-dimensional way to write down the form of these large matrices.
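One way to see "local sparsity pattern → bands in the full pattern" is via a Kronecker product. The 2-species Jacobian values below are made up for illustration, and species-major ordering (all spatial points of species 1, then species 2, …) is assumed:

```julia
using SparseArrays, LinearAlgebra

# Made-up local reaction Jacobian pattern for a 2-species system.
J_local = sparse([1.0 0.5;
                  0.0 2.0])
M = 3  # number of spatial points

# With species-major ordering, each local entry (i, j) becomes a constant
# band at offset (i - j) * M in the full reaction Jacobian.
J_full = kron(J_local, sparse(I, M, M))
```

The three nonzeros of `J_local` become three constant bands of length `M` in `J_full`, so the full pattern is described by the local one plus the offset rule.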

So where am I going with this? Okay, let's say you had the facility to say "this is a matrix with a constant band along the i-th diagonal of value x" and things like that (or, if it's not constant, it could be a type holding a single array), and have it generate efficient kernels for addition, multiplication, etc. (multiplication is really all you need for linear solving, because you'll likely want to use an iterative solver in this case). We can build our full operator using the lazy composition tooling in LinearMaps.jl

where `A = B+C` or `A = B*C` can put together various `AbstractLinearMaps` lazily so that `A*x` acts appropriately. Thus our full linear operator `A` (maybe a Jacobian of a PDE or optimization problem) would both be a very sparse and memory-efficient implementation, and be able to specialize all of its operations on the pieces in order to turn everything into efficient loops.
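A hand-rolled sketch of that lazy-composition idea, to show how little machinery it needs (LinearMaps.jl does this properly and generically; `LazySum`/`LazyProd` are illustrative names, and the dense matrices are stand-ins for structured operators):

```julia
# Lazy sum and product of two operators: nothing is materialized,
# and `*` forwards to the pieces' own (possibly specialized) kernels.
struct LazySum{S,T}
    a::S
    b::T
end
struct LazyProd{S,T}
    a::S
    b::T
end
Base.:*(s::LazySum, x::AbstractVector)  = s.a * x .+ s.b * x
Base.:*(p::LazyProd, x::AbstractVector) = p.a * (p.b * x)

B = [1.0 0.0; 0.0 2.0]   # stand-ins for structured/banded operators
C = [0.0 1.0; 1.0 0.0]
x = [1.0, 1.0]
A = LazySum(B, C)
A * x                     # == B*x + C*x, computed piecewise
```

Since iterative solvers only ever call `A*x` (and maybe `A'*x`), this composed operator is enough to plug into Krylov methods without ever forming `A`.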

So this went on for quite a while, but what I hope I got across is that full sparsity is probably not what's needed in most applications anyway. However, composable sparse matrix structures would be extremely useful for generating both memory- and computationally-efficient kernels. This is something I haven't seen done either (if anyone has seen it, please link it, because I am curious), and it would fit really well with Julia's generic programming, since these could just be passed in place of standard dense/sparse matrix types yet generate very efficient algorithms.