# Triangular Arrays in Julia?

Does anyone know if triangular arrays have been implemented in julia or in a package yet?


Not that I know of. It would be really nice to have a package which makes it easy to do parallel computations on triangular arrays, since that's very common in numerical analysis.

I was going to write a small type wrapping a vector, since what I really want is the indexing behaviour, the memory savings, and element-wise operations, as opposed to any more mathematical operations. I could try to start a dedicated package, but I'm not enough of a mathematician or expert to know what people would want.

Base has `UpperTriangular` and `LowerTriangular` with dispatch to various BLAS functions. I do believe they still store the entire matrix, though.


Yes, I think that's true; "upper triangular" and "lower triangular" refer to which part of the matrix may hold nonzero values.

They do store the whole matrix. Also, they are restricted to being square.

The wiki article is quite brief. Could you explain a little more about what kind of operations you'd like a triangular array to support? In particular, what are "the indexing behavior" and "element-wise operations" in the context of triangular arrays?

So if you're familiar with the `dist` class from R, it is very similar.

It represents a matrix, or rather the lower half of a matrix below the diagonal (since the matrix is assumed to be symmetric with `0` values on the diagonal, there is no point in storing both halves), so each row has a different number of cells.

Row one is one cell, row two is two cells, three has three and so on:

```
as.dist(matrix(rexp(100, rate=.1), ncol=10))
1           2           3           4           5           6
2  30.96809401
3   4.29517367 25.35477096
4  11.05042388 47.12602515  9.05635823
5  24.80794737 15.96819158  5.57638651 10.38488874
6   3.73206725 26.75391206  8.32492987  8.59412784  0.35015252
7  33.00415806  0.20160700  2.03088283  0.45323016 13.87089156 22.28731310
8   9.10296273  7.98441481  1.90013604  8.93347893  7.18713469  7.02904816
9   3.74577306  3.19449860  5.11027106  1.63343495 14.50446960  1.38334442
10 24.93290636  2.64181538  1.29009316  0.04882485  3.75045984 19.01684707
```

Looking at the structure in R, you can see the data is laid out linearly:

```
> str(as.dist(matrix(rexp(100, rate=.1), ncol=10)))
Class 'dist'  atomic [1:45] 0.441 4.219 7.834 0.657 0.112 ...
..- attr(*, "Size")= int 10
..- attr(*, "call")= language as.dist.default(m = matrix(rexp(100, rate = 0.1), ncol = 10))
..- attr(*, "Diag")= logi FALSE
..- attr(*, "Upper")= logi FALSE
```

And the array supports indexing linearly, i.e. `m[43]`, but also by row/column, i.e. `m[4, 5]`.
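A minimal sketch of such a wrapper in Julia, assuming the same column-by-column layout as R's `dist` (the `PackedDist` type and `packedindex` helper are hypothetical names, not an existing package):

```julia
# Hypothetical dist-like type: only the strict lower triangle of an n×n
# symmetric matrix is stored, column by column, in a vector of length
# n*(n-1)/2; the diagonal is implicitly zero.
struct PackedDist{T} <: AbstractMatrix{T}
    data::Vector{T}
    n::Int
end

Base.size(d::PackedDist) = (d.n, d.n)

# Position of element (i, j), i > j, in the packed vector: the first j-1
# columns occupy (j-1)*n - j*(j-1)/2 slots, then (i, j) is entry i - j
# of column j.
packedindex(d::PackedDist, i::Int, j::Int) =
    (j - 1) * d.n - div(j * (j - 1), 2) + (i - j)

function Base.getindex(d::PackedDist{T}, i::Int, j::Int) where T
    i == j && return zero(T)   # diagonal is implicitly zero
    if i < j                   # symmetry: mirror the upper half
        i, j = j, i
    end
    d.data[packedindex(d, i, j)]
end

d = PackedDist(collect(1.0:45.0), 10)
d[2, 1], d[4, 5], d[5, 5]   # → (1.0, 25.0, 0.0)
```

Element-wise operations can then work directly on `d.data`, which is where the memory saving comes from.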

Question: is it possible to store upper/lower triangular matrices in compact form and still pass that representation efficiently to BLAS operations? I.e. can that storage format be represented with the kinds of strides that BLAS understands?

Sounds like `Symmetric`, except that it saves some space by not storing upper entries (as our implementation currently does) nor diagonal entries (which are assumed to be zero). I'd start by making `Symmetric` more efficient as regards memory layout in Julia Base. Not storing diagonal elements sounds like a secondary issue, since for large arrays the cost will be relatively low.

LAPACK supports a packed storage format for symmetric matrices, so you could still get high performance for common linear algebra operations (e.g. matrix * vector product). (These optimized operations can be added progressively.)
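For example, Julia already wraps the packed symmetric matrix-vector product as `BLAS.spmv!`, if I remember correctly; a sketch of using it (the packing comprehension is just one way to build the `'U'` layout):

```julia
using LinearAlgebra

n = 4
A = Symmetric(rand(n, n))
# Pack the upper triangle column by column (LAPACK 'U' packed layout),
# giving a vector of n*(n+1)/2 entries instead of an n×n matrix.
AP = [A[i, j] for j in 1:n for i in 1:j]
x = rand(n)
y = zeros(n)
# y = 1.0*A*x + 0.0*y, computed from the packed representation
BLAS.spmv!('U', 1.0, AP, x, 0.0, y)
y ≈ A * x   # → true
```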

1 Like

Is it still going to be much better to use BLAS? For square lower triangular matrices you have less than half the operations to do, and with `@polly` being added, could the difference between BLAS and non-BLAS end up being around 2x?

(Is there a good way to switch between the BLAS and generic operations, to do a quick test of what the difference is like?)

I did not know about `Symmetric`; I can live with storing the `0` values on the diagonal!

I've no idea when it comes to BLAS, I'm afraid.

Even if there are only about half as many operations to do, that 1/2 is a constant factor: the cost still grows with the size of the matrix. So that doesn't really make a difference compared with standard arrays, except for changing the threshold above which calling BLAS is beneficial.

For matrix multiplication, you can call `Base.LinAlg.generic_matmatmul` directly. I'm not sure for which operations and types BLAS is called, and for which pure Julia code is used instead.

You could just wrap a float in a new type and define the needed operations. This should dispatch to the generic versions of the linear algebra library.
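A rough sketch of that trick (the `GenericFloat` wrapper and its three methods are made up for illustration; real benchmarking would need a few more definitions):

```julia
# A number type BLAS knows nothing about, so products of
# Matrix{GenericFloat} fall back to the generic Julia matmul
# instead of calling BLAS.
struct GenericFloat <: Real
    x::Float64
end
Base.:+(a::GenericFloat, b::GenericFloat) = GenericFloat(a.x + b.x)
Base.:*(a::GenericFloat, b::GenericFloat) = GenericFloat(a.x * b.x)
Base.zero(::Type{GenericFloat}) = GenericFloat(0.0)

A = rand(8, 8); B = rand(8, 8)
# Wrap, multiply through the generic path, unwrap:
C = map(g -> g.x, GenericFloat.(A) * GenericFloat.(B))
C ≈ A * B   # same result, computed without BLAS
```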

For linear algebra, there is a significant overhead when using the packed format. This is partly because the indexing operations are slightly more complicated but more importantly, it is because the compact storage doesn't allow blocked (BLAS-3) linear algebra operations. This was the main reason I decided not to use the compact storage scheme when I implemented `XTriangular` and `Symmetric`, since I thought that 2x memory saving was not sufficient to justify slower linear algebra. However, for other applications, this might, of course, be completely different.

An interesting alternative is the Rectangular Full Packed (RFP) storage scheme, which maintains BLAS-3 performance without wasting half the memory. However, I never really needed the memory saving, so the development stalled.

Thanks for the example. Does the linear index address the underlying vector or the full matrix representation, i.e. can you write `m[n^2]`, or is the last index `m[div(n*(n-1),2)]`?

Can you tell a little more about what kind of operations you'd like to perform on the triangular/symmetric array?

It'd be cool to be able to do stuff like

```
a = UpperTriangular(ones(4,4))
a[:] = 1:10
```

Perhaps one solution would be to make it possible to create `UpperTriangular` views on the square matrix that just store the indices of the upper triangle and allow vector operations? That should not interfere with BLAS.
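One way to sketch that today, without any new types, is a `view` over the stored positions (the index list here is hand-built; nothing below is a proposed API):

```julia
using LinearAlgebra

A = ones(4, 4)
# Column-major positions of the stored (upper-triangle) entries:
idx = [CartesianIndex(i, j) for j in 1:4 for i in 1:j]
v = view(A, idx)   # an AbstractVector over the 10 stored entries
v .= 1:10          # fills only the upper triangle
UpperTriangular(A)
```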

There have been discussions about adding functionality to only reference the non-zero part of a matrix more generally (i.e. also for `SparseMatrixCSC` and `SymTridiagonal`), and that could possibly solve this issue.


That would clash with generic linear indexing. Of course, we're looking to move away from that since it's not an efficient mechanism for writing generic algorithms on all array types, but giving it a different meaning seems like asking for trouble.


Ok, I see. Well, I'm looking forward to seeing what the new alternatives will be. It'd be cool if they also made it easy to work with only the defined elements of triangular matrices.