Functors instead of arrays, matrices etc

So, @stevengj, to sum up:
If I need to work with many matrices of different sizes, I could preallocate one large matrix and take views into it, to avoid the cost of repeated allocation.
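A minimal sketch of that idea (the buffer name `buf` and the sizes are just illustrative):

```julia
# Preallocate one large buffer once; reuse sub-blocks of it as "matrices".
buf = Matrix{Float64}(undef, 100, 100)

# A 30×30 "matrix" living in the top-left corner of the buffer:
# no new allocation, just an index mapping onto buf's memory.
A = @view buf[1:30, 1:30]
A .= 1.0   # writes go straight into buf
```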

If I assume that the standard low-level implementation of a matrix is a flat array plus an indexing scheme:
I could allocate a single array of the maximum size, take views into it, and map them onto matrices. That would avoid placing the columns of the smaller matrices far apart from one another in memory, since the undefined, never-viewed entries would not be part of those smaller matrices.

However, this could hurt the performance of matrix-multiplication algorithms if the real low-level implementation of matrices is not just an array plus indices.

I know matrices can be symmetric, Hermitian, etc., and that this really speeds up the algorithms. But can't I still apply these properties to the matrix defined by mapping the array I use as storage?
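For what it's worth, the `LinearAlgebra` wrappers are plain types around any `AbstractMatrix`, so they compose with views; a sketch (sizes illustrative):

```julia
using LinearAlgebra

buf = Matrix{Float64}(undef, 50, 50)  # preallocated storage
V = @view buf[1:10, 1:10]             # the "small" matrix
M = rand(10, 10)
V .= M .+ M'                          # fill the block symmetrically

S = Symmetric(V)   # no copy: a wrapper that enables symmetric dispatch
vals = eigvals(S)  # hits the specialized symmetric eigensolver
```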

I still think you are overthinking this without having solid ground for reasoning. Write the naive, simple version first and benchmark it. If it turns out to be too slow, come back here and we will help you improve it. Everything will be much easier once there is a bit of code to actually execute and measure. Without this basis everything will be speculation.
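That first step can be as small as the sketch below (sizes and names are placeholders; for serious timing, BenchmarkTools' `@btime` is the usual choice, but Base's `@time` is enough to get started):

```julia
# Naive version: allocate a fresh matrix every time.
naive(n) = (A = rand(n, n); A * A)

# View version: reuse one preallocated buffer.
viewed(buf, n) = (A = @view buf[1:n, 1:n]; A * A)

buf = rand(1_000, 1_000)
naive(500); viewed(buf, 500)  # warm up: compile before timing
@time naive(500)
@time viewed(buf, 500)
```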


So, I actually tried it: a 10,000×10,000 matrix allocated with undefined values, and an actual view matrix of 500 values taken out of it.

I compared, with and without MKL, the basic @view usage against a manually implemented new type, which has a storage_size and a nominal_size, both defined over a vector. This way I actually put all the elements of the little matrix next to each other in memory, which would have been impossible without defining it myself, once the whole matrix is allocated.
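Something like the following sketch captures that idea (my actual implementation may differ; the type name and method set here are just a minimal illustration of the `storage_size`/`nominal_size` scheme):

```julia
# A square matrix of logical size nominal_size, packed contiguously at the
# front of a reusable vector of capacity storage_size^2.
struct PackedMatrix{T} <: AbstractMatrix{T}
    data::Vector{T}      # length storage_size^2, allocated once
    storage_size::Int    # maximum supported dimension
    nominal_size::Int    # current logical dimension
end

Base.size(P::PackedMatrix) = (P.nominal_size, P.nominal_size)

# Column-major indexing over the *nominal* size, so the nominal_size^2
# elements sit next to each other regardless of storage_size.
Base.getindex(P::PackedMatrix, i::Int, j::Int) =
    P.data[(j - 1) * P.nominal_size + i]
Base.setindex!(P::PackedMatrix, v, i::Int, j::Int) =
    (P.data[(j - 1) * P.nominal_size + i] = v)
```

Because the elements are packed, a dense solver can also treat the first `nominal_size^2` entries of `data` as an ordinary contiguous matrix.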

The benchmark says that they are comparable for eigenvalue retrieval.