So, there is this C++ Eigen feature that I really like: CwiseNullaryOp.

To sum it up, you can write a functor that stores some values and computes entries on demand, and use it as a matrix.

This lets you avoid storing an entire matrix (or whatever object you want) when you know its structure.

However, Eigen itself doesn’t really make good use of this feature.

An example would be a simple identity matrix in CSC or CSR sparse format: its functor would just return the requested index for row_arr and col_arr, and return 1 for every nnz_arr value, without storing any of them.

Is there something like that here?

I’m being deliberately generic because I hope it’s a feature that can use the same API as “whatever you want”, so it can be used everywhere. My particular use case is sparse matrices and tensors.

MappedArrays.jl lets you write a function and use it as a matrix. For example, this creates a 3 × 5 matrix A with entries A_{ij} = i + j that is not actually stored, but is computed on the fly:
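The inline code for that example didn’t survive the copy; based on the mappedarray call shown later in this thread, it was presumably along these lines:

```julia
using MappedArrays

# a 3×5 "matrix" whose entries A[i, j] == i + j are computed on access
A = mappedarray(i -> i[1] + i[2], CartesianIndices((3, 5)))
A[2, 3]  # returns 5, computed on the fly; nothing is stored
```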

Indeed, the built-in “range” types (e.g. 1:5) as well as the CartesianIndices types are examples of “array” types whose entries are computed on the fly rather than being stored.
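For instance:

```julia
r = 3:7                      # stores only the endpoints, not five values
r[2]                         # computed on the fly: 4
C = CartesianIndices((2, 3)) # a 2×3 array of indices, also never materialized
C[2, 2]                      # CartesianIndex(2, 2)
```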

If you just need to apply the matrix to vectors as a linear operator, e.g. for use in iterative algorithms, you probably want LinearMaps.jl or LinearOperators.jl instead.

For sparse matrices, Julia ships with SparseArrays, which uses CSC format. Is there some reason that this doesn’t work for you?
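For example, the 3×3 identity from the earlier Eigen example, stored (eagerly, not lazily) in CSC:

```julia
using SparseArrays

S = sparse([1, 2, 3], [1, 2, 3], ones(3))  # 3×3 identity in CSC format
S.colptr, S.rowval, S.nzval                # the three CSC arrays
```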

If I can use both of these approaches in functions that require matrices, arrays, or tensors in general, like matrix multiplication or sparse Kronecker products, then I think you’ve solved it.

If I want, for example, to store my matrix in an array and define the usual algorithm to map it into a matrix, can I use mappedarray or some struct? I need to pass it to libraries such as KrylovKit.jl and be sure it is recognized as a matrix with the algorithm I defined.

I guess by “store in an array” you mean some data structure other than Matrix.

Sure, if you have a data structure d and a function getindex(d, i, j) that retrieves the (i,j) element of your matrix, you can do either

A = mappedarray(i -> getindex(d, i[1], i[2]), CartesianIndices((1:m, 1:n))) to wrap this in an m × n matrix, or …

Define your data structure (or some wrapper around it) to be a subtype of AbstractMatrix, with methods like getindex and size etcetera.
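A minimal sketch of that second option, with a hypothetical wrapper that stores the entries in a flat Vector in column-major order:

```julia
# hypothetical example: a flat Vector interpreted as an m×n matrix
struct FlatMatrix{T} <: AbstractMatrix{T}
    data::Vector{T}
    m::Int
    n::Int
end

Base.size(A::FlatMatrix) = (A.m, A.n)
Base.getindex(A::FlatMatrix, i::Int, j::Int) = A.data[(j - 1) * A.m + i]

A = FlatMatrix(collect(1.0:6.0), 2, 3)
A[2, 3]  # 6.0; generic AbstractMatrix machinery (sum, *, display, …) now works
```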

In principle, a mapped array should work here, but is probably inefficient.

For an iterative solver like KrylovKit.jl, you don’t want to just expose a getindex function, because then the solver falls back to naive row-times-column matrix–vector multiplication. Instead, you should ideally implement a fast way to multiply your matrix by vectors (a mul! method) that exploits the structure of your matrix. (Presumably it has some special structure; otherwise you should just use Matrix.) The LinearMaps.jl and LinearOperators.jl packages can help with this (as opposed to MappedArrays.jl, which isn’t geared towards that application … if you have a MappedArray and want to multiply vectors by it over and over, it is probably faster to just convert it to a Matrix).
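As a sketch of that approach (assuming the LinearMaps.jl package), here is an operator defined only by its action on vectors:

```julia
using LinearMaps

# a 100×100 operator that doubles its input, never stored as a matrix
D = LinearMap(x -> 2x, 100)
y = D * ones(100)  # applies the function; y is a vector of 2.0s
```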

In general, what operations you want to define will depend on what library and what algorithms you want to use it with.

The fact is that, if I have to build various matrices, even big ones (unlike in the following example), I thought preallocating the biggest matrix and viewing inside it would be great:

N::Int64 = 100
m::Int64 = 10
matrix = Array{Float64}(undef, N, N)
for i in 1:10
    matrix_view = @view matrix[1:m, 1:m]
    # assign values to matrix_view, find eigenvalues
    global m += 10
end

However, with the view approach I expect that the last element of column i is very far from the first element of column i+1. That’s why I thought of storing the matrix as an array with an algorithm, to solve this issue. But from what you say, it’s not always a trivial array that is stored under the matrix representation.

(the following messages were merged from another post I made, related to the previous question)

I’m new to Julia and to the concepts of abstract types and AbstractMatrix.
I wanted to use KrylovKit.jl, and instead of preallocating a matrix and viewing inside it like this:

N::Int64 = 100
m::Int64 = 10
matrix = Array{Float64}(undef, N, N)
for i in 1:10
    matrix_view = @view matrix[1:m, 1:m]
    # assign values to matrix_view, find eigenvalues
    global m += 10
end

I wanted to preallocate a simple array and give it an algorithm, with the appropriate matrix API for KrylovKit.jl and other libraries like Kronecker.jl, and even tensor ones, with different algorithms.

The reason I’m doing this is to put the values I’m going to use next to each other in the allocated space.
In the first example, I expect the last element of a column of matrix_view and the first element of the next column of matrix_view to be far apart, since all the other undefined preallocated bytes lie between them.
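That expectation can be checked directly with strides:

```julia
A = Array{Float64}(undef, 100, 100)
V = @view A[1:10, 1:10]
strides(V)  # (1, 100): consecutive columns of V are 100 Float64s apart in A
```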

Is there a way? And could someone use this example to help me better understand how to use AbstractMatrix?

I don’t think I fully understand your question. If you need help specifically with KrylovKit.jl, it would be wise to put it in the title of your post (I, and probably many others, don’t know this package).

I do know about AbstractMatrix, and it is normally used to specify function arguments, as illustrated in this example:

function f(a::Matrix)
    a
end
function g(a::AbstractMatrix)
    a
end
x = ones(2, 2)
try
    display(f(x'))
catch e
    println("error encountered ", e)
end
display(g(x'))

The function g is better than f because it will accept things that look like a Matrix but aren’t, such as transposed matrices.

The fact is that I want to write a struct that is recognized as a matrix, even if it’s an array with an algorithm that takes two indices as input and gives as output the index of that matrix element in the array.

About the indexing you are right, my fault: a bad habit from 0-based languages (Julia arrays are 1-indexed).

KrylovKit.jl actually doesn’t require your input to be a subtype of AbstractMatrix. You can actually just pass a function that computes the action of the linear operator on a vector. See here
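A sketch of that usage (assuming KrylovKit.jl), where the operator is just a function:

```julia
using KrylovKit

# operator given purely as a function: the diagonal matrix diag(1, 2, …, 10),
# which is never materialized
apply(x) = collect(1.0:10.0) .* x

vals, vecs, info = eigsolve(apply, rand(10), 1, :LM)
vals[1]  # should converge to 10.0, the largest eigenvalue
```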

It is quite unclear to me what your desired linear operator should actually do. It appears that you are trying to optimize something based on the idea that the default approach is too slow. This is usually a bad idea because 1) guessing the performance of code usually goes wrong, and 2) you don’t have a baseline to compare to. Also, you seem quite new to Julia, and its performance characteristics are quite different from most other programming languages.

I would recommend just writing the straightforward code you want to write, and if that turns out to be too slow for your use case, then come back and post your code here. Usually a lot of people here enjoy optimizing random code snippets.

Mapping an array to a matrix is the usual way of working with them, and it’s how they are all implemented under the hood.

I’m really bad at browsing documentation, and even though I already tried to find examples of matrices not defined as standard Julia matrices, I still can’t find them (seriously, I don’t know why I always miss tables and the right examples when I look for them). Do you know which chapter they are in (like the ones that appear on the left)?

I am not sure why you talk about mapping an array to a matrix. Yes, I agree that the underlying storage is linear; that makes reshapes essentially free. On top of that, every array in Julia supports linear indexing, so you can treat it as a vector if you wish.
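For example:

```julia
M = reshape(collect(1:12), 3, 4)  # reshaping linear data is essentially free
M[2, 2]                           # 5
M[5]                              # also 5: linear, column-major indexing
vec(M)                            # the whole matrix viewed as a vector
```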

I am also not sure whether all this hassle is worth it. If you can afford to allocate one giant vector/matrix with all your data at once, then I doubt it’s worth the trouble over allocating the matrices separately.

The point I made about KrylovKit.jl taking functions as inputs just means you can write a callable struct or a closure that implements the index transformation you require.
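A sketch of such a callable struct (all names hypothetical), mapping two indices into a flat buffer and exposing only the operator’s action:

```julia
# hypothetical: the matrix lives in a flat buffer, column-major
struct BufferedOp
    buf::Vector{Float64}
    n::Int
end

# index transformation: (i, j) -> position in the flat buffer
linidx(op::BufferedOp, i, j) = (j - 1) * op.n + i

# the action y = A*x, which is all a function-based solver needs
function (op::BufferedOp)(x::AbstractVector)
    y = zeros(op.n)
    for j in 1:op.n, i in 1:op.n
        y[i] += op.buf[linidx(op, i, j)] * x[j]
    end
    return y
end

op = BufferedOp(collect(1.0:4.0), 2)  # the 2×2 matrix [1 3; 2 4]
op([1.0, 1.0])  # [4.0, 6.0]
```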

@stevengj I’m going to delete this one (if possible) in a few minutes if the issue is solved in the other thread, honestly. Merging the threads could be useful, even if the initial intent was to split them; the information is closely related.