Creating looping matrices (torus-like) with type inheritance

Context:

I am trying to create a cellular automaton and want its behaviour at the edges to wrap around (e.g. the neighbours of the cell at [0,0] are the cells at [0,end] and [end,0], as well as [0,1] and [1,0]).
Some kind of torus-representing matrix (in the 2D case).

My hypothesis

It may be a habit carried over from Python OOP, but I thought that creating a subclass of Matrix with special properties such as Matrix[0] = Matrix[end], etc., would be the easiest approach.

This way, if I ever intended to generalise it to a 3x3 or nxn Matrix, it would not change the code much, because I would just be able to write Matrix[ ... , coord-1] to loop back.
Similarly, if I wanted to extend my reach, I could write Matrix[coord - k] and go as far back from the end as I want, looping.

I’ve looked at several posts on inheritance and similar topics, but it seems Julia isn’t built for this style of programming, so here are my questions:

  • Is it worth trying to find a way to create this struct?
  • What’s the best way to code this behaviour and make the code as reusable as possible?

I don’t mind installing packages, but I’d like to keep my code as simple as possible. I’ve seen the Class.jl package, which might offer what I’m looking for, but I’m not sure about it.

Note: I know that, in my case, I could do a border check and take the indices modulo the matrix size, but that seems neither aesthetic nor very functional. Tell me if I’m wrong.

Sure, this is something that is very natural to write in Julia. There are a few implementations of it kicking around, but it might be informative for you to try implementing it yourself.

Here’s some documentation on Julia’s AbstractArray interface: Interfaces · The Julia Language

You just need to make a struct that wraps another Array, and then intercepts getindex and setindex! calls, and transforms the indices to match your desired boundary conditions.
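As a minimal sketch of that idea (the `Wrapped` name and the 2-D restriction are just for illustration here), the index transformation can be done with `mod1`, which maps any integer into `1:n` periodically:

```julia
# Illustrative sketch only: a 2-D wrapper that remaps indices periodically.
# mod1(i, n) maps any integer i into 1:n, e.g. mod1(0, 4) == 4 and mod1(5, 4) == 1.
struct Wrapped{T}
    data::Matrix{T}
end

function Base.getindex(w::Wrapped, i::Int, j::Int)
    ni, nj = size(w.data)
    return w.data[mod1(i, ni), mod1(j, nj)]
end

function Base.setindex!(w::Wrapped, v, i::Int, j::Int)
    ni, nj = size(w.data)
    w.data[mod1(i, ni), mod1(j, nj)] = v
end
```

With `w = Wrapped([1 2; 3 4])`, `w[0, 0]` wraps around to `w.data[2, 2]` and returns 4. This sketch deliberately does not subtype AbstractArray, which sidesteps the bounds-checking machinery; a fuller AbstractArray version appears later in the thread.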

This type of thing has been discussed before, but I usually advocate “ghost cells” (periodic padding) rather than paying the price of wrapping indices on every array access. See e.g. Arrays with periodic boundaries - #4 by stevengj
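To make the ghost-cell idea concrete, here is a small illustrative sketch (the names `with_ghosts` and `sync_ghosts!` are made up for this example): pad the grid with one extra cell on each side holding a copy of the opposite edge, so the inner update loop uses plain in-bounds indexing with no wrapping.

```julia
# Illustrative ghost-cell sketch: pad a 2-D grid with one periodic ghost layer.
# sync_ghosts! must be re-run after each update of the interior cells.
function with_ghosts(A::Matrix)
    n, m = size(A)
    P = similar(A, n + 2, m + 2)
    P[2:n+1, 2:m+1] .= A       # interior holds the original grid
    sync_ghosts!(P)
    return P
end

function sync_ghosts!(P::Matrix)
    n, m = size(P) .- 2
    P[1, :]   .= P[n+1, :]     # top ghost row    <- last interior row
    P[n+2, :] .= P[2, :]       # bottom ghost row <- first interior row
    P[:, 1]   .= P[:, m+1]     # left ghost col   <- last interior col
    P[:, m+2] .= P[:, 2]       # right ghost col  <- first interior col
    return P
end
```

For `A = [1 2; 3 4]`, `with_ghosts(A)` produces the 4×4 periodic tiling `[4 3 4 3; 2 1 2 1; 4 3 4 3; 2 1 2 1]`; because the row copies run before the column copies, the four corner ghosts also come out correct.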

Sorry for the delay in answering; I’ve been busy.

I can’t figure out how to override getindex (the first one I’ve tried so far).

I’ve created my own struct :

struct LoopArray <: AbstractArray
 #what do I do here ?
end

I’ve then looked at the code for getindex in essentials.jl and in abstractarray.jl, but it’s far too complicated to recode, because it’s built first for an integer index and then generalised to vector indices.

So I’ve thought about writing something like

function Base.getindex(A::LoopArray, index::AbstractArray)
    index = mod1.(index, length(A))  # wrap the indices modulo the array size
    return getindex(A, index)        # calling the base function now that it's corrected
end

But won’t my second call just call the first one and loop forever?
I’ve searched for similar implementations but couldn’t find one matching my needs.

Edit: I just found this: Arrays with periodic boundaries - #3 by jonathanBieler. I’ll build on it for now; I haven’t finished yet.

I’ve not used them myself, but these two packages seemingly implement the functionality you want:

PeriodicArrays.jl
CircularArrays.jl

If nothing else, perhaps they can be helpful in writing your own type.

The array interfaces manual page (also linked above) is useful here. In particular, it says that (by default) vector indexing is implemented in terms of scalar indexing and that it is sufficient to implement either getindex(A, i::Int) (for IndexLinear arrays – not your case) or getindex(A, I::Vararg{Int, N}) (for IndexCartesian arrays – your case). But because you want to be able to index “outside” of the array’s size, we’ll need to implement the generic indexing function as well (since otherwise it will check bounds). There might be a more elegant way around this, but it’s not coming to me at this moment.

Here’s an example implementation:

struct LoopArray{T, N, A<:AbstractArray{T, N}} <: AbstractArray{T, N}
  arr::A
end

# not needed since it's the default
# Base.IndexStyle(::Type{<:LoopArray}) = IndexCartesian()

Base.size(x::LoopArray) = size(x.arr)

# scalar indexing
function Base.getindex(x::LoopArray{<:Any, N, <:Any}, inds::Vararg{Int, N}) where N
  looped_inds = mod1.(inds, size(x))
  return getindex(x.arr, looped_inds...)
end

# general indexing
# here, we simply use a comprehension to call the scalar version a bunch of times
# there is probably a better way to do this, but it's beyond this example
Base.getindex(x::LoopArray{<:Any, N, <:Any}, inds::Vararg{Any, N}) where N = [x[i...] for i in Iterators.product(inds...)]

then

julia> A = LoopArray(reshape(1:8, 2, 4))
2×4 LoopArray{Int64, 2, Base.ReshapedArray{Int64, 2, UnitRange{Int64}, Tuple{}}}:
 1  3  5  7
 2  4  6  8

julia> A[0:3, 2:6]
4×5 Matrix{Int64}:
 4  6  8  2  4
 3  5  7  1  3
 4  6  8  2  4
 3  5  7  1  3

While I’ve provided this example for educational purposes, you should probably prefer one of the above-suggested packages if they meet your needs.

Note that unlike with a class, the functionality is not described within the struct definition. Instead, in Julia we add specialized methods to the interface functions (such as Base.getindex). Note that it’s necessary to write Base.getindex rather than just getindex because otherwise we’ll simply create a new getindex function that is not what’s actually called by A[inds...].
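One piece the example above leaves out is writing through the wrapper, which a cellular automaton will need in order to update cells. A sketch of a matching scalar `Base.setindex!` (recapping the struct so the snippet is self-contained) could look like:

```julia
# Recap of the struct from the example above, plus a matching scalar setindex!
# so that writes also wrap around, using the same mod1 trick as getindex.
struct LoopArray{T, N, A<:AbstractArray{T, N}} <: AbstractArray{T, N}
    arr::A
end

Base.size(x::LoopArray) = size(x.arr)

function Base.getindex(x::LoopArray{<:Any, N}, inds::Vararg{Int, N}) where N
    return getindex(x.arr, mod1.(inds, size(x))...)
end

function Base.setindex!(x::LoopArray{<:Any, N}, v, inds::Vararg{Int, N}) where N
    return setindex!(x.arr, v, mod1.(inds, size(x))...)
end
```

Note that writing only works when the wrapped array is mutable: `reshape(1:8, 2, 4)` from the demo wraps a range and cannot be written to, so wrap a plain Array (e.g. `LoopArray(zeros(Int, 2, 4))`) instead.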

I’d assume a smart compiler could avoid the check on each access when it knows the shape of the array (like in FixedSizeArray.jl).
For instance, in the context of convolution, I’d expect it to generate head, middle, and tail loops.
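As a 1-D illustration of those head/middle/tail loops, here is a hypothetical 3-point periodic stencil, hand-split into boundary cases and a wrap-free interior loop (this manual splitting is exactly what the compiler would have to discover on its own):

```julia
# Hypothetical sketch: a 3-point periodic stencil (sum of each cell and its two
# neighbours), split so only the head and tail elements pay for index wrapping.
# Assumes length(x) >= 2.
function stencil_periodic(x::Vector{Float64})
    n = length(x)
    y = similar(x)
    y[1] = x[n] + x[1] + x[2]        # head: left neighbour wraps to the end
    @inbounds for i in 2:n-1         # middle: no boundary handling needed
        y[i] = x[i-1] + x[i] + x[i+1]
    end
    y[n] = x[n-1] + x[n] + x[1]      # tail: right neighbour wraps to the start
    return y
end
```

For `[1.0, 2.0, 3.0, 4.0]` this returns `[7.0, 6.0, 9.0, 8.0]`.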

I am not sure, but could it be that StaticKernels.jl avoids that overhead?

For a typical Array, the compiler doesn’t know the size, since runtime sizes are the norm in most applications. Even for statically sized arrays (large enough that the loops are not unrolled), I’d be skeptical about assuming this optimization occurs without checking.

Besides, ghost cells are much more flexible, because they separate the boundary conditions from the code working with the array, and it is easy to have a mix of boundary conditions (e.g. Dirichlet in one direction and periodic in the other). They also work well with distributed-memory parallelism, where the boundary elements are communicated from another process.