# Initialise 2x2 block-diagonal matrix

I’m writing code using some linear algebra, where I need to set up a 6Nx6N matrix (called F) with a 2x2 block layout, each 3Nx3N block itself composed of N^2 3x3 sub-blocks.

The initialisation of F should be such that the main diagonal of each 3Nx3N block is filled with ones.

Here’s the code I’m using:

``````
using LinearAlgebra
using StaticArrays

T = Float64 # in the real code, T is inferred from the type of another variable
N = 2

F = Matrix{T}(I, 6N, 6N)
# main diagonal is filled with ones, good
# how do I do the same with the 2 off-diagonal blocks, without pre-allocating dummy identity blocks?

dummyI = SMatrix{3,3}(I)
for i in 1:N
    F[3N .+ (3i-2:3i), 3i-2:3i] = dummyI
    F[3i-2:3i, 3N .+ (3i-2:3i)] = dummyI
end

# ... later on I do something with this matrix, filling non-trivial elements etc.
for i in 1:N, j in i+1:N
    # F[3i-2:3i, 3j-2:3j] = ... fill specific off-diagonal 3x3 sub-blocks
end
``````

I wonder what would be the best way to initialise F without the dummy 3x3 identity block and the loop that assigns it to the two off-diagonal blocks.

BlockArrays seems like a logical option, but I’m not too sure how to avoid temporary intermediates there either (`mortar((A,B),(C,D))` would be even worse if I need to expand the blocks A, B, C, D).

Sounds like you just want:

``````I₃ = Matrix{T}(I, 3N, 3N)
F = [I₃ I₃; I₃ I₃]
``````

or equivalently

``````F = diagm(0 => ones(T,6N), 3N => ones(T,3N), -3N => ones(T,3N))
``````

or equivalently

``````F = kron(ones(T,2,2), I(3N))
``````

All of these match the `F` constructed by your code above.
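For anyone reading along, here is a quick sanity check that the three constructions agree (small N, and `T = Float64` assumed for the standalone snippet):

``````
using LinearAlgebra

T = Float64
N = 2

I₃ₙ = Matrix{T}(I, 3N, 3N)
F1 = [I₃ₙ I₃ₙ; I₃ₙ I₃ₙ]                                             # dense block concatenation
F2 = diagm(0 => ones(T, 6N), 3N => ones(T, 3N), -3N => ones(T, 3N)) # write the three diagonals
F3 = kron(ones(T, 2, 2), I(3N))                                     # Kronecker product with 2x2 ones

@assert F1 == F2 == F3
``````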


Thanks for this; I was hoping for a way to avoid materialising the 3Nx3N identity matrix as in the first solution (N could be in the thousands) – I am guessing the second and third solutions do just that? Both seem pretty neat.

The `diagm` solution will be the most efficient if that is a concern.

Of course, if your matrix is that big, you should think about using some kind of sparse-matrix structure.
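If you do go sparse, the same initialisation translates directly; a sketch using the stdlib SparseArrays (`spdiagm` takes the same `offset => vector` pairs as `diagm`):

``````
using SparseArrays

T = Float64
N = 2

# stores only the 12N structural nonzeros instead of a dense 6N×6N matrix
Fs = spdiagm(0 => ones(T, 6N), 3N => ones(T, 3N), -3N => ones(T, 3N))

@assert nnz(Fs) == 12N
``````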

Thanks again; I am curious why `diagm` will be more efficient than `kron` – is the latter creating temporary objects?
I don’t think I would gain from using sparse matrices, since the block-identity structure is only there at initialisation; later in the code the matrix becomes fully dense (an electromagnetic coupled-dipole interaction matrix – I imagine that’ll speak to you ; )

Because `diagm` is just populating the matrix, whereas `kron` has to do computations.

In that case (DDA, a form of volume integral equation) it’s more a matter of using matrix-free algorithms like the fast multipole method… There has been some nice recent work on fast circulant preconditioners for this ([1903.07884] Circulant preconditioning in the volume integral equation method for silicon photonics)

DDA is different in that it discretises bodies on a regular cubic lattice of point dipoles; in CDA (same equations, naturally – Markel has good recent reviews on it) the dipoles can be placed (and oriented) arbitrarily, and typically one dipole represents a particle (in the Rayleigh–Gans regime).
The block-Toeplitz form of the DDA interaction matrix arises from the periodicity in dipole positions, but it’s not the case in CDA.

Makes sense, thanks. I guess I should look at the source code to see whether those calculations are likely to be time-consuming. This was more for general knowledge though; in my use-case I only create the matrix once and it’s definitely not a bottleneck – just trying to build good habits in Julia, and learn more about the language at the same time.

I’ve been looking for a good iterative solver for these coupled dipole equations (no Toeplitz structure, so circulant preconditioning won’t work), especially in the case where the right-hand side has multiple columns (it can be in the hundreds for good orientation-averaging of cross-sections by spherical cubature). I haven’t been successful so far (this seems promising though), so I use a brute-force direct solve.