MPI parallel computation with SMatrix

I am a beginner in Julia. I would like to write Julia code that sends a Matrix{SMatrix{2,2,ComplexF64,4}} to a neighboring process in an MPI parallel computation. Here is the code I am currently working on. Thank you in advance.

using StaticArrays
using LinearAlgebra
using MPI

# Set the grid size
L = 500
# Set the k-space mesh
k_mesh = range(-π, stop=π, length=L)

# Function to generate the Hamiltonian matrix
function Hamiltonian(kx, ky)
    return @SMatrix [cos(kx) im*sin(ky); -im*sin(ky) cos(kx)]
end

# Main function
function main()
    # Get the MPI communicator
    comm = MPI.COMM_WORLD
    # Get the rank of the current process
    myrank = MPI.Comm_rank(comm)
    # Get the total number of processes
    size = MPI.Comm_size(comm)

    # Helper function to calculate the split range
    function start_and_end(N, comm)
        nprocs = MPI.Comm_size(comm)
        myrank = MPI.Comm_rank(comm)
        if N % nprocs != 0
            println("Error! N % number of processes must be 0.")
            MPI.Abort(comm, 1)  # Properly terminate MPI
        end
        # Calculate the range assigned to each process
        nbun = div(N, nprocs)
        ista = myrank * nbun + 1
        iend = ista + nbun - 1
        return ista, iend, nbun
    end

    # Calculate the rank of neighboring processes
    dst = mod(myrank + 1, size)
    src = mod(myrank - 1, size)
    if src < 0
        src += size
    end

    # Calculate the range assigned to the current process
    ista, iend, nbun = start_and_end(L, comm)
    # Extract the k-mesh for the assigned range
    k_mesh_i = deepcopy(k_mesh[ista:iend])
    # Generate the Hamiltonian matrix for each k-point
    Ham_i = Hamiltonian.(k_mesh, k_mesh_i')

    # Extract the first and last rows
    Ham_upper = deepcopy(Ham_i[1, :])
    Ham_downer = deepcopy(Ham_i[nbun, :])

    # Convert SMatrix to standard Julia arrays for MPI communication
    Ham_upper_array = reduce(vcat, [vec(h) for h in Ham_upper])
    Ham_downer_array = reduce(vcat, [vec(h) for h in Ham_downer])

    # Prepare buffers for receiving
    Ham_upper_recv = similar(Ham_upper_array)
    Ham_downer_recv = similar(Ham_downer_array)

    # Start asynchronous communication with neighboring processes
    rreq = MPI.Irecv!(Ham_upper_recv, comm, source=src, tag=src + 32)
    sreq = MPI.Isend(Ham_downer_array, comm, dest=dst, tag=myrank + 32)

    # Wait for communication to complete
    MPI.Waitall([rreq, sreq])

    # Reconstruct the received data back into SMatrix format
    Ham_upper_received = [@SMatrix ComplexF64[Ham_upper_recv[i], Ham_upper_recv[i+1]; Ham_upper_recv[i+2], Ham_upper_recv[i+3]] for i in 1:4:length(Ham_upper_recv)]

    # Output communication logs
    println("$myrank: Sending   $myrank -> $dst = $sreq")
    println("$myrank: Received $src -> $myrank = $rreq")
end

# Initialize MPI
MPI.Init()
# Execute the main function
main()
# Finalize MPI
MPI.Finalize()

There’s no explicit question in your post, so I’m not sure what you need help with specifically. I tried to run your code and there’s an error due to this line:

    Ham_upper_received = [@SMatrix ComplexF64[Ham_upper_recv[i], Ham_upper_recv[i+1]; Ham_upper_recv[i+2], Ham_upper_recv[i+3]] for i in 1:4:length(Ham_upper_recv)]

The syntax [a, b; c, d] is not valid for creating a matrix:

julia> [1, 2; 3, 4]
ERROR: syntax: unexpected semicolon in array expression around REPL[1]:1
Stacktrace:
 [1] top-level scope
   @ REPL[1]:1

Maybe you meant something like

julia> [1 2; 3 4]
2×2 Matrix{Int64}:
 1  2
 3  4

? If I replace the line above with

    Ham_upper_received = [@SMatrix ComplexF64[Ham_upper_recv[i] Ham_upper_recv[i+1]; Ham_upper_recv[i+2] Ham_upper_recv[i+3]] for i in 1:4:length(Ham_upper_recv)]

then your program runs without errors. But I also don’t know whether that’s enough for you, since you didn’t say what you wanted help with in the first place.
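
A small aside on that reconstruction: if I read the layout right, vec(h) flattens each SMatrix column-major, so rebuilding with [v[i] v[i+1]; v[i+2] v[i+3]] lays the four numbers out row-major and each 2×2 block comes back transposed. An alternative that avoids the index bookkeeping (untested, and assuming the received buffer length is a multiple of 4) is to reinterpret the flat ComplexF64 buffer directly:

    # Reinterpret the flat ComplexF64 buffer as a vector of 2x2 static matrices;
    # collect turns the lazy ReinterpretArray into a plain Vector
    Ham_upper_received = collect(reinterpret(SMatrix{2,2,ComplexF64,4}, Ham_upper_recv))

This keeps the same column-major element order that vec used on the sending side.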

Side note, in

    dst = mod(myrank + 1, size)
    src = mod(myrank - 1, size)
    if src < 0
        src += size
    end

the if is completely useless: the output of mod always has the same sign as its second argument, so src is already non-negative whenever size is positive (which is what MPI.Comm_size(comm) should always give you). And if both src and size were somehow negative, adding size to src wouldn’t produce a non-negative number anyway. To summarise: delete the lines

    if src < 0
        src += size
    end
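
For reference, mod already wraps negative first arguments back into the range [0, size):

    julia> mod(-1, 4)
    3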
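One more thought, with the caveat that I haven’t tested it: since SMatrix{2,2,ComplexF64,4} is an isbits type, MPI.jl should be able to build an MPI datatype for it automatically, so you may be able to send the vectors of static matrices directly and skip the manual flattening and reconstruction. A minimal sketch of what I mean, reusing the names from your code:

    # Untested sketch: send/receive the SMatrix vectors directly,
    # relying on MPI.jl's automatic datatype handling for isbits element types
    Ham_upper_recv = similar(Ham_upper)   # Vector{SMatrix{2,2,ComplexF64,4}}
    rreq = MPI.Irecv!(Ham_upper_recv, comm, source=src, tag=src + 32)
    sreq = MPI.Isend(Ham_downer, comm, dest=dst, tag=myrank + 32)
    MPI.Waitall([rreq, sreq])

If that works with your MPI.jl version, it removes the need for the reduce(vcat, ...) step and the reconstruction entirely.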