Thanks for your replies!
At the risk of posting even more code, I defined a single function with everything in it:
function main()
    M = 6
    N = 6
    L = 10
    U = 1
    KX = range(0, π / N, ceil(Int, L / 2))
    KY = range((-π + 2π / L) / M, π / M, L)
    Hp = Matrix{Matrix{ComplexF64}}(undef, length(KX), length(KY))
    Hm = Matrix{Matrix{ComplexF64}}(undef, length(KX), length(KY))
    for iky in 1:length(KY), ikx in 1:length(KX)
        Hp[ikx, iky] = Hermitian(rand(ComplexF64, 4M * N, 4M * N))
        Hm[ikx, iky] = Hermitian(rand(ComplexF64, 4M * N, 4M * N))
    end
    Delta = rand(144, 144)
    deltanew = zeros(M * N)
    for iky in 1:length(KY), ikx in 1:length(KX) # KX and KY are the ranges defined above
        HBdG = [Hp[ikx, iky] Delta; Delta' -conj(Hm[ikx, iky])]
        #= This builds the matrix to be diagonalized from Hp and Hm, which are arrays of
        144 x 144 matrices, and Delta, which is a 144 x 144 matrix. All of these matrices
        are complex-valued. =#
        time = @elapsed((Ep, Up) = eigen(HBdG)) # I time the diagonalization.
        println(time) # The timing result is printed; see post 8 in this thread for the numbers.
        #= The rest is just number-crunching the eigenvalues and eigenvectors. I assume it's
        not important for this discussion, but I include it anyway. =#
        Up11 = @view Up[1:4*M*N, 1:4*M*N] # M and N are the Int64s defined above.
        Up21 = @view Up[4*M*N+1:8*M*N, 1:4*M*N]
        Up12 = @view Up[1:4*M*N, 4*M*N+1:8*M*N]
        Up22 = @view Up[4*M*N+1:8*M*N, 4*M*N+1:8*M*N]
        Um = conj([Up22 Up21; Up12 Up22])
        Etop = @view Ep[4*M*N+1:8*M*N]
        Ebott = @view Ep[1:4*M*N]
        Em = -[Etop; Ebott]
        Gp = transpose(Up * (diagm(Ep) .< 0) * Up')
        Gm = transpose(Um * (diagm(Em) .< 0) * Um')
        deltanew += U / L^2 * (diag(Gp[5*M*N+1:6*M*N, 1:M*N]) .+ diag(Gm[5*M*N+1:6*M*N, 1:M*N])) # U and L are defined at the top of main().
    end
end
and I ran
using LinearAlgebra
using MKL
using BenchmarkTools
using Statistics
main()
which gave eigen times basically identical to the example above, about 0.025 s or so. Have I made any mistakes?
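In case it is relevant: so far I have only timed single calls with @elapsed. Here is roughly how I would isolate the eigensolver with BenchmarkTools instead (the random Hermitian blocks are just placeholders at the same 288 x 288 size as in main(), not my actual Hamiltonian):

using LinearAlgebra, MKL, BenchmarkTools

# Build one representative BdG-sized matrix (288 x 288, as in main() above).
n = 144
Hp = Matrix(Hermitian(rand(ComplexF64, n, n)))
Hm = Matrix(Hermitian(rand(ComplexF64, n, n)))
Delta = rand(n, n)
HBdG = [Hp Delta; Delta' -conj(Hm)]

# @btime runs the call many times and reports the minimum, which should filter out
# compilation and other one-off noise that a single @elapsed measurement can pick up.
@btime eigen($HBdG);

If the @btime numbers agree with the @elapsed ones, then the ~0.025 s really is the LAPACK call itself rather than overhead.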
If I may attempt a summary: for repeatedly diagonalizing generic matrices, which essentially comes down to calling LAPACK/BLAS, MATLAB is faster. For more "interesting" problems, however, Julia has an edge, since it handles the many surrounding operations faster. ("Interesting" does not necessarily mean more realistic, but this example was not contrived; indeed, many physicists use a method like this to solve self-consistent mean-field theories.) After playing around, benchmarking things here and there (aside from the eigensolver), and comparing with MATLAB, that does seem to be the case. I will continue with Julia and see whether I can use it to solve something more involved.
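For what it's worth, the structure I have in mind for the more involved version is the usual self-consistency loop wrapped around the k-space sweep in main(). A rough sketch only, with hypothetical names (update stands in for one full sweep that maps delta to deltanew; the tolerance and mixing values are arbitrary):

function selfconsistent(update, delta0; tol = 1e-6, maxiter = 200, mixing = 0.5)
    delta = copy(delta0)
    for iter in 1:maxiter
        deltanew = update(delta)                              # one full sweep over (kx, ky)
        maximum(abs.(deltanew .- delta)) < tol && return deltanew
        delta = mixing .* deltanew .+ (1 - mixing) .* delta   # simple linear mixing
    end
    return delta   # not converged within maxiter; return the last iterate
end

# Toy usage just to show the call pattern (the real update would rebuild HBdG
# from delta and rediagonalize at every k point):
selfconsistent(d -> 0.9 .* d .+ 0.01, zeros(36))

Plain linear mixing is only the first thing to try; a stubborn gap equation may need something better, but that is a separate question.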