I am converting an algorithm from Python to Julia that uses an eigendecomposition in the “hot loop” that updates the solution. The Python version uses SciPy's eigh function, which I have already learned needs to be translated as eigen(Symmetric(array, :L)) to get matching results.
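For reference, a minimal sketch of that one-shot equivalence check, using an arbitrary symmetric test matrix of my own choosing:

using LinearAlgebra
using PythonCall

sp = pyimport("scipy")
np = pyimport("numpy")

A = [4. 1 2;
     1 3 0;
     2 0 5]                      # arbitrary symmetric test matrix

F = eigen(Symmetric(A, :L))      # Julia, lower triangle
q = sp.linalg.eigh(np.array(A))  # SciPy's default is also lower=True

# q[0] = eigenvalues, q[1] = eigenvectors (Python indexing is 0-based)
@show maximum(abs.(F.values .- pyconvert(Vector{Float64}, q[0])))
@show maximum(abs.(F.vectors .- pyconvert(Matrix{Float64}, q[1])))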
I am doing the conversion in a very naive way: basically a line-by-line, trial-and-error translation, checking at each step that the results match. I thought I had everything correct, and running one iteration of the loop returned the same results in both languages, but the whole algorithm does not work in Julia.
I think I have narrowed it down to a divergence in the eigen results: for the first couple of iterations the decompositions are either identical or very close, but then they diverge so much that the Julia version does not converge at all (contrary to the Python one).
Here is an MWE using just eigen (I added a PythonCall version to see if this would change anything, but the results are the same as when calling directly from Python):
using LinearAlgebra
using PythonCall
sp = pyimport("scipy")
np = pyimport("numpy")
dat = [1. 2 3;
       4 5 6;
       7 8 9]
newdat = dat * dat'

# Julia side: decompose, shift the eigenvector matrix by 1, decompose again
z = eigen(Symmetric(newdat, :L))
for i in 1:10
    newz = z.vectors .+ 1
    global z = eigen(Symmetric(newz, :L))  # global, so the MWE also runs as a script
    display(z.vectors)
end
# Python side via PythonCall (q[0] = eigenvalues, q[1] = eigenvectors)
pynewdat = np.array(newdat)
q = sp.linalg.eigh(pynewdat)
for i in 1:10
    newq = q[1] + 1
    global q = sp.linalg.eigh(np.array(newq))
    display(q[1])
end
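To quantify where the two part ways, I also ran both recurrences in lockstep and printed the maximum elementwise difference between the eigenvector matrices at each step, roughly like this (zz and qq are just fresh names so the loops above stay untouched):

zz = eigen(Symmetric(newdat, :L))
qq = sp.linalg.eigh(np.array(newdat))
for i in 1:10
    global zz = eigen(Symmetric(zz.vectors .+ 1, :L))
    global qq = sp.linalg.eigh(np.array(qq[1] + 1))
    vpy = pyconvert(Matrix{Float64}, qq[1])
    # eigenvectors are only defined up to a sign, so a clean ±1 column flip
    # would also show up here as a large difference
    println("iter $i: max abs diff = ", maximum(abs.(zz.vectors .- vpy)))
end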
It diverges for me on the third iteration.
My common sense would dictate that the algorithm should work regardless of variations in the 1e-5 range, since both languages are calling the same LAPACK routine, right? Both represent numbers as Float64. So why the difference? Can it be mitigated somehow?
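One idea I have been wondering about, but have not verified, is whether pinning down the sign convention myself after each decomposition would remove the divergence, since the eigenvectors are only defined up to a factor of ±1. A sketch (fixsigns! is just a name I made up):

# flip each eigenvector so its largest-magnitude component is positive,
# giving both implementations the same deterministic sign convention
function fixsigns!(V::AbstractMatrix)
    for j in axes(V, 2)
        k = argmax(abs.(view(V, :, j)))
        if V[k, j] < 0
            V[:, j] .*= -1
        end
    end
    return V
end

z = eigen(Symmetric(newdat, :L))
for i in 1:10
    newz = fixsigns!(z.vectors) .+ 1
    global z = eigen(Symmetric(newz, :L))
end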