I am writing an iterative algorithm. The function below is part of the main function and performs a line search, shrinking the step size at each pass until the loss function decreases. All variables are appropriately initialized in the main function and the algorithm works well. Now, if I replace line (1) with line (2) or (3), which are commented out below, the overall algorithm converges more slowly. Some debugging shows that the computed loss somehow changes, and that this happens only when the `for i` loop does not always break at the first pass. I am puzzled. Any idea what is going on here?
```julia
function _linesearch()
    # Nested inside the main function: `lsmax`, `direction`, `B`, `𝐃`, `loss` and the
    # `₊` variables all live in (and are captured from) the enclosing scope.
    # `Hermitian`, `Diagonal`, `logabsdet`, `I` come from LinearAlgebra, `mean` from Statistics.
    for i = 1:lsmax
        M = (1.0/i * direction) + I
        B₊ = B * M
        𝐃₊ = [Hermitian(M'*D*M) for D ∈ 𝐃]                      # (1)
        # for j = 1:length(𝐃) 𝐃₊[j] = Hermitian(M'*𝐃[j]*M) end  # (2)
        # map!(D -> Hermitian(M'*D*M), 𝐃₊, 𝐃)                   # (3)
        loss₊ = -(logabsdet(B₊)[1]) + 0.5*sum(mean(log, [Diagonal(D) for D ∈ 𝐃₊]))
        loss₊ < loss && break
    end
    return 𝐃₊, B₊, loss₊
end
```
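One difference I can see between the three lines is that (1) binds `𝐃₊` to a freshly allocated array at every pass, whereas (2) and (3) write into whatever array `𝐃₊` is already bound to. A minimal sketch of that difference, with throwaway names that have nothing to do with the actual data:

```julia
# Throwaway example (A, A₊, M are made-up names): rebinding vs. writing in place
# when the destination happens to alias the source array.
using LinearAlgebra

A  = [Hermitian([2.0 0.0; 0.0 3.0]) for _ = 1:2]
A₊ = A                                    # alias: both names refer to the same array
M  = [1.0 0.5; 0.0 1.0]

# comprehension, as in (1): A₊ is rebound to a new array, A is left untouched
A₊ = [Hermitian(M'*D*M) for D ∈ A]
A[1] == Hermitian([2.0 0.0; 0.0 3.0])     # true

# map!, as in (3): the elements of the aliased array are overwritten, so A changes too
A  = [Hermitian([2.0 0.0; 0.0 3.0]) for _ = 1:2]
A₊ = A
map!(D -> Hermitian(M'*D*M), A₊, A)
A[1] == Hermitian([2.0 0.0; 0.0 3.0])     # false
```

In other words, (1) never touches the array it reads from, while (2) and (3) can, if the two bindings happen to point at the same object.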
PS: in the main function `_linesearch` is called as

```julia
𝐃, B, loss = _linesearch()
```

and `𝐃` is then used to find a new `direction`.
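If a self-contained reproduction of that calling pattern is useful, here is a toy version (the names `x`, `x₊`, `_inner`, `outer` are made up, not the real algorithm): after the call, the outer variable and the array the inner function last bound are one and the same object.

```julia
# Toy reproduction of the calling pattern (hypothetical names): an inner function
# rebinds x₊ and returns it, and the caller binds the result back to x.
function outer()
    x  = [1.0, 2.0]
    x₊ = [0.0, 0.0]

    function _inner()
        x₊ = 2 .* x          # rebinding, analogous to line (1)
        # x₊ .= 2 .* x       # in-place write, analogous to (2)/(3); it would also
        #                    # modify x whenever x₊ and x are the same array
        return x₊
    end

    x = _inner()             # like `𝐃, B, loss = _linesearch()`
    @show x === x₊           # true: x and x₊ now share one array
    return x
end

outer()
```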