I'm not sure whether that can be simplified further. But that line does the following.
With:
julia> X = Float64[ 1 2 ; 3 4 ]
2×2 Array{Float64,2}:
1.0 2.0
3.0 4.0
julia> using LinearAlgebra
julia> n = 2
2
Your line does:
julia> K = [( i >= j ? dot(view(X,:,i), view(X,:,j)) : 0.0 )::Float64 for i=1:n, j=1:n]
2×2 Array{Float64,2}:
10.0 0.0
14.0 20.0
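As an aside, the `::Float64` assertion in the comprehension is redundant here: both branches of the ternary already return a `Float64` (`dot` on `Float64` vectors, and the literal `0.0`), so the element type is inferred without it. A minimal check:

```julia
using LinearAlgebra

X = Float64[1 2; 3 4]
n = 2

# Same comprehension without the type assertion; the eltype is still
# inferred as Float64 from the two branches of the ternary.
K = [i >= j ? dot(view(X, :, i), view(X, :, j)) : 0.0 for i in 1:n, j in 1:n]
eltype(K)  # Float64
```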
and that can be written more explicitly as:
julia> K = Matrix{Float64}(undef, 2, 2) # allocate an uninitialized 2×2 matrix (entries are garbage until assigned)
2×2 Array{Float64,2}:
2.5e-323 6.92624e-310
5.0e-324 6.92624e-310
julia> for i in 1:2
           for j in 1:2
               if i >= j
                   K[i,j] = dot(view(X,:,i), view(X,:,j))
               else
                   K[i,j] = 0.0
               end
           end
       end
julia> K
2×2 Array{Float64,2}:
10.0 0.0
14.0 20.0
If you wrap that into a function, for instance:
julia> function computeK(X, n)
           K = Matrix{Float64}(undef, n, n)
           for i in 1:n
               for j in 1:n
                   if i >= j
                       K[i,j] = dot(view(X,:,i), view(X,:,j))
                   else
                       K[i,j] = 0.0
                   end
               end
           end
           return K
       end
computeK (generic function with 1 method)
julia> computeK(X,n)
2×2 Array{Float64,2}:
10.0 0.0
14.0 20.0
That is no worse than the “one-liner”:
julia> f(X,n) = [( i >= j ? dot(view(X,:,i), view(X,:,j)) : 0.0 )::Float64 for i=1:n, j=1:n]
f (generic function with 1 method)
julia> @btime f($X,$n)
75.759 ns (1 allocation: 112 bytes)
2×2 Array{Float64,2}:
10.0 0.0
14.0 20.0
julia> @btime computeK($X,$n)
58.127 ns (1 allocation: 112 bytes)
2×2 Array{Float64,2}:
10.0 0.0
14.0 20.0
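As a side note (not part of your original line): since `(X' * X)[i, j] == dot(X[:, i], X[:, j])`, this particular `K` can also be obtained from a single matrix product by keeping the lower triangle with `tril`:

```julia
using LinearAlgebra

X = Float64[1 2; 3 4]

# X' * X is the Gram matrix of the columns of X; tril zeroes the
# strict upper triangle, reproducing K in one expression.
K = tril(X' * X)
```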
Actually, here, the explicit loop was even faster. This is an important difference relative to Python: you are not constrained to writing vectorized code all the time to get good performance.
Concerning the collect() of Python: if that is for garbage collection (i.e. Python's gc.collect(), as I could find on Google), the closest equivalent is GC.gc(), but one rarely calls it explicitly if the code is written carefully to avoid unnecessary allocations.
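On that last point about avoiding unnecessary allocations, one common pattern is an in-place variant that fills a preallocated matrix (the name `computeK!` below is just a sketch, following the Julia `!` convention for mutating functions), so the only allocation is the one-time creation of `K`:

```julia
using LinearAlgebra

# Sketch of an in-place variant: mutates the preallocated K instead
# of creating a new matrix on every call.
function computeK!(K, X, n)
    for i in 1:n, j in 1:n
        K[i, j] = i >= j ? dot(view(X, :, i), view(X, :, j)) : 0.0
    end
    return K
end

X = Float64[1 2; 3 4]
K = Matrix{Float64}(undef, 2, 2)  # allocate once, reuse across calls
computeK!(K, X, 2)
```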