How to convert a vector of vectors with different lengths to a matrix

a
3-element Vector{Vector{Float64}}:
[1.0]
[1.0, -1.0]
[1.0, -2.0, 0.5]

I want to obtain

[1.0  0.0  0.0;
 1.0 -1.0  0.0;
 1.0 -2.0  0.5]

How can I do this?

function stack_ragged(vs)
    # widest vector determines the number of columns; element types are promoted
    ncol, T = mapreduce(v -> (length(v), eltype(v)),
                        ((n1, T1), (n2, T2)) -> (max(n1, n2), promote_type(T1, T2)),
                        vs)
    nrow = length(vs)
    result = zeros(T, nrow, ncol)               # zero-padded output
    for (row, v) in zip(1:nrow, vs)
        @view(result[row, 1:length(v)]) .= v    # copy each vector into its row
    end
    result
end

vs = [[1.0], [1.0, -1.0], [1.0, -2.0, 0.5]]

stack_ragged(vs)

This would also work:

julia> A = [[1.0],
       [1.0, -1.0],
       [1.0, -2.0, 0.5]];

julia> [get(A[i],j,0.0) for i=1:length(A), j=1:maximum(length.(A))]
3×3 Matrix{Float64}:
 1.0   0.0  0.0
 1.0  -1.0  0.0
 1.0  -2.0  0.5

The 0.0 can be replaced by zero(eltype(A[1])) to keep it type-generic.
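For example, a sketch of a fully generic variant, where the promote_type over all inner vectors is my addition (mirroring the stack_ragged answer above):

T = promote_type(map(eltype, A)...)
[get(A[i], j, zero(T)) for i = 1:length(A), j = 1:maximum(length.(A))]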


Thank you.

Actually, I have
a =
[1.0]
[1.0, -1.0]
[1.0, -2.0, 0.5]

u = [u11 u12 u13;
     u21 u22 u23;
     u31 u32 u33]

I want to calculate

result = [ a11 * u11 , a11 * u12, a11 * u13;
a21 * u11 + a22 * u21 , a21 * u12 + a22 * u22, a21 * u13 + a22 * u23;
…]

How can I do this?

In other words, you want the product of a lower-triangular matrix with a dense matrix. You could store A in a LowerTriangular matrix data structure, and U in an ordinary matrix, and then just do A * U.
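For example, a minimal sketch reusing the stack_ragged helper from the first answer to zero-pad a (the random u here is just a stand-in for your dense matrix):

using LinearAlgebra

a = [[1.0], [1.0, -1.0], [1.0, -2.0, 0.5]]
A = LowerTriangular(stack_ragged(a))  # lower-triangular wrapper around the padded matrix
u = rand(3, 3)                        # any dense 3×3 matrix
result = A * u                        # the product described above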

Your code result = [ ... ] is also perfectly fine, of course, if you replace a11 with a[1][1] and u11 with u[1,1] etcetera, but then it only works for the 3x3 case.
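For example, a size-generic sketch of those explicit sums as a comprehension (my own generalization, assuming a and u are as above):

result = [sum(a[i][k] * u[k, j] for k = 1:length(a[i]))
          for i = 1:length(a), j = 1:size(u, 2)]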

Of course, if you are only concerned with the 3x3 case, and need to do this product a huge number of times and performance is really critical, you could do better by inlining the 3x3 calculation using something like StaticArrays.jl plus some extra specialization for static lower-triangular matrices, but I would only go to a lot of trouble like that if profiling shows it really matters to you. (After all, you can just store your A as an ordinary 3x3 SMatrix and it will be within a factor of 2 of optimal.)
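For example, a rough sketch with StaticArrays.jl, using the padded 3×3 values of a from above (whether this actually pays off depends on your profiling):

using StaticArrays

As = @SMatrix [1.0  0.0  0.0;
               1.0 -1.0  0.0;
               1.0 -2.0  0.5]
Us = @SMatrix rand(3, 3)
As * Us   # fully stack-allocated 3×3 product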

But it’s not clear what you are going for, here. e.g. why are you storing a lower-triangular matrix as a vector-of-vectors in the first place instead of using a more convenient matrix data type? Are you optimizing for convenience, speed, storage, or what, and why?


Thank you so much. Yes, I know it is not very efficient (I will use it in an optimization), but your answer is enough for me for now. Thank you.

Just for variety.

AB=zeros(Float64, axes(B))
for j in eachindex(A)
	for i in eachindex(A[j])
		AB[j, :] .+= A[j][i] .* B[i,:]
	end
end

Some comparisons between a few variants:

using BenchmarkTools

A = [[1.0],
     [1.0, -1.0],
     [1.0, -2.0, 0.5]]

B = rand(-3:5, 3, 5000)

function forloop(A,B)
	AB=zeros(Float64, axes(B))
	for j in eachindex(A)
		for i in eachindex(A[j])
			AB[j,:] .+= (@view  A[j][i]) .* (@view B[i,:])
		end
	end
	AB
end


function forloop1(A,B)
	AB=zeros(Float64, axes(B))
	for j in eachindex(A)
		for i in eachindex(A[j])
			for k in axes(B,2)
				AB[j,k] += A[j][i] * B[i,k]   # scalar elements, so no views needed
			end
		end
	end
	AB
end

function forloop2(A,B)
	AB=zeros(Float64, axes(B))
	for j in eachindex(A)
		for k in axes(B,2)
			for i in eachindex(A[j])
				AB[j,k] += A[j][i] * B[i,k]
			end
		end
	end
	AB
end

julia> @btime forloop(A,B)
  51.900 μs (14 allocations: 351.89 KiB)
3×5000 Matrix{Float64}:
 -2.0  -2.0  -1.0   2.0  …  -3.0   4.0  -1.0  -3.0     
  0.0   1.0  -5.0  -2.0      0.0   7.0   0.0  -6.0     
  3.0   5.0  -8.5  -6.0      5.0  10.0   3.0  -6.5     

julia> @btime forloop1(A,B)
  20.100 μs (2 allocations: 117.23 KiB)
3×5000 Matrix{Float64}:
 -2.0  -2.0  -1.0   2.0  …  -3.0   4.0  -1.0  -3.0
  0.0   1.0  -5.0  -2.0      0.0   7.0   0.0  -6.0     
  3.0   5.0  -8.5  -6.0      5.0  10.0   3.0  -6.5     

julia> @btime forloop2(A,B)
  51.700 μs (2 allocations: 117.23 KiB)
3×5000 Matrix{Float64}:
 -2.0  -2.0  -1.0   2.0  …  -3.0   4.0  -1.0  -3.0
  0.0   1.0  -5.0  -2.0      0.0   7.0   0.0  -6.0     
  3.0   5.0  -8.5  -6.0      5.0  10.0   3.0  -6.5     

Notice that forloop2() allocates less than forloop() but performs the same; forloop1() allocates like forloop2() but performs much better.
I have only a vague idea why these results come out this way. Maybe someone has the patience to explain what is going on.