Broadcasting zip

Say I have one row vector x with n elements in it, and one column vector y with m elements in it.
If I want to zip the broadcast of these two vectors, zip(x .+ 0*y, 0*x .+ y), can I do it without generating these temporary m × n arrays?

Can I create z without creating tempx and tempy:

x = rand(3)'          # 1×3 row vector
y = rand(5)           # 5-element column vector
tempx = x .+ 0*y      # 5×3 temporary
tempy = 0*x .+ y      # 5×3 temporary
z = zip(tempx, tempy)
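
One lazy possibility (a minimal sketch, assuming the goal is the same pairs in the same column-major order as the broadcasted arrays) is a two-argument generator, which never materializes the m × n temporaries:

x = rand(3)'   # 1×3 row vector
y = rand(5)    # 5-element column vector

# lazy m × n collection whose (i, j) element is (x[j], y[i]) -- the same pairs
# zip(x .+ 0*y, 0*x .+ y) produces, but without building tempx and tempy
z_lazy = ((xj, yi) for yi in y, xj in x)

all(a == b for (a, b) in zip(z_lazy, zip(x .+ 0*y, 0*x .+ y)))  # true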
julia> x = randn(10);

julia> y = randn(10);

julia> z1 = zip(x .+ 0*y, 0*x .+ y);

julia> z2 = ((x + 0*y, 0*x + y) for (x,y) in zip(x, y));

julia> collect(z1) == collect(z2)
true

See the MWE above; I think one of us might have misunderstood something.

x = rand(3)';
y = rand(5);
z = broadcast((x,y)->(x,y), x, y);   # 5×3 matrix of (x[j], y[i]) pairs
z2 = zip(z);

Sorry, I missed the '. However, I still don’t get the example, because I don’t understand why you are not using just

tuple.(x, y)
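
For the row-vector/column-vector case from the MWE, that broadcast gives one matrix of pairs directly (a minimal sketch; no x .+ 0*y or 0*x .+ y temporaries are built):

x = rand(3)'   # 1×3 row vector
y = rand(5)    # 5-element column vector

pairs = tuple.(x, y)          # 5×3 Matrix of (x[j], y[i]) tuples
pairs[2, 3] == (x[3], y[2])   # true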

Well, aren’t you fancy! :grin:

OK, I’ll come clean, I have 3 things:

  1. a matrix of weights
  2. a range indicating the rows of that matrix
  3. a range indicating the columns of that matrix

I want to calculate the weighted mean of the locations, and I thought I’d do it in the most efficient way possible. The ideas here lead to this form:

rows = 3:5 # in reality the locations of the rows can also be `linspace(Float64, Float64, n)` 
cols = -2:3
w = rand(length(rows), length(cols)) # I don't actually generate this matrix like that
S = sum(w)
r = sum(w[i]*rows[i[1]] for i in CartesianRange(indices(w)))/S  # weighted mean row location
c = sum(w[i]*cols[i[2]] for i in CartesianRange(indices(w)))/S  # weighted mean column location

These kinds of calculations should

  1. be very fast anyway for small matrices,
  2. depend on the memory layout for large matrices.

If this is not a bottleneck worth optimizing (my standard assumption these days), I would go for readability. I think that StatsBase already has a method for weighted means, which may work for you. Otherwise clever use of broadcasting gives you one-liners.
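
For reference, the broadcasting one-liners could look like this (a sketch, assuming w has size (length(rows), length(cols)) as above):

rows = 3:5
cols = -2:3
w = rand(length(rows), length(cols))

# `rows` broadcasts down the first dimension and `cols'` across the second,
# so each weight w[i, j] is paired with its row/column location
r = sum(w .* rows)  / sum(w)   # weighted mean row location
c = sum(w .* cols') / sum(w)   # weighted mean column location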

If this is a bottleneck, I would consider transposing for efficient traversal of rows, and benchmark.

Thanks for the perspective. I’ve been suffering from an acute case of premature over-optimization lately…