From my vague knowledge of Python, I think the a there will actually be a solid array, not an array of arrays (i.e. something more like Julia's Matrix than like Vector{Vector}). This will be more efficient than making arrays of small arrays, unless (as suggested) you use StaticArrays.
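(For contrast, here is a minimal sketch of that StaticArrays route, with the same points stored as a Vector of SVectors; the names asv/bsv are just for illustration, and norm needs LinearAlgebra. The solid-Matrix version is worked through below.)
julia> using StaticArrays, LinearAlgebra
julia> asv = [SVector(0,0), SVector(0,1), SVector(0,1)];
julia> bsv = [SVector(0,0), SVector(0,1), SVector(1,0), SVector(1,0)];
julia> [norm(x - y) for x in asv, y in bsv]  # 3×4 Matrix{Float64} of pairwise distances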
julia> a = [[0,0], [0,1], [0,1]];
julia> b = [[0,0], [0,1], [1,0], [1,0]];
julia> as = reduce(hcat, a); bs = reduce(hcat, b)
2×4 Matrix{Int64}:
 0  0  1  1
 0  1  0  0
julia> sqrt.(dropdims(sum((as .- reshape(bs, size(bs,1), 1, size(bs,2))).^2, dims=1), dims=1))
3×4 Matrix{Float64}:
 0.0  1.0  1.0      1.0
 1.0  0.0  1.41421  1.41421
 1.0  0.0  1.41421  1.41421
# other tricks for how to write this:
julia> const newaxis = [CartesianIndex{0}()];
julia> view(bs, :, newaxis, :) == reshape(bs, size(bs,1), 1, size(bs,2))
true
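The same distance matrix can then be written with the view in place of reshape, e.g. (a sketch, same result as above):
julia> sqrt.(dropdims(sum((as .- view(bs, :, newaxis, :)).^2, dims=1), dims=1))  # same 3×4 result, no reshape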
julia> using TensorCast
julia> sqrt.(@reduce tmp[i,j] := sum(x) (as[x,i] - bs[x,j])^2)
3×4 Matrix{Float64}:
0.0 1.0 1.0 1.0
1.0 0.0 1.41421 1.41421
1.0 0.0 1.41421 1.41421
Is this true? (I mean as a real question, not snark!) Julia's broadcasting will always (at present) materialise the big array before reducing, i.e. the 2×3×4 Array{Int64, 3} which is the argument of sum. But things like reshape or view(bs, :, newaxis, :) will not make a copy.
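For comparison, a minimal sketch of an explicit loop that computes the same 3×4 matrix without ever building the 2×3×4 intermediate (just to illustrate the point, not a claim about speed; pairdists is a made-up name):
julia> function pairdists(as, bs)
           out = zeros(size(as, 2), size(bs, 2))
           for j in axes(bs, 2), i in axes(as, 2)
               s = 0.0
               for x in axes(as, 1)
                   s += (as[x, i] - bs[x, j])^2   # accumulate squared differences
               end
               out[i, j] = sqrt(s)
           end
           return out
       end;
julia> pairdists(as, bs)  # same 3×4 Matrix{Float64} as above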