Element-wise product of vector of vectors

I’m moving away from a 3xN matrix representing 3D positions in favour of vectors of static vectors, as suggested in this question, but now I’m a bit stuck with this elementwise product operation for each vector,

using LinearAlgebra
using StaticArrays
v = [SVector{3}([1 2 3]) for i in 1:4]
v.*v

this returns an error, as I understand it because it’s trying to do x * x for each static vector, rather than x .* x. What would be the right syntax?

1 Like

You could just write a loop. What is the context for this elementwise product? If you are doing other operations on the vectors too, it is better to do them all at the same time in a single loop, ideally in-place, rather than allocating a new array with a single multiplication operation per element.

Julia isn’t Matlab—loops are fast, and you don’t have to stretch to wrap everything in vectorized operations.

(That being said, there are many ways to do the operation you requested, e.g. with map((x,y) -> x.*y, x, y).)
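
For example, a minimal sketch of both versions applied to your v (the names w and w2 are just placeholders):

using StaticArrays

v = [SVector(1, 2, 3) for i in 1:4]

# loop version: fills a preallocated output, no intermediate arrays
w = similar(v)
for i in eachindex(v)
    w[i] = v[i] .* v[i]        # element-wise product within each 3-vector
end

# map version: allocates and returns a new vector of SVectors
w2 = map((x, y) -> x .* y, v, v)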

1 Like

Thanks! The full context is this operation,

function alpha_rescale(alphadiag::Vector{SVector{3,Complex{Float64}}},
                       sizes::Vector{SVector{3,Float64}})
    # sizes a, b, c are normalised by their sum
    # then alphadiag is rescaled by those factors
    map((x, y) -> x .* y, alphadiag, sizes ./ sum.(sizes))
end

# dummy test
sizes = [SVector{3}([1.0 2.0 3.0]) for i in 1:4]
alphadiag = sizes .+ 0.1im
alpha_rescale(alphadiag, sizes)

I’d like to use the dot fusion if possible, as in my original attempt alphadiag .* sizes ./ sum.(sizes), but I don’t see how, unfortunately.

My mild “reservation” against a for loop in this particular context is more for legibility; R has led me to favour map and related functional programming idioms, which I now find more readable when there’s no need to think about the underlying index (comprehensions do help keep the syntax concise, but there’s still a for loop cluttering my thinking when I mentally parse the expression, if that makes sense).

If there’s no alternative I agree that the for loop will probably be more readable,

[alphadiag[i] .* (sizes[i] / sum(sizes[i])) for i in 1:length(sizes)]

I guess another way to phrase the question is whether there is a different alias for .* that would allow,

ewmult = function (x, y) x .* y end
  
function alpha_rescale3(alphadiag::Vector{SVector{3,Complex{Float64}}},
                        sizes::Vector{SVector{3,Float64}})
                        
        ewmult.(alphadiag, sizes ./ sum.(sizes))
end

and perhaps with a binary infix operator syntax.
I assume this fused-dot syntax would be equivalent to the for loop (not that performance is actually important here – the arrays are very small, but more for my own understanding).

You can, in principle, write loops to do this without allocations (if the destination is already allocated), but with nested broadcasts I think it is trickier.

Here is one way to do it:

julia> let ⋆(x, y) = x .* y
           [[1], [2, 3]] .⋆ [[4], [5]]
       end
2-element Array{Array{Int64,1},1}:
 [4]
 [10, 15]

You can also define ⋆(x, y) globally if you need to use it everywhere.
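
For your rescaling function that could look roughly like this (a sketch; alpha_rescale2 is just a placeholder name):

⋆(x, y) = x .* y    # global definition of the element-wise product operator

alpha_rescale2(alphadiag, sizes) = alphadiag .⋆ (sizes ./ sum.(sizes))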

See also: Idiomatic way to write element-wise combination of array of arrays?

You can also use this generalized function for any level of broadcasting:

julia> function rebroadcast(f,n,args...)
           n>1 ? rebroadcast(broadcast,n-1,(f,args...)...) : broadcast(f,args...)
       end
rebroadcast (generic function with 1 method)

julia> rebroadcast(*,2,v,v)
4-element Array{SArray{Tuple{3},Int64,1,3},1}:
 [1, 4, 9]
 [1, 4, 9]
 [1, 4, 9]
 [1, 4, 9]

Then you lose the fusion of nested dot calls.

I suspect that you are focusing too narrowly on one line of your code, and that this is not the “full context”. Even your “dummy test” has two other loops and two other array allocations that could have been combined into your rescaling loop, and I’m guessing that your real application is also doing additional operations on the array.

Not only does this vectorized style allocate a lot of extra temporary arrays, it is both hard on the cache and imposes the overhead of loop iteration multiple times rather than once. See: More Dots: Syntactic Loop Fusion in Julia

In general, I would encourage you to move away from having a sequence of operations that do simple transformations on an array one-by-one, and instead try to perform a minimal number of passes over your data, doing as much work as possible in each pass.
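
As a concrete illustration (a sketch only; alpha_rescale! is a name I’m making up here), the rescaling above can be done in a single in-place pass, so the normalisation and the rescaling happen in the same loop and no temporaries are allocated:

function alpha_rescale!(out, alphadiag, sizes)
    for i in eachindex(out, alphadiag, sizes)
        out[i] = alphadiag[i] .* (sizes[i] / sum(sizes[i]))
    end
    return out
end

# overwriting alphadiag itself avoids even the output allocation,
# if the unscaled values are no longer needed:
# alpha_rescale!(alphadiag, alphadiag, sizes)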

3 Likes

You should probably avoid constructing your SVectors like this. Here you first construct a regular array, and then convert it to an SVector. Instead, input each element as its own argument:

SVector(1, 2, 3)

Then you don’t even need to supply type parameters, since the compiler knows it has three elements of type Int.
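
For instance, the dummy arrays from earlier could be built like this (just a sketch of the same shapes):

using StaticArrays

v     = [SVector(1, 2, 3) for i in 1:4]           # element type inferred as SVector{3,Int}
sizes = [SVector(1.0, 2.0, 3.0) for i in 1:4]     # element type inferred as SVector{3,Float64}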

I think it’s partly an artificial side-effect of posting questions in the form of minimal reproducible examples; if I posted the full code (currently 400 lines with a dozen functions^*) it’d take quite a while to explain and motivate every operation. The dummy example was dummy in the sense that both the creation of the arrays and the values they hold are nonsensical (they just have the right types).

Anyway, point taken, thanks, I’ll try to think more in terms of minimising the number of passes through large data, rather than my current mindset of decomposing the problem into simple functions chained together. In my defence it’s often easier to write and debug individual one-step operations, and here the arrays will always be small and the function is only called once in a while (the heavy burden is solving a large dense linear system, so this is rather negligible in comparison – I’m mostly trying to develop good habits).
I can sense that there’ll be a trade-off for me between the functional programming style that I’ve gotten used to, and the requirements of efficiency that will push me to revert to low-level C-like loops. That’s precisely why I’m trying to grasp the dots-fusion construct and see where I can apply it, because it’s a nice bridge between the two without compromise.

With respect to Vectors of SVectors or SMatrices, I’m struggling to construct them or pre-allocate them. When I specify an argument type such as AlphaBlocks::Vector{SMatrix{3,3,Float64}} in the function below,

function propagator_freespace_labframe!(A,
    kn::Float64, R::Vector{SVector{3,Float64}},
    AlphaBlocks)

    N = length(R)

    # nested for loop over N dipoles
    for jj in 1:N

        ind_jj = 3jj-2:3jj

        for kk in (jj+1):N

            ind_kk = 3kk-2:3kk

            rk_to_rj = R[jj] - R[kk]
            rjk = norm(rk_to_rj, 2)
            rjkhat = rk_to_rj / rjk
            rjkrjk =  rjkhat * transpose(rjkhat)

            Ajk::SMatrix = exp(im*kn*rjk) / rjk * (kn*kn*(rjkrjk - I) +
            (im*kn*rjk - 1.0) / (rjk*rjk) * (3*rjkrjk - I))

            # assign blocks
            A[ind_jj, ind_kk] = Ajk * AlphaBlocks[kk]
            A[ind_kk, ind_jj] = transpose(Ajk) * AlphaBlocks[jj]

        end
    end

    return A
end

I have trouble constructing the appropriate type, and Julia complains about not finding a matching method signature.

# initialisation with zeros
AlphaBlocks = [@SMatrix zeros(Complex{Float64},3,3) for ii=1:N_dip]

I believe it’s because the comprehension is generating an SArray rather than a Vector of SMatrices. How can I check that one is a subtype of the other? How should I go about initialising such constructs if a comprehension is a bad way to go?


^* The newly-updated code (with your suggestions, less Matlab-y) is running again so I’ll try to document it and post it on github – since you know computational electromagnetism inside-out, you’ll easily recognise what the code is doing.

Thanks; I have to say the StaticArray constructors are a bit mysterious to me (the syntax is kind of different to normal arrays); I was hoping the conversion from normal arrays occurred without overhead and it was just being “tagged” as Static, so to speak.

What you’re describing sounds like vectorized programming (calling a series of “vector” routines to make small changes to all of the elements of the array, usually producing a sequence of intermediate arrays), not functional programming per se. To me, it has nothing to do with high-level vs. low-level, it’s simply about the order of the loops. For example:

firststep = firstfunction.(X)
secondstep = secondfunction.(firststep)
Y = thirdfunction.(secondstep)

is no more high-level (and arguably less “functional”) than

Y = map(X) do x
    firststep = firstfunction(x)
    secondstep = secondfunction(firststep)
    thirdfunction(secondstep)
end

it’s just that the second version does a sequence of operations on each element of the array X rather than doing each operation to the entire array before proceeding to the next operation. For simple element-wise operations like this you can use dot fusion (Y = thirdfunction.(secondfunction.(firstfunction.(X)))) to accomplish the same thing in Julia, but for more complicated operations it can be more convenient to write a loop or use map.

This is also called temporal locality: do as much work as possible on a given bit of data before processing other data.

With Matlab or Numpy, writing code that uses a sequence of “built-in” operations that do simple transformations to an array one by one is heavily preferred, because only “built-in” vector operations are fast. Using this style in Julia is not worse than Matlab or Numpy, but it doesn’t take full advantage of the language, and you don’t want to restrict yourself artificially to writing this sort of code in Julia just because that is what you are used to.

Of course, for non-performance-critical code you should typically optimize for convenience, correctness, and generality rather than performance. But it’s good to get out of the habit of writing Matlab-style vectorized code or thinking of non-vectorized code as “low-level”. In particular, a function that simply rescales an array seems to me to be erring too much on the side of granularity.

3 Likes

I think you just want AlphaBlocks = zeros(SMatrix{3,3,ComplexF64}, N_dip)
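
As a side note on the method-signature error: Vector’s type parameter is invariant, and the comprehension produces the fully concrete element type SMatrix{3,3,Complex{Float64},9}, so the annotation Vector{SMatrix{3,3,Float64}} will never match. A quick sketch of how to check this:

using StaticArrays

T = typeof([@SMatrix zeros(Complex{Float64}, 3, 3) for ii = 1:2])

T <: Vector{SMatrix{3,3,Float64}}              # false: element type differs (and lacks the length parameter)
T <: Vector{<:SMatrix{3,3,Complex{Float64}}}   # true: a relaxed signature that does match

So loosening the argument annotation, e.g. AlphaBlocks::Vector{<:SMatrix{3,3}}, is one way to make the method accept it.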

Well, ordinary arrays have special syntax support because they are a core part of the language. It’s a bit unfair to fault a package for not having that. It seems to me that StaticArrays constructors have the same type of syntax as other types from any other package. Maybe they could use some sort of unicode brackets or something for nicer construction. Perhaps you have a suggestion?

That’s exactly what the macro constructors are for: @SVector, @SMatrix and @SArray.
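
For example (a quick sketch):

using StaticArrays

a = @SVector [1.0, 2.0, 3.0]                  # same as SVector(1.0, 2.0, 3.0)
B = @SMatrix [1 2 3; 4 5 6; 7 8 9]            # 3×3 SMatrix written with ordinary array syntax
Z = @SMatrix zeros(Complex{Float64}, 3, 3)    # 3×3 static matrix of complex zeros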

2 Likes