# Vectorization over subset of indices

I would like to vectorize some operation such as
a .= b .+ c
but only over a subset of indices. For example, if they are all 4D arrays, I would like to vectorize only over dimensions 1 and 3. Is this possible, perhaps with views or other methods? Thanks!

```julia
@views @. a[:,:,:,i] = b[:,:,:,i] + c[:,:,:,i]
```
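To vectorize over dimensions 1 and 3 specifically while looping over the others, the same `@views` idea can be applied slice by slice. A minimal sketch (the array names and sizes here are illustrative, not from the original post):

```julia
# Broadcast in place over dims 1 and 3 only, looping explicitly over dims 2 and 4.
a = zeros(4, 3, 5, 2)
b = rand(4, 3, 5, 2)
c = rand(4, 3, 5, 2)

for j in axes(a, 2), l in axes(a, 4)
    @views a[:, j, :, l] .= b[:, j, :, l] .+ c[:, j, :, l]
end

a == b .+ c  # true: elementwise, this matches the full broadcast
```

Each iteration fuses the broadcast over a 2D `4×5` view, so no temporaries are allocated.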

Yes, but that means you preallocated `a` and are then mutating it, so Zygote will not work. But you did answer the question. I am also looking at TensorCast for ideas. I have a feeling that my problem can be solved at the cost of many allocations, which seems to be a consequence of non-mutation.
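The mutation issue mentioned above can be seen in a minimal sketch (function names `f` and `g` are illustrative): Zygote differentiates the non-mutating broadcast fine, but writing into a preallocated array triggers its mutation error.

```julia
using Zygote

# Non-mutating version: Zygote can differentiate this.
f(b, c) = sum(b .+ c)

# Mutating version: preallocating `a` and writing into it triggers
# Zygote's "Mutating arrays is not supported" error.
function g(b, c)
    a = similar(b)
    a .= b .+ c
    return sum(a)
end

b, c = rand(3), rand(3)
Zygote.gradient(f, b, c)    # works; each gradient is all-ones
# Zygote.gradient(g, b, c)  # would throw a mutation error
```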

Look at using:


Thanks! Will take a look.

Do you sleep?


No


Here is my solution, @ChrisRackauckas, using `@tullio`, which looks great! I could not find where differentiation rules were defined in `ChainRules`, or perhaps they don't have to be, given only built-in derivatives are used?

```julia
using Zygote  # load before Tullio (see the Tullio docs)
using Tullio

function test(N, degree)
    x = rand(N)
    y = rand(N)

    # exponent pairs (i, j) with i + j <= degree
    NT = NamedTuple{(:x, :y), Tuple{Int64, Int64}}
    I = Vector{NT}()
    for i in 0:degree
        for j in 0:degree
            if i + j < (degree + 1)
                push!(I, (x = i, y = j))
            end
        end
    end

    coef = rand(length(I))

    # Evaluate the 2D polynomial sum_k c[k] * x^I[k].x * y^I[k].y at every point.
    # (The original post omitted the definition of `Poly`; this closure is one
    # plausible reconstruction.)
    px = [t.x for t in I]
    py = [t.y for t in I]
    poly = c -> @tullio s[n] := c[k] * x[n]^px[k] * y[n]^py[k]

    return poly, coef
end

N = 10_000
degree = 20
@time poly, coef = test(N, degree);
@time poly(coef)
loss = x -> sum(poly(x) .^ 2)
@time Zygote.gradient(loss, coef)  # gradient via Zygote
println(length(coef))
```

For `N=10_000` and degree 20 (231 terms in the polynomial), the gradient takes 0.02 sec on my MacBook Pro M1. Seems fast, but I do not have a sense of what is possible. The `test` function takes 0.0002 seconds. My own problem requires degree 4 and about 200 points, which gives a Zygote timing of 0.0001 seconds.

Yes, I know I should use the benchmarking tools; however, `@time`, when run 3-5 times in a row, is more than sufficient to provide an estimate, since I am interested only in orders of magnitude.

Cheers,

Rules are already defined for Tullio:
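A quick way to confirm this: Tullio writes its own symbolic gradients when an AD package is loaded first, so a `@tullio` kernel differentiates directly under Zygote with no extra `ChainRules` definitions on the user's side. A minimal check (function name `sumsq` is illustrative):

```julia
using Zygote   # load an AD package first so Tullio defines its rules
using Tullio

sumsq(x) = @tullio s := x[i]^2
Zygote.gradient(sumsq, [1.0, 2.0, 3.0])  # ([2.0, 4.0, 6.0],)
```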

After all my changes, I still get the mutation error when taking derivatives with Zygote.
More precisely,

```julia
function main(tstate::Lux.Training.TrainState, vjp::Lux.Training.AbstractVJP, data::Tuple, epochs::Int)
    # data = data .|> gpu
    # no batches?
    for epoch in 1:epochs
        # Try computing the gradient of the loss here
        println(data)
        @info epoch=epoch loss=loss  # NOTE: `loss` is not defined in this snippet as posted
    end
    return tstate
end
```

I get a mutation error when calling `Lux.Training.compute_gradients(tstate, grads)`.
So the question I have is: would you expect this to fail or succeed, similarly to a direct call to `Zygote.gradient(...)`? I ask because the latter call works. I will spend this afternoon on this.
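For reference, a sketch of what the epoch loop might look like with the VJP-style Training interface used in the snippet above. This is an assumption based on that interface, not the original code: `loss_fn` is an assumed user-supplied objective of the form `(model, ps, st, data) -> (loss, st, stats)`, and the argument order of `compute_gradients` should be checked against the installed Lux version.

```julia
using Lux

# Hypothetical training loop; `loss_fn` must be defined by the user.
function main(tstate::Lux.Training.TrainState, vjp::Lux.Training.AbstractVJP,
              data::Tuple, epochs::Int)
    for epoch in 1:epochs
        # compute_gradients returns (grads, loss, stats, updated_tstate)
        grads, loss, stats, tstate =
            Lux.Training.compute_gradients(vjp, loss_fn, data, tstate)
        tstate = Lux.Training.apply_gradients(tstate, grads)
        @info "training" epoch loss
    end
    return tstate
end
```

If `Zygote.gradient(...)` on the bare loss works but this loop errors, the mutation is likely introduced inside `loss_fn` or the model's stateful layers rather than by the loop itself.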

Cheers,