ANN: TensorSlice.jl → TensorCast.jl

This package is now registered under the name TensorCast.jl, because it suffered further scope creep and now includes arbitrary broadcasting.
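For instance, `@cast` writes broadcasting in index notation. A minimal sketch (the arrays and index names here are mine, not from the announcement):

```julia
using TensorCast

A = rand(3); B = rand(4);

@cast C[i,j] := A[i] * B[j]   # outer product: C[i,j] = A[i] * B[j]

C ≈ A .* B'                   # the equivalent plain broadcast
```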

You can do things like this; the trailing word `lazy` means it uses LazyArrays.jl to avoid materialising the whole 3-dimensional RHS:

```julia
using TensorCast, Statistics

x = randn(100); m = rand(1:99, 100);

@reduce y[k] := mean(i,j)  m[k] * exp(-(x[i]-x[j])^2)  lazy
```
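For comparison, here is a plain-broadcast version which does materialise the full 100×100×100 array; this is just my sketch of what the reduction computes, not the macro's actual expansion:

```julia
# entry [i,j,k] is m[k] * exp(-(x[i]-x[j])^2); averaging over i and j leaves k
y2 = dropdims(mean(reshape(m,1,1,:) .* exp.(-(x .- x').^2); dims=(1,2)); dims=(1,2))
y ≈ y2
```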

And, à la xiaodai's question, here's an efficient way to stack 60k matrices, broadcast an operation over them, and re-slice the result into 60 batches of 1000:

```julia
using Flux, TensorCast

imgs = Flux.Data.MNIST.images();  # 60_000 matrices, each 28×28

@cast batches[b][i,j,n] |= 1 - float(imgs[n\b][i,j])  n:1000;
```

The stacking operation uses the optimised `reduce(hcat, ...)`, plus appropriate reshaping. Using `|=` instead of `:=` means (by another abuse of notation) that the sliced batches are copied `Array`s, not views.
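Written out by hand, that strategy looks roughly like this (a sketch under the same assumptions as above, not the macro's exact expansion):

```julia
stacked  = reshape(reduce(hcat, vec.(imgs)), 28, 28, :)         # one 28×28×60000 array
whole    = 1 .- float.(stacked)                                 # broadcast over everything at once
batches2 = [whole[:, :, (b-1)*1000 .+ (1:1000)] for b in 1:60]  # slicing with ranges copies, giving 60 Arrays
```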
