This is arguably a slightly strange choice, maybe the only function in LinearAlgebra which regards arrays as a bag of numbers, ignoring their structure. And being recursive seems the wrong behaviour on e.g. vectors of matrices, to me (and wasn’t seriously discussed when it was added?). Nevertheless, it’s too late to change this, and it’s least confusing if the entire ecosystem agrees.
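A quick illustration of the recursive behaviour, assuming this is about norm: on a vector of vectors it takes the norm of the inner norms, which coincides with flattening for p = 2 but not for other p:

julia> using LinearAlgebra

julia> v = [[3.0, 4.0], [5.0, 12.0]];   # inner 2-norms are 5.0 and 13.0

julia> norm(v) ≈ norm([5.0, 13.0])      # recursive: the norm of the inner norms
true

julia> norm(v, 1)                       # 5 + 13, not 3 + 4 + 5 + 12
18.0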
That seems fine to me. It’s typed \boxdot<tab>\_2<tab>. We could add this to TensorCore.jl too… like the single contraction, it’s always reshape-then-*, i.e. it allows rand(2,3,4) ⊡₂ rand(3,4,5) but not rand(2,3,4) ⊡₂ rand(4,3,5):
julia> x, y = Tensorial.@Tensor(rand(2,3,4)), Tensorial.@Tensor(rand(3,4,5));

julia> Tensorial.contract2(x, y)
2×5 Mat{2, 5, Float64, 10}:
 2.38424  1.98448  4.23119  2.87129  2.06991
 2.73175  2.26099  4.28346  3.05636  2.02472

julia> ans ≈ reshape(x, 2,:) * reshape(y, :,5)
true
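For plain Arrays, a generic ⊡₂ in the same reshape-then-* spirit could look something like this. Just a sketch, and boxdot2 is a made-up name, not something TensorCore.jl currently has; it contracts the last two dimensions of the first argument against the first two of the second, matching the example above:

# Hypothetical two-index contraction for ordinary arrays, reshape-then-*:
function boxdot2(A::AbstractArray, B::AbstractArray)
    sa, sb = size(A), size(B)
    sa[end-1:end] == sb[1:2] || throw(DimensionMismatch("sizes $(sa[end-1:end]) ≠ $(sb[1:2])"))
    C = reshape(A, :, prod(sa[end-1:end])) * reshape(B, prod(sb[1:2]), :)
    reshape(C, sa[1:end-2]..., sb[3:end]...)
end

const ⊡₂ = boxdot2   # rand(2,3,4) ⊡₂ rand(3,4,5) gives a 2×5 Matrix

Written this way, rand(2,3,4) ⊡₂ rand(4,3,5) throws a DimensionMismatch rather than silently reshaping.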
Maybe worth noting that Enzyme appears to just work, by understanding that these are just tuples, without ever calling similar. DI seems able to use that:
julia> Base.similar(_::typeof(Vec(5.0,6.0)), ::Type{Float64}, ::Base.Dims) = error("similar not defined")

julia> Enzyme.gradient(Reverse, v -> dot(v, Mat{2,2}(1,2,3,4), v), Vec(5.0,6.0))[1]
2-element Vec{2, Float64}:
 40.0
 73.0

julia> DifferentiationInterface.gradient(v -> dot(v, Mat{2,2}(1,2,3,4), v), AutoEnzyme(), Vec(5.0,6.0))
2-element Vec{2, Float64}:
 40.0
 73.0
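For what it’s worth, those numbers agree with the analytic gradient of v ↦ dot(v, A, v), which is (A + A')v. Checking with a plain matrix of the same entries (the symmetric part is the same whether the Mat is read column- or row-major):

julia> M = [1.0 3.0; 2.0 4.0];   # same entries as Mat{2,2}(1,2,3,4)

julia> (M + M') * [5.0, 6.0]
2-element Vector{Float64}:
 40.0
 73.0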