I am pleased to announce that I’ve just registered v2.0 of TensorOperations.jl. New features include

  • an ncon function and @ncon macro for tensor network contractions, using an interface that is familiar to a large fraction of the quantum tensor network community. Unlike the existing @tensor macro and friends, it processes the tensor contraction pattern at runtime, so that it can cope with dynamic networks or index specifications. The downside is that this breaks type inference, and thus results in type-unstable code and slightly lower performance. More information can be found in the docs;
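As a minimal sketch of the ncon convention (negative labels denote open indices, positive labels are summed over; the arrays here are just illustrative):

```julia
using TensorOperations

A = randn(2, 3)
B = randn(3, 2)

# Contract the shared index (label 1) between A and B at runtime;
# -1 and -2 become the first and second index of the result.
C = ncon([A, B], [[-1, 1], [1, -2]])

# The equivalent compile-time contraction with @tensor:
@tensor D[a, b] := A[a, c] * B[c, b]

C ≈ A * B ≈ D  # all three agree
```

Because ncon receives the network as plain data, the list of tensors and index labels can be constructed programmatically, at the cost of the type stability that @tensor enjoys.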

  • support for CuArray objects, by relying on NVidia’s cuTENSOR library, i.e. the functionality that previously existed in the experimental package CuTensorOperations.jl. As NVidia has just released the official version 1.0 of cuTENSOR, and the required update to CuArrays.jl is currently a work in progress, this functionality will probably require a corresponding update of TensorOperations.jl in the near future;
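In practice, the GPU path is meant to be transparent: the same @tensor expression dispatches to cuTENSOR when the arguments are CuArray objects. A hedged sketch (assumes an NVIDIA GPU with the cuTENSOR library installed; package names may shift as the CuArrays.jl update lands):

```julia
using TensorOperations, CuArrays  # requires a CUDA-capable GPU and cuTENSOR

A = CuArray(randn(Float32, 100, 100))
B = CuArray(randn(Float32, 100, 100))

# Identical syntax to the CPU case; the contraction runs on the GPU.
@tensor C[a, c] := A[a, b] * B[b, c]
```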

  • under the hood: completely rewritten macro processing, making it easier to modify the existing @tensor macro or create new @tensor-like macros.