TensorOperations.jl v4

Together with @lkdvos and @maarten_van_damme, I am pleased to announce version 4 of TensorOperations.jl.

For users, the main functionality provided by @tensor remains mostly unchanged, but there are quite a few exciting (though breaking) new features, and the code has undergone a major restructuring and rewrite. The most exciting new feature for users is the built-in presence of AD rules (as a package extension based on ChainRulesCore.jl). The CuArray support has also been moved to a package extension, so that CUDA.jl (or rather cuTENSOR.jl) is now only a weak dependency (on Julia >= 1.9).
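As a minimal sketch of what this enables, here Zygote.jl is used as one ChainRulesCore-compatible AD engine (the function f and the test matrices are just for this illustration):

```julia
using TensorOperations, Zygote

# f(A, B) = tr(A * B), written as a full contraction to a scalar
function f(A, B)
    @tensor s = A[a, b] * B[b, a]
    return s
end

A, B = randn(3, 3), randn(3, 3)
gA, gB = Zygote.gradient(f, A, B)  # gA ≈ transpose(B), gB ≈ transpose(A)
```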

The @tensor macro now accepts a set of keyword arguments that help with debugging the otherwise cryptic error messages that result from applying it to invalid tensor operations.
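For example, a small sketch assuming the contractcheck keyword described in the v4 documentation, which inserts runtime dimension checks so that a mismatch produces an informative error naming the offending index:

```julia
using TensorOperations

A = randn(2, 3)
B = randn(4, 5)  # the shared index b has size 3 in A but 4 in B

# Without contractcheck this fails somewhere inside the generated code;
# with it, the error message points at the mismatched index directly.
@tensor contractcheck = true C[a, c] := A[a, b] * B[b, c]
```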

Another exciting new feature, also exposed through keyword arguments, is a general mechanism for selecting and implementing different backends. This makes it easy to select and test different implementation strategies, such as [TensorOperationsTBLIS.jl](https://github.com/lkdvos/TensorOperationsTBLIS.jl), a TBLIS wrapper for TensorOperations.jl that will be registered soon. Other implementations based on e.g. LoopVectorization.jl, Gajus.jl, KernelAbstractions.jl, GemmKernels.jl, … can be added in time.
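As a sketch of how backend selection could look (the `tblis` backend identifier is an assumption here; check the TensorOperationsTBLIS.jl documentation for the exact name):

```julia
using TensorOperations
using TensorOperationsTBLIS  # provides a TBLIS-based backend

A, B = randn(64, 64), randn(64, 64)

# Select a non-default backend through the macro's keyword arguments.
@tensor backend = tblis C[a, c] := A[a, b] * B[b, c]
```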

We have removed the cache for temporary objects, due to its inconsistent performance, especially in multithreaded environments. In its place, we expose a general allocation mechanism that facilitates experimenting with new strategies for managing the allocation (and freeing) of temporary objects in a calculation.
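A hedged sketch of these hooks, using the tensoralloc/tensorfree! names from the v4 documentation (exact signatures may differ in detail):

```julia
using TensorOperations

# Temporaries inside @tensor are requested and released through these hooks,
# so a custom allocation strategy can be plugged in by extending them.
tmp = TensorOperations.tensoralloc(Array{Float64,2}, (4, 4), true)  # istemp = true
fill!(tmp, 0)                      # ... use the temporary ...
TensorOperations.tensorfree!(tmp)  # hook where a custom strategy could recycle it
```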

For developers of array (or more general tensor-type) packages, we have modified and clearly documented the interface that needs to be implemented in order to use custom tensor types with TensorOperations.jl. Overall, the documentation has been updated and extended.
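For reference, a hedged overview of the core methods that interface comprises (names as in the v4 docs; argument lists elided here, see the interface documentation for the exact signatures):

```julia
# Core mutating operations:
#   TensorOperations.tensoradd!       # C ← β*C + α*op(A), with permutation
#   TensorOperations.tensortrace!     # partial traces over repeated indices
#   TensorOperations.tensorcontract!  # pairwise contraction of two tensors
# Supporting methods:
#   TensorOperations.tensorscalar     # extract the value of a rank-zero result
#   TensorOperations.tensoralloc      # allocate (temporary) output tensors
```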

While most of the new features have been extensively tested, we look forward to quickly resolving any new issues that might arise.
