"Tensor LLVM Extensions ..." proposal + Julia example in the design document

“Tensor LLVM Extensions Proposed For Targeting AI Accelerators, Emerging Hardware”

(via/context: "Tensor LLVM Extensions Proposed For Targeting AI Accelerators, Emerging Hardware" - Phoronix, 14 November 2021)


And there is a Julia example inside the design document! (the big Google Doc)

Lowering Example Using Julia

Julia example using Knet.jl:
Tensors in Knet are initialized using a reshape operation. The first argument to this function is a one-dimensional array with the data for the tensor (in this example, 1.0:6.0 means the array is filled with the values 1.0 through 6.0), and the second argument is the shape of the tensor produced by the reshape function.

w = reshape([1.0:6.0...], (1,1,3,2))
x = reshape([7.0:10.0...], (1,2,2))
y = conv4(w, x; padding=(1,2), stride=3)
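
For intuition on what padding and stride do to the result, here is a small sketch of the usual output-size rule for a padded, strided convolution, Yi = 1 + floor((Xi + 2*padding - Wi) / stride). The helper name conv_out is ours for illustration, not part of Knet:

# Usual convolution output-size rule (illustrative helper, not a Knet function)
conv_out(Xi, Wi; pad=0, stride=1) = 1 + div(Xi + 2 * pad - Wi, stride)

conv_out(10, 3; pad=1, stride=3)   # == 4: length-10 axis, 3-wide filter, pad 1, stride 3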

The conv4 convolution above will be lowered to LLVM IR to produce the following:

%w = call token llvm.tensor.typeinfo(<6 x i32> <i32 1, i32 2, i32 3, i32 4, i32 5, i32 6>, <4 x i32> <i32 1, i32 1, i32 3, i32 2>, <4 x i32> <i32 0, i32 1, i32 2, i32 3>, <4 x i32> <i32 0, i32 0, i32 0, i32 0>)
%x = call token llvm.tensor.typeinfo(<4 x i32> <i32 7, i32 8, i32 9, i32 10>, <3 x i32> <i32 1, i32 2, i32 2>, <3 x i32> <i32 0, i32 1, i32 2>, <3 x i32> <i32 0, i32 0, i32 0>)

%pad_w = call <28 x i32> llvm.tensor.pad(token %w, i32 0, <3 x i32> <i32 0, i32 1, i32 2>, <3 x i32> <i32 0, i32 1, i32 2>, <3 x i32> <i32 0, i32 0, i32 0>)
%padded_w = call token llvm.tensor.typeinfo(<28 x i32> %pad_w, <4 x i32> <i32 1, i32 1, i32 4, i32 4>, <4 x i32> <i32 0, i32 1, i32 2, i32 3>, <4 x i32> <i32 0, i32 0, i32 2, i32 4>)

<vector_ty> llvm.tensor.convolution(token %padded_w, token %x, <4 x i32> <1, 1, 1, 3>, <4 x i32> <i32 0, i32 0, i32 0, i32 0>, <4 x i32> <i32 0, i32 0, i32 0, i32 0>)
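To make the data flow through these intrinsics easier to follow, here is a rough Julia model of the sequence above. It is purely illustrative: the function names and the reading of the typeinfo operands as data / shape / layout / padding vectors are assumptions drawn from this example, not a restatement of the proposal.

# Each llvm.tensor.typeinfo call is modelled as a descriptor bundling a flat
# data vector with its shape, dimension order, and per-dimension padding.
tensor_typeinfo(data, shape, layout, pad) = (data=data, shape=shape, layout=layout, pad=pad)

w = tensor_typeinfo(collect(1.0:6.0), (1, 1, 3, 2), (0, 1, 2, 3), (0, 0, 0, 0))
x = tensor_typeinfo(collect(7.0:10.0), (1, 2, 2), (0, 1, 2), (0, 0, 0))

# llvm.tensor.pad is modelled as producing an enlarged flat buffer of pad values
# (copying the original elements in is omitted); a fresh typeinfo call then
# reattaches the new shape and padding metadata, mirroring %pad_w / %padded_w.
tensor_pad(t, value, newshape) = fill(value, prod(newshape))

pad_w = tensor_pad(w, 0.0, (1, 1, 4, 4))
padded_w = tensor_typeinfo(pad_w, (1, 1, 4, 4), (0, 1, 2, 3), (0, 0, 2, 4))

# llvm.tensor.convolution consumes the two descriptors plus the stride vector and
# the two further parameter vectors from the IR above; only the call shape is modelled here.
tensor_convolution(a, b, strides, rest...) = (a.shape, b.shape, strides, rest)

tensor_convolution(padded_w, x, (1, 1, 1, 3), (0, 0, 0, 0), (0, 0, 0, 0))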