Failure to vectorize 8 Int64 multiplies when 8 Float64 multiplies vectorize

I’ve come across this interesting observation in my package, CliffordNumbers.jl. For context, the multiplication done here is a geometric product (relevant code here), implemented as a grid multiply between blade coefficients (elements); in principle it can be expressed as a series of permutes, vectorized multiplies, and vectorized adds. The CliffordNumber{VGA(3),T} instances are backed by an NTuple{8,T}.

using CliffordNumbers, BenchmarkTools

x = CliffordNumber{VGA(3), Int64}(0, 4, 2, 0, 0, 0, 0, 0)
y = CliffordNumber{VGA(3), Int64}(0, 0, 0, 0, 0, 6, 9, 0)
# Convert the scalar entries to Float64
xx = scalar_convert(Float64, x)
yy = scalar_convert(Float64, y)

If I benchmark each of these multiplications, I get significantly different results:

julia> @benchmark $x * $y
BenchmarkTools.Trial: 10000 samples with 998 evaluations.
 Range (min … max):  16.999 ns … 400.718 ns  β”Š GC (min … max): 0.00% … 0.00%
 Time  (median):     17.777 ns               β”Š GC (median):    0.00%
 Time  (mean Β± Οƒ):   23.070 ns Β±  15.576 ns  β”Š GC (mean Β± Οƒ):  0.00% Β± 0.00%

  β–ˆβ–…β–ƒβ–ƒβ–β–β–β–β–‚β–‚β–β–β–‚  ▁                                             ▁
  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–†β–ˆβ–‡β–‡β–†β–‡β–‡β–‡β–‡β–‡β–‡β–‡β–ˆβ–‡β–ˆβ–ˆβ–ˆβ–‡β–‡β–‡β–‡β–‡β–ˆβ–†β–†β–…β–„β–…β–…β–…β–„β–ƒβ–„β–„β–„β–β–„β–„β–…β–…β–…β–…β–†β–†β–…β–… β–ˆ
  17 ns         Histogram: log(frequency) by time      96.5 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> @benchmark $xx * $yy
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  6.087 ns … 111.092 ns  β”Š GC (min … max): 0.00% … 0.00%
 Time  (median):     6.191 ns               β”Š GC (median):    0.00%
 Time  (mean Β± Οƒ):   8.054 ns Β±   4.577 ns  β”Š GC (mean Β± Οƒ):  0.00% Β± 0.00%

  β–ˆβ–ƒβ–ƒβ–ƒ β–„   β–ƒβ–ƒβ–ƒβ–‚   β–…  ▁          ▁                             ▁
  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–…β–‡β–ˆβ–ˆβ–ˆβ–ˆβ–‡β–„β–…β–ˆβ–ˆβ–„β–ˆβ–‡β–…β–„β–ƒβ–†β–†β–…β–…β–…β–†β–ˆβ–…β–„β–…β–…β–ƒβ–ƒβ–‚β–ƒβ–„β–†β–‚β–‚β–ƒβ–„β–„β–ƒβ–„β–…β–…β–„β–‚β–‚β–ƒβ–ƒβ–ƒβ–ƒβ–ƒβ–„ β–ˆ
  6.09 ns      Histogram: log(frequency) by time      25.5 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

I thought it was very weird that a) there was such a discrepancy, and b) the Float64 case was so much faster than the Int64 case. Looking at the machine code, I found that the Int64 case fails to vectorize, even when the -O3 flag is used:

        .text
        .file   "*"
        .globl  "julia_*_3117"                  # -- Begin function julia_*_3117
        .p2align        4, 0x90
        .type   "julia_*_3117",@function
"julia_*_3117":                         # @"julia_*_3117"
# %bb.0:                                # %top
        push    rbp
        mov     rbp, rsp
        push    r15
        push    r14
        push    r13
        push    r12
        push    rbx
        sub     rsp, 80
        mov     r10, rdx
        mov     rdx, rsi
        mov     qword ptr [rbp - 248], rdi      # 8-byte Spill
        mov     r11, qword ptr [r10 + 48]
        mov     r9, qword ptr [r10 + 40]
        mov     r8, qword ptr [r10 + 32]
        mov     rdi, qword ptr [r10 + 16]
        mov     r12, qword ptr [r10 + 8]
        mov     rsi, qword ptr [rsi + 32]
        mov     rcx, qword ptr [rdx + 24]
        mov     rax, r11
        mov     qword ptr [rbp - 136], rsi      # 8-byte Spill
        imul    rax, rsi
        mov     rbx, r12
        imul    rbx, rcx
        add     rbx, rax
        mov     qword ptr [rbp - 56], rbx       # 8-byte Spill
        mov     rax, rdi
        imul    rax, rsi
        mov     rsi, r9
        imul    rsi, rcx
        add     rsi, rax
        mov     qword ptr [rbp - 240], rsi      # 8-byte Spill
        mov     rax, rcx
        mov     rbx, rcx
        imul    rax, rdi
        mov     r15, rdi
        mov     qword ptr [rbp - 48], rdi       # 8-byte Spill
        mov     r14, qword ptr [rdx + 40]
        mov     rsi, r8
        mov     qword ptr [rbp - 64], r8        # 8-byte Spill
        imul    rsi, r14
        add     rsi, rax
        mov     qword ptr [rbp - 120], rsi      # 8-byte Spill
        mov     rcx, qword ptr [rdx + 16]
        mov     qword ptr [rbp - 96], rcx       # 8-byte Spill
        mov     rax, r12
        imul    rax, rcx
        mov     rdi, r11
        imul    rdi, r14
        add     rdi, rax
        mov     qword ptr [rbp - 88], rdi       # 8-byte Spill
        mov     rax, r11
        imul    rax, rbx
        mov     rdi, rbx
        mov     qword ptr [rbp - 192], rbx      # 8-byte Spill
        mov     r13, qword ptr [r10]
        mov     rbx, r13
        imul    rbx, r14
        add     rbx, rax
        mov     qword ptr [rbp - 224], rbx      # 8-byte Spill
        mov     rbx, r9
        mov     rax, r9
        imul    rax, rcx
        mov     rsi, r15
        imul    rsi, r14
        add     rsi, rax
        mov     qword ptr [rbp - 232], rsi      # 8-byte Spill
        mov     rax, qword ptr [rdx + 56]
        mov     qword ptr [rbp - 104], rax      # 8-byte Spill
        mov     r15, qword ptr [r10 + 56]
        mov     r9, r15
        imul    r9, rax
        mov     rcx, qword ptr [rdx + 48]
        mov     qword ptr [rbp - 152], rcx      # 8-byte Spill
        mov     rax, r11
        imul    rax, rcx
        add     rax, r9
        mov     r9, rbx
        mov     rcx, rbx
        imul    r9, r14
        add     r9, rax
        mov     r8, qword ptr [r10 + 24]
        mov     r10, rdi
        imul    r10, r8
        add     r10, r9
        mov     rsi, qword ptr [rbp - 64]       # 8-byte Reload
        mov     r9, rsi
        mov     rbx, qword ptr [rbp - 136]      # 8-byte Reload
        imul    r9, rbx
        sub     r9, r10
        mov     rax, qword ptr [rbp - 48]       # 8-byte Reload
        mov     rdi, rax
        imul    rdi, qword ptr [rbp - 96]       # 8-byte Folded Reload
        add     r9, rdi
        mov     r10, qword ptr [rdx + 8]
        mov     rdi, r12
        imul    rdi, r10
        add     r9, rdi
        mov     rdx, qword ptr [rdx]
        mov     qword ptr [rbp - 128], rdx      # 8-byte Spill
        mov     rdi, r13
        imul    rdi, rdx
        add     r9, rdi
        mov     rdi, r13
        imul    rdi, r10
        mov     rdx, r8
        imul    rdx, r10
        mov     qword ptr [rbp - 72], rdx       # 8-byte Spill
        mov     rdx, rax
        imul    rdx, r10
        mov     qword ptr [rbp - 160], rdx      # 8-byte Spill
        mov     rax, rcx
        imul    rax, r10
        mov     qword ptr [rbp - 80], rax       # 8-byte Spill
        mov     rax, rsi
        imul    rax, r10
        mov     qword ptr [rbp - 200], rax      # 8-byte Spill
        mov     rax, r15
        imul    rax, r10
        mov     qword ptr [rbp - 216], rax      # 8-byte Spill
        imul    r10, r11
        mov     qword ptr [rbp - 208], r10      # 8-byte Spill
        mov     qword ptr [rbp - 112], r11      # 8-byte Spill
        mov     qword ptr [rbp - 184], r11      # 8-byte Spill
        imul    r11, qword ptr [rbp - 104]      # 8-byte Folded Reload
        mov     rdx, r15
        mov     r10, qword ptr [rbp - 152]      # 8-byte Reload
        imul    rdx, r10
        add     rdx, r11
        mov     r11, rcx
        imul    r11, rbx
        add     r11, rdx
        mov     rdx, r8
        mov     rsi, qword ptr [rbp - 96]       # 8-byte Reload
        imul    rdx, rsi
        add     rdx, r11
        mov     r11, qword ptr [rbp - 120]      # 8-byte Reload
        sub     r11, rdx
        add     r11, rdi
        mov     rdx, r12
        mov     rax, qword ptr [rbp - 128]      # 8-byte Reload
        imul    rdx, rax
        add     r11, rdx
        mov     qword ptr [rbp - 120], r11      # 8-byte Spill
        mov     rdx, qword ptr [rbp - 64]       # 8-byte Reload
        imul    rdx, r10
        mov     rdi, rcx
        mov     qword ptr [rbp - 176], rcx      # 8-byte Spill
        mov     r11, qword ptr [rbp - 104]      # 8-byte Reload
        imul    rdi, r11
        add     rdi, rdx
        mov     rdx, r15
        imul    rdx, r14
        add     rdi, rdx
        sub     rdi, qword ptr [rbp - 56]       # 8-byte Folded Reload
        mov     rdx, r13
        imul    rdx, rsi
        add     rdi, rdx
        add     rdi, qword ptr [rbp - 72]       # 8-byte Folded Reload
        mov     rsi, qword ptr [rbp - 48]       # 8-byte Reload
        mov     rdx, rsi
        imul    rdx, rax
        add     rdi, rdx
        mov     qword ptr [rbp - 72], rdi       # 8-byte Spill
        imul    rcx, r10
        mov     rax, qword ptr [rbp - 64]       # 8-byte Reload
        mov     qword ptr [rbp - 144], rax      # 8-byte Spill
        mov     qword ptr [rbp - 168], rax      # 8-byte Spill
        mov     qword ptr [rbp - 56], rax       # 8-byte Spill
        mov     rdx, r11
        imul    rax, r11
        add     rax, rcx
        mov     rdi, r15
        imul    rdi, rbx
        add     rax, rdi
        mov     rdi, r13
        mov     r11, qword ptr [rbp - 192]      # 8-byte Reload
        imul    rdi, r11
        add     rax, rdi
        sub     rax, qword ptr [rbp - 88]       # 8-byte Folded Reload
        add     rax, qword ptr [rbp - 160]      # 8-byte Folded Reload
        mov     rdi, r8
        mov     rbx, qword ptr [rbp - 128]      # 8-byte Reload
        imul    rdi, rbx
        add     rax, rdi
        mov     qword ptr [rbp - 64], rax       # 8-byte Spill
        mov     rdi, r8
        imul    rdi, rdx
        imul    rsi, r10
        add     rsi, rdi
        mov     rax, r8
        imul    rax, r14
        mov     qword ptr [rbp - 88], rax       # 8-byte Spill
        imul    r14, r12
        add     r14, rsi
        mov     rdx, qword ptr [rbp - 56]       # 8-byte Reload
        imul    rdx, r11
        mov     qword ptr [rbp - 56], rdx       # 8-byte Spill
        imul    r11, r15
        add     r11, r14
        mov     rcx, r13
        mov     rax, qword ptr [rbp - 136]      # 8-byte Reload
        imul    rcx, rax
        sub     rcx, r11
        mov     rdx, qword ptr [rbp - 184]      # 8-byte Reload
        mov     r11, qword ptr [rbp - 96]       # 8-byte Reload
        imul    rdx, r11
        add     rcx, rdx
        add     rcx, qword ptr [rbp - 80]       # 8-byte Folded Reload
        mov     rdx, qword ptr [rbp - 144]      # 8-byte Reload
        imul    rdx, rbx
        add     rcx, rdx
        mov     rdx, r13
        imul    rdx, r10
        mov     rsi, r12
        imul    rsi, r10
        mov     qword ptr [rbp - 80], rsi       # 8-byte Spill
        mov     rsi, r8
        imul    r8, r10
        mov     rdi, qword ptr [rbp - 48]       # 8-byte Reload
        mov     r14, qword ptr [rbp - 104]      # 8-byte Reload
        imul    rdi, r14
        add     r8, rdi
        imul    rsi, rax
        mov     qword ptr [rbp - 48], rsi       # 8-byte Spill
        imul    rax, r12
        add     rax, r8
        mov     rdi, qword ptr [rbp - 176]      # 8-byte Reload
        imul    rdi, rbx
        mov     rsi, qword ptr [rbp - 112]      # 8-byte Reload
        imul    rsi, rbx
        mov     qword ptr [rbp - 112], rsi      # 8-byte Spill
        imul    rbx, r15
        mov     r10, qword ptr [rbp - 168]      # 8-byte Reload
        imul    r10, r11
        imul    r15, r11
        add     r15, rax
        mov     rsi, qword ptr [rbp - 224]      # 8-byte Reload
        sub     rsi, r15
        add     rsi, qword ptr [rbp - 200]      # 8-byte Folded Reload
        add     rsi, rdi
        mov     rax, r14
        imul    r12, r14
        add     r12, rdx
        add     r12, qword ptr [rbp - 88]       # 8-byte Folded Reload
        sub     r12, qword ptr [rbp - 240]      # 8-byte Folded Reload
        add     r12, r10
        add     r12, qword ptr [rbp - 216]      # 8-byte Folded Reload
        add     r12, qword ptr [rbp - 112]      # 8-byte Folded Reload
        imul    r13, rax
        add     r13, qword ptr [rbp - 80]       # 8-byte Folded Reload
        add     r13, qword ptr [rbp - 48]       # 8-byte Folded Reload
        add     r13, qword ptr [rbp - 56]       # 8-byte Folded Reload
        sub     r13, qword ptr [rbp - 232]      # 8-byte Folded Reload
        add     r13, qword ptr [rbp - 208]      # 8-byte Folded Reload
        add     r13, rbx
        mov     rax, qword ptr [rbp - 248]      # 8-byte Reload
        mov     qword ptr [rax], r9
        mov     rdx, qword ptr [rbp - 120]      # 8-byte Reload
        mov     qword ptr [rax + 8], rdx
        mov     rdx, qword ptr [rbp - 72]       # 8-byte Reload
        mov     qword ptr [rax + 16], rdx
        mov     rdx, qword ptr [rbp - 64]       # 8-byte Reload
        mov     qword ptr [rax + 24], rdx
        mov     qword ptr [rax + 32], rcx
        mov     qword ptr [rax + 40], rsi
        mov     qword ptr [rax + 48], r12
        mov     qword ptr [rax + 56], r13
        add     rsp, 80
        pop     rbx
        pop     r12
        pop     r13
        pop     r14
        pop     r15
        pop     rbp
        ret
.Lfunc_end0:
        .size   "julia_*_3117", .Lfunc_end0-"julia_*_3117"
                                        # -- End function
        .section        ".note.GNU-stack","",@progbits

The Float64 case vectorizes as expected:

        .text
        .file   "*"
        .globl  "julia_*_3119"                  # -- Begin function julia_*_3119
        .p2align        4, 0x90
        .type   "julia_*_3119",@function
"julia_*_3119":                         # @"julia_*_3119"
# %bb.0:                                # %top
        push    rbp
        mov     rbp, rsp
        vmovupd ymm6, ymmword ptr [rdx]
        mov     rax, rdi
        vmovupd ymm0, ymmword ptr [rdx + 32]
        vbroadcastsd    ymm2, qword ptr [rsi]
        vxorpd  xmm19, xmm19, xmm19
        vpermilpd       ymm1, ymm6, 5           # ymm1 = ymm6[1,0,3,2]
        vpermpd ymm4, ymm6, 78                  # ymm4 = ymm6[2,3,0,1]
        vpermpd ymm5, ymm6, 27                  # ymm5 = ymm6[3,2,1,0]
        vbroadcastsd    ymm7, qword ptr [rsi + 32]
        vmulpd  ymm8, ymm7, ymm6
        vfmadd213pd     ymm6, ymm2, ymm19       # ymm6 = (ymm2 * ymm6) + ymm19
        vbroadcastsd    ymm9, qword ptr [rsi + 8]
        vmulpd  ymm10, ymm9, ymm1
        vbroadcastsd    ymm11, qword ptr [rsi + 16]
        vaddpd  ymm6, ymm10, ymm6
        vmulpd  ymm10, ymm11, ymm4
        vaddpd  ymm12, ymm10, ymm6
        vsubpd  ymm6, ymm6, ymm10
        vbroadcastsd    ymm10, qword ptr [rsi + 24]
        vmulpd  ymm13, ymm10, ymm5
        vsubpd  ymm12, ymm12, ymm13
        vaddpd  ymm6, ymm13, ymm6
        vblendpd        ymm6, ymm12, ymm6, 10           # ymm6 = ymm12[0],ymm6[1],ymm12[2],ymm6[3]
        vmulpd  ymm7, ymm7, ymm0
        vaddpd  ymm12, ymm6, ymm7
        vsubpd  ymm6, ymm6, ymm7
        vbroadcastsd    ymm7, qword ptr [rsi + 40]
        vpermilpd       ymm13, ymm0, 5          # ymm13 = ymm0[1,0,3,2]
        vmulpd  ymm14, ymm13, ymm7
        vsubpd  ymm12, ymm12, ymm14
        vaddpd  ymm6, ymm14, ymm6
        vblendpd        ymm6, ymm12, ymm6, 6            # ymm6 = ymm12[0],ymm6[1,2],ymm12[3]
        vbroadcastsd    ymm12, qword ptr [rsi + 48]
        vpermpd ymm14, ymm0, 78                 # ymm14 = ymm0[2,3,0,1]
        vmulpd  ymm15, ymm12, ymm14
        vsubpd  ymm16, ymm6, ymm15
        vaddpd  ymm6, ymm15, ymm6
        vbroadcastsd    ymm15, qword ptr [rsi + 56]
        vpermpd ymm17, ymm0, 27                 # ymm17 = ymm0[3,2,1,0]
        vmulpd  ymm18, ymm15, ymm17
        vsubpd  ymm3, ymm16, ymm18
        vaddpd  ymm6, ymm6, ymm18
        vblendpd        ymm3, ymm3, ymm6, 12            # ymm3 = ymm3[0,1],ymm6[2,3]
        vmovupd ymmword ptr [rdi], ymm3
        vfmadd213pd     ymm0, ymm2, ymm19       # ymm0 = (ymm2 * ymm0) + ymm19
        vmulpd  ymm2, ymm9, ymm13
        vaddpd  ymm0, ymm2, ymm0
        vmulpd  ymm2, ymm11, ymm14
        vaddpd  ymm3, ymm0, ymm2
        vsubpd  ymm0, ymm0, ymm2
        vmulpd  ymm2, ymm10, ymm17
        vsubpd  ymm3, ymm3, ymm2
        vaddpd  ymm0, ymm0, ymm2
        vblendpd        ymm0, ymm3, ymm0, 10            # ymm0 = ymm3[0],ymm0[1],ymm3[2],ymm0[3]
        vaddpd  ymm2, ymm8, ymm0
        vsubpd  ymm0, ymm0, ymm8
        vmulpd  ymm1, ymm7, ymm1
        vsubpd  ymm2, ymm2, ymm1
        vaddpd  ymm0, ymm0, ymm1
        vblendpd        ymm0, ymm2, ymm0, 6             # ymm0 = ymm2[0],ymm0[1,2],ymm2[3]
        vmulpd  ymm1, ymm12, ymm4
        vsubpd  ymm2, ymm0, ymm1
        vaddpd  ymm0, ymm0, ymm1
        vmulpd  ymm1, ymm15, ymm5
        vsubpd  ymm2, ymm2, ymm1
        vaddpd  ymm0, ymm0, ymm1
        vblendpd        ymm0, ymm2, ymm0, 12            # ymm0 = ymm2[0,1],ymm0[2,3]
        vmovupd ymmword ptr [rdi + 32], ymm0
        pop     rbp
        vzeroupper
        ret
.Lfunc_end0:
        .size   "julia_*_3119", .Lfunc_end0-"julia_*_3119"
                                        # -- End function
        .section        ".note.GNU-stack","",@progbits

So my question is: considering the only difference between x and xx (and y and yy) is the element type, why is the assembly output so starkly different?

I suspect this may have something to do with the fact that both data types are 512 bits wide (and don’t fit into a 256-bit AVX register), but in principle that shouldn’t make a difference. I suspect this because if I use smaller data types, like CliffordNumber{VGA(2),T} or EvenCliffordNumber{VGA(3),T} (which have 4 elements each), the T === Int64 case always vectorizes just as the T === Float64 case does.
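For concreteness, both operand types occupy 8 coefficients × 8 bytes = 64 bytes, i.e. 512 bits, twice the width of a 256-bit YMM register (assuming the structs hold nothing but the backing tuple; x and xx are the values defined above):

julia> sizeof(x), sizeof(xx)
(64, 64)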

This performance difference won’t cause me any serious issues at the moment; I’m just interested in why this is happening.

Int64 SIMD multiplies tend to be on the slower side (except for Zen4), so perhaps the cost model expects the scalar version to be faster.

AVX512 provides an Int64 multiply instruction (vpmullq), but on my non-Zen4 AVX512 machine, LLVM still doesn’t vectorize the integer version.
Without AVX512, Int64 multiplies are even more expensive, because each one has to be built out of Int32 multiplies whose results are then combined.
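For reference, here is a rough sketch (a hypothetical helper, not from any package) of how the low 64 bits of a product can be assembled from 32-bit halves; the same split shows up later in this thread as the vpmuludq / vpsllq / vpaddq sequences in the Base.VecElement assembly:

# The low 64 bits of a signed and an unsigned product are identical, so three
# 32x32-bit multiplies plus shifts and adds reproduce one 64-bit multiply.
function mul64_via_32(a::UInt64, b::UInt64)
    alo = a & 0xffffffff; ahi = a >> 32
    blo = b & 0xffffffff; bhi = b >> 32
    return alo * blo + ((alo * bhi + ahi * blo) << 32)   # wraps modulo 2^64, like the hardware
end

mul64_via_32(0x1234567890abcdef, 0x0fedcba987654321) == 0x1234567890abcdef * 0x0fedcba987654321  # true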

That makes sense. I didn’t realize AVX2 doesn’t support 64-bit integer multiplication!

I do wish I had a machine with AVX-512 or AVX10 to work with, since I could actually make heavy use of the extra SIMD width.

Why would a SIMD instruction be slower? Is there some overhead, or could it in principle be just as fast per value, but simply isn’t so far?

I am also curious.

Could @brainandforce by any chance modify the source to use SIMD.jl or VectorizationBase.jl to manually SIMD it?

SIMD code can be slower if the SIMD instruction is much slower (e.g., gather on Zen4), or if you need a ton of shuffle instructions to rearrange operations.

The Float64 code does have a lot of shufflevector, so maybe that is it.
The imul instruction itself is fairly slow, so in terms of raw multiplication speed, the SIMD version wins. If it loses, it would be because the scalar version doesn’t need any equivalent of shufflevector.

I tried doing this manually by changing the backing NTuple element type to Base.VecElement{T} (branch with the modified code is here), and here’s the assembly output:

julia> @code_native debuginfo = :none x*y
        .text
        .file   "*"
        .globl  "julia_*_2612"                  # -- Begin function julia_*_2612
        .p2align        4, 0x90
        .type   "julia_*_2612",@function
"julia_*_2612":                         # @"julia_*_2612"
# %bb.0:                                # %top
        push    rbp
        mov     rbp, rsp
        push    r15
        push    r14
        push    r13
        push    r12
        push    rbx
        sub     rsp, 160
        mov     r13, qword ptr [rdx + 48]
        mov     r10, qword ptr [rdx + 8]
        mov     rax, rsi
        mov     r15, qword ptr [rsi + 40]
        mov     rsi, qword ptr [rsi + 16]
        mov     r9, rdx
        mov     r8, qword ptr [rdx]
        mov     qword ptr [rbp - 328], rdi      # 8-byte Spill
        mov     rcx, rax
        mov     qword ptr [rbp - 56], r9        # 8-byte Spill
        mov     rdi, qword ptr [rcx + 32]
        mov     r11, qword ptr [rcx + 24]
        mov     r12, rcx
        mov     rax, r13
        mov     rdx, r10
        mov     rcx, r10
        mov     qword ptr [rbp - 104], r13      # 8-byte Spill
        mov     qword ptr [rbp - 48], r8        # 8-byte Spill
        mov     qword ptr [rbp - 112], r12      # 8-byte Spill
        mov     qword ptr [rbp - 64], r15       # 8-byte Spill
        imul    rax, r15
        imul    rdx, rsi
        imul    rcx, r11
        mov     rbx, rdi
        mov     qword ptr [rbp - 72], rdi       # 8-byte Spill
        mov     qword ptr [rbp - 120], r11      # 8-byte Spill
        add     rdx, rax
        mov     rax, r13
        imul    rax, rdi
        mov     qword ptr [rbp - 304], rdx      # 8-byte Spill
        mov     rdx, r8
        mov     rdi, qword ptr [r9 + 16]
        imul    rdx, r15
        add     rcx, rax
        mov     rax, r13
        imul    rax, r11
        mov     qword ptr [rbp - 264], rcx      # 8-byte Spill
        add     rdx, rax
        mov     rax, rdi
        mov     qword ptr [rbp - 312], rdx      # 8-byte Spill
        mov     rdx, qword ptr [r9 + 40]
        imul    rax, r15
        mov     r14, rdx
        imul    r14, rsi
        add     r14, rax
        mov     rax, rdi
        imul    rax, rbx
        mov     rbx, rdx
        mov     qword ptr [rbp - 320], r14      # 8-byte Spill
        imul    rbx, r11
        add     rbx, rax
        mov     rax, rdi
        mov     rdi, rdi
        mov     qword ptr [rbp - 280], rbx      # 8-byte Spill
        mov     rbx, qword ptr [r12]
        mov     rcx, rax
        mov     qword ptr [rbp - 80], rax       # 8-byte Spill
        imul    rdi, rbx
        mov     qword ptr [rbp - 200], rdi      # 8-byte Spill
        mov     rdi, r13
        mov     r13, r13
        imul    r13, rbx
        mov     r14, rdi
        mov     qword ptr [rbp - 288], r13      # 8-byte Spill
        mov     r13, qword ptr [r12 + 8]
        imul    r14, r13
        imul    rcx, r13
        mov     qword ptr [rbp - 296], r14      # 8-byte Spill
        mov     r14, rax
        mov     qword ptr [rbp - 208], rcx      # 8-byte Spill
        mov     rcx, rax
        imul    r14, rsi
        imul    r11, rcx
        mov     rcx, rdx
        imul    rcx, rbx
        mov     qword ptr [rbp - 176], r14      # 8-byte Spill
        mov     r14, rdi
        mov     rdi, qword ptr [r9 + 56]
        imul    r14, rsi
        mov     qword ptr [rbp - 224], rcx      # 8-byte Spill
        mov     rcx, rdx
        imul    rcx, r13
        vmovq   xmm1, rdi
        mov     qword ptr [rbp - 96], rdi       # 8-byte Spill
        vpbroadcastq    ymm1, xmm1
        mov     qword ptr [rbp - 248], r14      # 8-byte Spill
        mov     r14, qword ptr [r9 + 24]
        mov     qword ptr [rbp - 216], rcx      # 8-byte Spill
        mov     rcx, qword ptr [rbp - 104]      # 8-byte Reload
        mov     rax, r14
        imul    rax, rbx
        mov     qword ptr [rbp - 192], rax      # 8-byte Spill
        mov     rax, r14
        imul    rax, r13
        mov     qword ptr [rbp - 184], rax      # 8-byte Spill
        mov     rax, r14
        imul    rax, rsi
        mov     qword ptr [rbp - 168], rax      # 8-byte Spill
        mov     rax, rdi
        imul    rax, rbx
        mov     qword ptr [rbp - 272], rax      # 8-byte Spill
        mov     rax, rdi
        imul    rax, r13
        mov     qword ptr [rbp - 256], rax      # 8-byte Spill
        mov     rax, rdi
        imul    rax, rsi
        mov     qword ptr [rbp - 240], rax      # 8-byte Spill
        mov     rax, r10
        imul    rax, rbx
        mov     qword ptr [rbp - 152], rax      # 8-byte Spill
        mov     rax, r10
        imul    rax, r13
        mov     qword ptr [rbp - 136], rax      # 8-byte Spill
        mov     rax, r8
        imul    rax, rbx
        mov     qword ptr [rbp - 128], rax      # 8-byte Spill
        mov     rax, r8
        imul    r8, rsi
        imul    rax, r13
        mov     qword ptr [rbp - 160], r8       # 8-byte Spill
        mov     r8, qword ptr [r9 + 32]
        mov     qword ptr [rbp - 144], rax      # 8-byte Spill
        mov     rax, rsi
        mov     rsi, rdi
        imul    rax, r8
        imul    rbx, r8
        imul    r13, r8
        mov     r9, r8
        imul    r8, r15
        mov     qword ptr [rbp - 232], rax      # 8-byte Spill
        mov     rax, qword ptr [r12 + 56]
        add     r8, r11
        mov     r11, qword ptr [r12 + 48]
        mov     r12, rcx
        imul    rsi, rax
        imul    r12, r11
        imul    rcx, rax
        mov     qword ptr [rbp - 88], rax       # 8-byte Spill
        add     r12, rsi
        mov     rsi, rdx
        imul    rsi, r15
        mov     r15, qword ptr [rbp - 120]      # 8-byte Reload
        add     r12, rsi
        mov     rsi, rdi
        imul    rdi, r15
        imul    r15, r14
        imul    rsi, r11
        add     r12, r15
        mov     r15, qword ptr [rbp - 72]       # 8-byte Reload
        add     rsi, rcx
        mov     rcx, qword ptr [rbp - 56]       # 8-byte Reload
        imul    r9, r15
        imul    rdx, r15
        mov     r15, qword ptr [rbp - 112]      # 8-byte Reload
        vmovdqa xmm0, xmmword ptr [rcx + 32]
        vpblendd        ymm1, ymm1, ymmword ptr [rcx + 32], 207 # ymm1 = mem[0,1,2,3],ymm1[4,5],mem[6,7]
        vmovdqa xmm2, xmmword ptr [r15 + 48]
        vpsrlq  xmm3, xmm0, 32
        add     rdx, rsi
        mov     rsi, qword ptr [rbp - 96]       # 8-byte Reload
        imul    rsi, qword ptr [rbp - 64]       # 8-byte Folded Reload
        sub     r9, r12
        add     rdx, qword ptr [rbp - 168]      # 8-byte Folded Reload
        add     r9, qword ptr [rbp - 176]       # 8-byte Folded Reload
        vpsrlq  xmm4, xmm2, 32
        vpmuludq        xmm3, xmm3, xmm2
        vpmuludq        xmm5, xmm0, xmm2
        vinserti128     ymm2, ymm2, xmmword ptr [r15 + 24], 1
        add     r9, qword ptr [rbp - 136]       # 8-byte Folded Reload
        vpmuludq        xmm4, xmm0, xmm4
        vpbroadcastq    ymm0, xmm0
        add     r9, qword ptr [rbp - 128]       # 8-byte Folded Reload
        sub     r8, rdx
        mov     rdx, qword ptr [rbp - 72]       # 8-byte Reload
        add     r8, qword ptr [rbp - 144]       # 8-byte Folded Reload
        add     r8, qword ptr [rbp - 152]       # 8-byte Folded Reload
        vpaddq  xmm3, xmm4, xmm3
        vpsllq  xmm3, xmm3, 32
        vpaddq  xmm3, xmm5, xmm3
        vpshufd xmm4, xmm3, 238                 # xmm4 = xmm3[2,3,2,3]
        vpaddq  xmm3, xmm3, xmm4
        vmovq   r12, xmm3
        vpbroadcastq    ymm3, qword ptr [rcx]
        mov     rcx, r14
        add     r12, rsi
        mov     rsi, qword ptr [rbp - 80]       # 8-byte Reload
        imul    rcx, rax
        mov     rax, qword ptr [rbp - 48]       # 8-byte Reload
        sub     r12, qword ptr [rbp - 264]      # 8-byte Folded Reload
        add     r12, qword ptr [rbp - 160]      # 8-byte Folded Reload
        add     r12, qword ptr [rbp - 184]      # 8-byte Folded Reload
        vpblendd        ymm3, ymm1, ymm3, 192           # ymm3 = ymm1[0,1,2,3,4,5],ymm3[6,7]
        vpshufd ymm1, ymm2, 78                  # ymm1 = ymm2[2,3,0,1,6,7,4,5]
        imul    rsi, r11
        add     r12, qword ptr [rbp - 200]      # 8-byte Folded Reload
        vpsrlq  ymm4, ymm3, 32
        vpsrlq  ymm2, ymm1, 32
        vpmuludq        ymm5, ymm3, ymm2
        vpmuludq        ymm4, ymm4, ymm1
        vpmuludq        ymm3, ymm3, ymm1
        add     rsi, rcx
        mov     rcx, rax
        imul    rax, r11
        imul    r11, r14
        imul    rcx, rdx
        imul    rdx, r10
        vpaddq  ymm4, ymm5, ymm4
        vmovq   xmm5, r9
        vpsllq  ymm4, ymm4, 32
        vpaddq  ymm3, ymm3, ymm4
        vextracti128    xmm4, ymm3, 1
        mov     qword ptr [rbp - 48], rax       # 8-byte Spill
        mov     rax, qword ptr [rbp - 64]       # 8-byte Reload
        vpaddq  xmm3, xmm3, xmm4
        vpshufd xmm4, xmm3, 238                 # xmm4 = xmm3[2,3,2,3]
        vpaddq  xmm3, xmm3, xmm4
        vmovq   xmm4, r8
        vmovq   r15, xmm3
        vmovq   xmm3, r14
        imul    r14, rax
        imul    rax, r10
        sub     r15, qword ptr [rbp - 304]      # 8-byte Folded Reload
        vpbroadcastq    ymm3, xmm3
        add     r15, qword ptr [rbp - 208]      # 8-byte Folded Reload
        add     r15, qword ptr [rbp - 192]      # 8-byte Folded Reload
        add     rax, rsi
        mov     rsi, qword ptr [rbp - 88]       # 8-byte Reload
        add     rdi, rax
        mov     rax, qword ptr [rbp - 312]      # 8-byte Reload
        sub     rcx, rdi
        mov     rdi, qword ptr [rbp - 80]       # 8-byte Reload
        add     rcx, qword ptr [rbp - 248]      # 8-byte Folded Reload
        imul    r10, rsi
        add     rcx, qword ptr [rbp - 216]      # 8-byte Folded Reload
        imul    rdi, rsi
        add     r10, qword ptr [rbp - 48]       # 8-byte Folded Reload
        add     rcx, rbx
        vmovq   xmm7, rcx
        add     r10, r14
        sub     r10, qword ptr [rbp - 280]      # 8-byte Folded Reload
        add     r11, rdi
        mov     rdi, qword ptr [rbp - 56]       # 8-byte Reload
        add     r10, qword ptr [rbp - 232]      # 8-byte Folded Reload
        add     rdx, r11
        add     rdx, qword ptr [rbp - 240]      # 8-byte Folded Reload
        add     r10, qword ptr [rbp - 256]      # 8-byte Folded Reload
        add     r10, qword ptr [rbp - 288]      # 8-byte Folded Reload
        vpblendd        ymm3, ymm3, ymmword ptr [rdi], 207 # ymm3 = mem[0,1,2,3],ymm3[4,5],mem[6,7]
        sub     rax, rdx
        add     rax, r13
        add     rax, qword ptr [rbp - 224]      # 8-byte Folded Reload
        vpblendd        ymm0, ymm3, ymm0, 192           # ymm0 = ymm3[0,1,2,3,4,5],ymm0[6,7]
        vpsrlq  ymm3, ymm0, 32
        vpmuludq        ymm2, ymm0, ymm2
        vpmuludq        ymm0, ymm0, ymm1
        vpmuludq        ymm3, ymm3, ymm1
        vmovq   xmm6, rax
        mov     rax, qword ptr [rbp - 328]      # 8-byte Reload
        vpunpcklqdq     xmm6, xmm7, xmm6        # xmm6 = xmm7[0],xmm6[0]
        vmovdqa xmmword ptr [rax + 32], xmm6
        vpaddq  ymm1, ymm2, ymm3
        vmovq   xmm3, r10
        vpsllq  ymm1, ymm1, 32
        vpaddq  ymm0, ymm0, ymm1
        vextracti128    xmm1, ymm0, 1
        vpaddq  xmm0, xmm0, xmm1
        vpshufd xmm1, xmm0, 238                 # xmm1 = xmm0[2,3,2,3]
        vpaddq  xmm0, xmm0, xmm1
        vmovq   xmm1, r12
        vmovq   rsi, xmm0
        sub     rsi, qword ptr [rbp - 320]      # 8-byte Folded Reload
        vmovq   xmm0, r15
        add     rsi, qword ptr [rbp - 296]      # 8-byte Folded Reload
        vpunpcklqdq     xmm0, xmm1, xmm0        # xmm0 = xmm1[0],xmm0[0]
        add     rsi, qword ptr [rbp - 272]      # 8-byte Folded Reload
        vmovdqa xmmword ptr [rax + 16], xmm0
        vmovq   xmm2, rsi
        vpunpcklqdq     xmm1, xmm3, xmm2        # xmm1 = xmm3[0],xmm2[0]
        vpunpcklqdq     xmm3, xmm5, xmm4        # xmm3 = xmm5[0],xmm4[0]
        vmovdqa xmmword ptr [rax], xmm3
        vmovdqa xmmword ptr [rax + 48], xmm1
        add     rsp, 160
        pop     rbx
        pop     r12
        pop     r13
        pop     r14
        pop     r15
        pop     rbp
        vzeroupper
        ret
.Lfunc_end0:
        .size   "julia_*_2612", .Lfunc_end0-"julia_*_2612"
                                        # -- End function
        .section        ".note.GNU-stack","",@progbits

I’m no expert at assembly, but it looks like this only partially vectorized - perhaps it tries to vectorize one half of the multiply?

In terms of performance, the difference is not that big, though it does favor the vectorized code (top is the experimental version, bottom is the current version).

julia> @benchmark $x*$y
BenchmarkTools.Trial: 10000 samples with 999 evaluations.
 Range (min … max):  11.566 ns … 59.805 ns  β”Š GC (min … max): 0.00% … 0.00%
 Time  (median):     12.061 ns              β”Š GC (median):    0.00%
 Time  (mean Β± Οƒ):   12.133 ns Β±  1.005 ns  β”Š GC (mean Β± Οƒ):  0.00% Β± 0.00%

      β–‚β–ƒβ–ˆ                                                      
  β–‚β–‚β–‚β–…β–ˆβ–ˆβ–ˆβ–†β–ƒβ–‚β–‚β–β–‚β–β–‚β–‚β–β–‚β–β–‚β–‚β–β–β–β–β–β–β–β–β–β–β–β–β–‚β–β–‚β–‚β–β–β–β–‚β–β–β–‚β–‚β–‚β–‚β–‚β–β–‚β–‚β–β–β–‚β–‚β–‚β–‚β–‚β–‚ β–‚
  11.6 ns         Histogram: frequency by time        16.4 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.
julia> @benchmark $x*$y
BenchmarkTools.Trial: 10000 samples with 999 evaluations.
 Range (min … max):  12.597 ns … 108.008 ns  β”Š GC (min … max): 0.00% … 0.00%
 Time  (median):     12.637 ns               β”Š GC (median):    0.00%
 Time  (mean Β± Οƒ):   12.811 ns Β±   2.257 ns  β”Š GC (mean Β± Οƒ):  0.00% Β± 0.00%

  β–ˆβ–ƒ                                                           ▁
  β–ˆβ–ˆβ–ˆβ–…β–†β–β–β–β–ƒβ–β–β–β–β–β–β–β–…β–ƒβ–ƒβ–β–ƒβ–β–‡β–†β–ƒβ–β–„β–β–β–β–„β–†β–„β–β–„β–β–ƒβ–β–β–β–ƒβ–β–β–ƒβ–β–β–β–„β–„β–ƒβ–…β–…β–‡β–„β–ƒβ–ƒβ–„β–…β–†β–† β–ˆ
  12.6 ns       Histogram: log(frequency) by time      17.1 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

I’ll also try the packages you linked and report back.
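As a rough sketch of what that manual version could look like with SIMD.jl’s Vec and shufflevector (not what the package does; product_step2 is an illustrative name, and the index row is the one used for the second x coefficient, whose signs are all +1):

using SIMD

function product_step2(x::NTuple{8,Int64}, y::NTuple{8,Int64}, acc::Vec{8,Int64})
    yv = Vec{8,Int64}(y)
    # shufflevector takes 0-based lane indices; this is (2,1,4,3,6,5,8,7) in 1-based terms
    yp = shufflevector(yv, Val((1, 0, 3, 2, 5, 4, 7, 6)))
    return acc + x[2] * yp
end

product_step2((0, 4, 2, 0, 0, 0, 0, 0), (0, 0, 0, 0, 0, 6, 9, 0), Vec{8,Int64}(0))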

That should yield better results. I’ll try actually debugging SLP (LLVM’s SLP vectorizer).

If you provided the function as clifford_multiply(x::NTuple{8,T}, y::NTuple{8,T}), so that only it is needed, it would be easier for people to check things out.

This is a bit complicated to explain succinctly: for reference, here is the implementation in CliffordNumbers.jl:

@generated function mul(
    x::AbstractCliffordNumber{Q,T},
    y::AbstractCliffordNumber{Q,T},
    F::GradeFilter = GradeFilter{:*}()
) where {Q,T<:BaseNumber}
    C = product_return_type(x, y, F())
    ex = :($(zero_tuple(C)))
    for a in BitIndices(x)
        inds = bitindex_shuffle(a, BitIndices(C))
        # Filter out multiplications which necessarily go to zero
        x_mask = mul_mask(F(), a, inds)
        # Filter out indexing operations that automatically go to zero
        # This must be done manually since we want to work directly with tuples
        y_mask = map(in, grade.(inds), ntuple(Returns(nonzero_grades(y)), Val(nblades(C))))
        # Don't append operations that won't actually do anything
        if any(x_mask) && any(y_mask)
            # Resolve BitIndex to an integer here to avoid having to call Base.to_index at runtime
            # This function cannot be inlined or unrolled for KVector arguments
            # But all values are known at compile time, so interpolate them into expressions
            ia = to_index(x, a)
            tuple_inds = to_index.(y, inds)
            signs = mul_signs(F(), a, inds)
            # Construct the tuples that contribute to the product
            x_tuple_ex = :(Tuple(x)[$ia] .* $x_mask)
            y_tuple_ex = :(getindex.(tuple(Tuple(y)), $tuple_inds) .* $signs .* $y_mask)
            # Combine the tuples using muladd operations
            ex = :(map(muladd, $x_tuple_ex, $y_tuple_ex, $ex))
        end
    end
    return :(($C)($ex))
end

Below is exactly what this expression lowers to:

:((CliffordNumber{VGA(3)})(map(muladd, (Tuple(x))[8] .* (true, true, true, true, true, true, true, true), getindex.(tuple(Tuple(y)), (8, 7, 6, 5, 4, 3, 2, 1)) .* (-1, -1, 1, 1, -1, -1, 1, 1), map(muladd, (Tuple(x))[7] .* (true, true, true, true, true, true, true, true), getindex.(tuple(Tuple(y)), (7, 8, 5, 6, 3, 4, 1, 2)) .* (-1, -1, 1, 1, -1, -1, 1, 1), map(muladd, (Tuple(x))[6] .* (true, true, true, true, true, true, true, true), getindex.(tuple(Tuple(y)), (6, 5, 8, 7, 2, 1, 4, 3)) .* (-1, 1, 1, -1, -1, 1, 1, -1), map(muladd, (Tuple(x))[5] .* (true, true, true, true, true, true, true, true), getindex.(tuple(Tuple(y)), (5, 6, 7, 8, 1, 2, 3, 4)) .* (1, -1, -1, 1, 1, -1, -1, 1), map(muladd, (Tuple(x))[4] .* (true, true, true, true, true, true, true, true), getindex.(tuple(Tuple(y)), (4, 3, 2, 1, 8, 7, 6, 5)) .* (-1, 1, -1, 1, -1, 1, -1, 1), map(muladd, (Tuple(x))[3] .* (true, true, true, true, true, true, true, true), getindex.(tuple(Tuple(y)), (3, 4, 1, 2, 7, 8, 5, 6)) .* (1, -1, 1, -1, 1, -1, 1, -1), map(muladd, (Tuple(x))[2] .* (true, true, true, true, true, true, true, true), getindex.(tuple(Tuple(y)), (2, 1, 4, 3, 6, 5, 8, 7)) .* (1, 1, 1, 1, 1, 1, 1, 1), map(muladd, (Tuple(x))[1] .* (true, true, true, true, true, true, true, true), getindex.(tuple(Tuple(y)), (1, 2, 3, 4, 5, 6, 7, 8)) .* (1, 1, 1, 1, 1, 1, 1, 1), (0, 0, 0, 0, 0, 0, 0, 0)))))))))))

And here is my breakdown of this function:

function cliffordnumber_vga3_mul_expanded(x::NTuple{8,T}, y::NTuple{8,T}) where T
    all_inds = ntuple(i -> xor.(i - 1, (0, 1, 2, 3, 4, 5, 6, 7)) .+ 1, Val(8))
    all_signs = (
        ( 1,  1,  1,  1,  1,  1,  1,  1),
        ( 1,  1,  1,  1,  1,  1,  1,  1),
        ( 1, -1,  1, -1,  1, -1,  1, -1),
        (-1,  1, -1,  1, -1,  1, -1,  1),
        ( 1, -1, -1,  1,  1, -1, -1,  1),
        (-1,  1,  1, -1, -1,  1,  1, -1),
        (-1, -1,  1,  1, -1, -1,  1,  1),
        (-1, -1,  1,  1, -1, -1,  1,  1)
    )
    result = ntuple(Returns(zero(T)), Val(8))
    for n in 1:8
        # Multiplication by the all-true tuple is unnecessary
        # But it is part of the generated function
        x_tup = x[n] .* ntuple(Returns(true), Val(8))
        y_tup = getindex.(tuple(y), all_inds[n]) .* all_signs[n]
        result = map(muladd, x_tup, y_tup, result)
    end
    return result
end
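As a quick sanity check (assuming Tuple of a CliffordNumber returns its backing coefficients, as in the generated function above, with x and y from the first post), this standalone version should agree with the package:

cliffordnumber_vga3_mul_expanded(Tuple(x), Tuple(y)) == Tuple(x * y)  # expected: true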

And this is the assembly it generates:

        .text
        .file   "cliffordnumber_vga3_mul_expanded"
        .globl  julia_cliffordnumber_vga3_mul_expanded_3557 # -- Begin function julia_cliffordnumber_vga3_mul_expanded_3557
        .p2align        4, 0x90
        .type   julia_cliffordnumber_vga3_mul_expanded_3557,@function
julia_cliffordnumber_vga3_mul_expanded_3557: # @julia_cliffordnumber_vga3_mul_expanded_3557
# %bb.0:                                # %top
        push    rbp
        mov     rbp, rsp
        push    r15
        push    r14
        push    r13
        push    r12
        push    rbx
        sub     rsp, 1080
        movabs  rax, offset .L_j_const1
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        vmovups ymmword ptr [rbp - 576], ymm1
        vmovups ymmword ptr [rbp - 608], ymm0
        movabs  rax, offset .L_j_const2
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        vmovups ymmword ptr [rbp - 544], ymm0
        vmovups ymmword ptr [rbp - 512], ymm1
        movabs  rax, offset .L_j_const3
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        vmovups ymmword ptr [rbp - 480], ymm0
        vmovups ymmword ptr [rbp - 448], ymm1
        movabs  rax, offset .L_j_const4
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        vmovups ymmword ptr [rbp - 416], ymm0
        vmovups ymmword ptr [rbp - 384], ymm1
        movabs  rax, offset .L_j_const5
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        vmovups ymmword ptr [rbp - 320], ymm1
        vmovups ymmword ptr [rbp - 352], ymm0
        movabs  rax, offset .L_j_const6
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        vmovups ymmword ptr [rbp - 256], ymm1
        vmovups ymmword ptr [rbp - 288], ymm0
        movabs  rax, offset .L_j_const7
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        vmovups ymmword ptr [rbp - 192], ymm1
        vmovups ymmword ptr [rbp - 224], ymm0
        movabs  rax, offset .L_j_const8
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        vmovups ymmword ptr [rbp - 128], ymm1
        vmovups ymmword ptr [rbp - 160], ymm0
        movabs  rax, offset .L_j_const9
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        vmovups ymmword ptr [rbp - 1088], ymm1
        vmovups ymmword ptr [rbp - 1120], ymm0
        vmovups ymmword ptr [rbp - 1024], ymm1
        vmovups ymmword ptr [rbp - 1056], ymm0
        movabs  rax, offset .L_j_const10
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        vmovups ymmword ptr [rbp - 960], ymm1
        vmovups ymmword ptr [rbp - 992], ymm0
        movabs  rax, offset .L_j_const11
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        vmovups ymmword ptr [rbp - 896], ymm1
        vmovups ymmword ptr [rbp - 928], ymm0
        movabs  rax, offset .L_j_const12
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        vmovups ymmword ptr [rbp - 832], ymm1
        vmovups ymmword ptr [rbp - 864], ymm0
        movabs  rax, offset .L_j_const13
        vmovups ymm0, ymmword ptr [rax]
        vmovups ymm1, ymmword ptr [rax + 32]
        mov     qword ptr [rbp - 48], rdx       # 8-byte Spill
        mov     qword ptr [rbp - 64], rsi       # 8-byte Spill
        mov     qword ptr [rbp - 56], rdi       # 8-byte Spill
        vmovups ymmword ptr [rbp - 768], ymm1
        vmovups ymmword ptr [rbp - 800], ymm0
        movabs  rax, offset .L_j_const14
        vmovdqu ymm0, ymmword ptr [rax]
        vmovdqu ymm1, ymmword ptr [rax + 32]
        vmovdqu ymmword ptr [rbp - 704], ymm1
        vmovdqu ymmword ptr [rbp - 736], ymm0
        vmovdqu ymmword ptr [rbp - 640], ymm1
        vmovdqu ymmword ptr [rbp - 672], ymm0
        vpxor   xmm0, xmm0, xmm0
        xor     eax, eax
        xor     r13d, r13d
        xor     edx, edx
        xor     r12d, r12d
        xor     edi, edi
        mov     r8, qword ptr [rbp - 48]        # 8-byte Reload
        .p2align        4, 0x90
.LBB0_1:                                # %pass23
                                        # =>This Inner Loop Header: Depth=1
        mov     qword ptr [rbp - 88], rdx       # 8-byte Spill
        mov     r15, qword ptr [rbp + 8*rax - 608]
        lea     rcx, [r15 - 1]
        cmp     rcx, 8
        jae     .LBB0_11
# %bb.2:                                # %pass27
                                        #   in Loop: Header=BB0_1 Depth=1
        mov     r14, qword ptr [rbp + 8*rax - 600]
        lea     rcx, [r14 - 1]
        cmp     rcx, 8
        jae     .LBB0_12
# %bb.3:                                # %pass33
                                        #   in Loop: Header=BB0_1 Depth=1
        mov     r9, qword ptr [rbp + 8*rax - 592]
        lea     rcx, [r9 - 1]
        cmp     rcx, 8
        jae     .LBB0_13
# %bb.4:                                # %pass39
                                        #   in Loop: Header=BB0_1 Depth=1
        mov     r10, qword ptr [rbp + 8*rax - 584]
        lea     rcx, [r10 - 1]
        cmp     rcx, 8
        jae     .LBB0_14
# %bb.5:                                # %pass45
                                        #   in Loop: Header=BB0_1 Depth=1
        mov     rdx, qword ptr [rbp + 8*rax - 576]
        lea     rcx, [rdx - 1]
        cmp     rcx, 8
        jae     .LBB0_15
# %bb.6:                                # %pass51
                                        #   in Loop: Header=BB0_1 Depth=1
        mov     rbx, qword ptr [rbp + 8*rax - 568]
        lea     rcx, [rbx - 1]
        cmp     rcx, 8
        jae     .LBB0_16
# %bb.7:                                # %pass57
                                        #   in Loop: Header=BB0_1 Depth=1
        mov     qword ptr [rbp - 80], rbx       # 8-byte Spill
        mov     r11, qword ptr [rbp + 8*rax - 560]
        lea     rcx, [r11 - 1]
        cmp     rcx, 8
        jae     .LBB0_17
# %bb.8:                                # %pass63
                                        #   in Loop: Header=BB0_1 Depth=1
        mov     qword ptr [rbp - 72], rdi       # 8-byte Spill
        mov     rcx, qword ptr [rbp + 8*rax - 552]
        lea     rsi, [rcx - 1]
        cmp     rsi, 8
        jae     .LBB0_18
# %bb.9:                                # %pass69
                                        #   in Loop: Header=BB0_1 Depth=1
        mov     rbx, rdx
        mov     rsi, qword ptr [rbp - 64]       # 8-byte Reload
        mov     rsi, qword ptr [rsi + rax]
        mov     rdi, qword ptr [r8 + 8*r15 - 8]
        imul    rdi, rsi
        imul    rdi, qword ptr [rbp + 8*rax - 1120]
        add     r13, rdi
        mov     rdi, qword ptr [r8 + 8*r14 - 8]
        imul    rdi, rsi
        imul    rdi, qword ptr [rbp + 8*rax - 1112]
        mov     rdx, qword ptr [rbp - 88]       # 8-byte Reload
        add     rdx, rdi
        mov     rdi, qword ptr [r8 + 8*r9 - 8]
        imul    rdi, rsi
        imul    rdi, qword ptr [rbp + 8*rax - 1104]
        add     r12, rdi
        vpbroadcastq    ymm1, rsi
        imul    rsi, qword ptr [r8 + 8*r10 - 8]
        vmovq   xmm2, qword ptr [r8 + 8*rcx - 8] # xmm2 = mem[0],zero
        vmovq   xmm3, qword ptr [r8 + 8*r11 - 8] # xmm3 = mem[0],zero
        mov     rcx, qword ptr [rbp - 80]       # 8-byte Reload
        vmovq   xmm4, qword ptr [r8 + 8*rcx - 8] # xmm4 = mem[0],zero
        vmovq   xmm5, qword ptr [r8 + 8*rbx - 8] # xmm5 = mem[0],zero
        vpunpcklqdq     xmm2, xmm3, xmm2        # xmm2 = xmm3[0],xmm2[0]
        imul    rsi, qword ptr [rbp + 8*rax - 1096]
        vpunpcklqdq     xmm3, xmm5, xmm4        # xmm3 = xmm5[0],xmm4[0]
        vinserti128     ymm2, ymm3, xmm2, 1
        vpmullq ymm1, ymm2, ymm1
        vmovq   xmm2, qword ptr [rbp + 8*rax - 1080] # xmm2 = mem[0],zero
        mov     rdi, qword ptr [rbp - 72]       # 8-byte Reload
        add     rdi, rsi
        vmovq   xmm3, qword ptr [rbp + 8*rax - 1088] # xmm3 = mem[0],zero
        vpunpcklqdq     xmm2, xmm3, xmm2        # xmm2 = xmm3[0],xmm2[0]
        vinserti128     ymm2, ymm2, xmmword ptr [rbp + 8*rax - 1072], 1
        vpmullq ymm1, ymm1, ymm2
        vpaddq  ymm0, ymm1, ymm0
        add     rax, 8
        cmp     rax, 64
        jne     .LBB0_1
# %bb.10:                               # %guard_exit77
        mov     rax, qword ptr [rbp - 56]       # 8-byte Reload
        mov     qword ptr [rax], r13
        mov     qword ptr [rax + 8], rdx
        mov     qword ptr [rax + 16], r12
        mov     qword ptr [rax + 24], rdi
        vmovdqu ymmword ptr [rax + 32], ymm0
        add     rsp, 1080
        pop     rbx
        pop     r12
        pop     r13
        pop     r14
        pop     r15
        pop     rbp
        vzeroupper
        ret
.LBB0_12:                               # %fail32
        movabs  rax, offset ijl_bounds_error_unboxed_int
        movabs  rsi, 140220260244864
        mov     rdi, qword ptr [rbp - 48]       # 8-byte Reload
        mov     rdx, r14
        vzeroupper
        call    rax
.LBB0_13:                               # %fail38
        movabs  rax, offset ijl_bounds_error_unboxed_int
        movabs  rsi, 140220260244864
        mov     rdi, qword ptr [rbp - 48]       # 8-byte Reload
        mov     rdx, r9
        vzeroupper
        call    rax
.LBB0_14:                               # %fail44
        movabs  rax, offset ijl_bounds_error_unboxed_int
        movabs  rsi, 140220260244864
        mov     rdi, qword ptr [rbp - 48]       # 8-byte Reload
        mov     rdx, r10
        vzeroupper
        call    rax
.LBB0_15:                               # %fail50
        movabs  rax, offset ijl_bounds_error_unboxed_int
        movabs  rsi, 140220260244864
        mov     rdi, qword ptr [rbp - 48]       # 8-byte Reload
        vzeroupper
        call    rax
.LBB0_16:                               # %fail56
        movabs  rax, offset ijl_bounds_error_unboxed_int
        movabs  rsi, 140220260244864
        mov     rdi, qword ptr [rbp - 48]       # 8-byte Reload
        mov     rdx, rbx
        vzeroupper
        call    rax
.LBB0_17:                               # %fail62
        movabs  rax, offset ijl_bounds_error_unboxed_int
        movabs  rsi, 140220260244864
        mov     rdi, qword ptr [rbp - 48]       # 8-byte Reload
        mov     rdx, r11
        vzeroupper
        call    rax
.LBB0_18:                               # %fail68
        movabs  rax, offset ijl_bounds_error_unboxed_int
        movabs  rsi, 140220260244864
        mov     rdi, qword ptr [rbp - 48]       # 8-byte Reload
        mov     rdx, rcx
        vzeroupper
        call    rax
.LBB0_11:                               # %fail26
        movabs  rax, offset ijl_bounds_error_unboxed_int
        movabs  rsi, 140220260244864
        mov     rdi, qword ptr [rbp - 48]       # 8-byte Reload
        mov     rdx, r15
        vzeroupper
        call    rax
.Lfunc_end0:
        .size   julia_cliffordnumber_vga3_mul_expanded_3557, .Lfunc_end0-julia_cliffordnumber_vga3_mul_expanded_3557
                                        # -- End function
        .type   .L_j_const1,@object             # @_j_const1
        .section        .rodata,"a",@progbits
        .p2align        3
.L_j_const1:
        .quad   1                               # 0x1
        .quad   2                               # 0x2
        .quad   3                               # 0x3
        .quad   4                               # 0x4
        .quad   5                               # 0x5
        .quad   6                               # 0x6
        .quad   7                               # 0x7
        .quad   8                               # 0x8
        .size   .L_j_const1, 64

        .type   .L_j_const2,@object             # @_j_const2
        .p2align        3
.L_j_const2:
        .quad   2                               # 0x2
        .quad   1                               # 0x1
        .quad   4                               # 0x4
        .quad   3                               # 0x3
        .quad   6                               # 0x6
        .quad   5                               # 0x5
        .quad   8                               # 0x8
        .quad   7                               # 0x7
        .size   .L_j_const2, 64

        .type   .L_j_const3,@object             # @_j_const3
        .p2align        3
.L_j_const3:
        .quad   3                               # 0x3
        .quad   4                               # 0x4
        .quad   1                               # 0x1
        .quad   2                               # 0x2
        .quad   7                               # 0x7
        .quad   8                               # 0x8
        .quad   5                               # 0x5
        .quad   6                               # 0x6
        .size   .L_j_const3, 64

        .type   .L_j_const4,@object             # @_j_const4
        .p2align        3
.L_j_const4:
        .quad   4                               # 0x4
        .quad   3                               # 0x3
        .quad   2                               # 0x2
        .quad   1                               # 0x1
        .quad   8                               # 0x8
        .quad   7                               # 0x7
        .quad   6                               # 0x6
        .quad   5                               # 0x5
        .size   .L_j_const4, 64

        .type   .L_j_const5,@object             # @_j_const5
        .p2align        3
.L_j_const5:
        .quad   5                               # 0x5
        .quad   6                               # 0x6
        .quad   7                               # 0x7
        .quad   8                               # 0x8
        .quad   1                               # 0x1
        .quad   2                               # 0x2
        .quad   3                               # 0x3
        .quad   4                               # 0x4
        .size   .L_j_const5, 64

        .type   .L_j_const6,@object             # @_j_const6
        .p2align        3
.L_j_const6:
        .quad   6                               # 0x6
        .quad   5                               # 0x5
        .quad   8                               # 0x8
        .quad   7                               # 0x7
        .quad   2                               # 0x2
        .quad   1                               # 0x1
        .quad   4                               # 0x4
        .quad   3                               # 0x3
        .size   .L_j_const6, 64

        .type   .L_j_const7,@object             # @_j_const7
        .p2align        3
.L_j_const7:
        .quad   7                               # 0x7
        .quad   8                               # 0x8
        .quad   5                               # 0x5
        .quad   6                               # 0x6
        .quad   3                               # 0x3
        .quad   4                               # 0x4
        .quad   1                               # 0x1
        .quad   2                               # 0x2
        .size   .L_j_const7, 64

        .type   .L_j_const8,@object             # @_j_const8
        .p2align        3
.L_j_const8:
        .quad   8                               # 0x8
        .quad   7                               # 0x7
        .quad   6                               # 0x6
        .quad   5                               # 0x5
        .quad   4                               # 0x4
        .quad   3                               # 0x3
        .quad   2                               # 0x2
        .quad   1                               # 0x1
        .size   .L_j_const8, 64

        .type   .L_j_const9,@object             # @_j_const9
        .p2align        3
.L_j_const9:
        .quad   1                               # 0x1
        .quad   1                               # 0x1
        .quad   1                               # 0x1
        .quad   1                               # 0x1
        .quad   1                               # 0x1
        .quad   1                               # 0x1
        .quad   1                               # 0x1
        .quad   1                               # 0x1
        .size   .L_j_const9, 64

        .type   .L_j_const10,@object            # @_j_const10
        .p2align        3
.L_j_const10:
        .quad   1                               # 0x1
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .quad   -1                              # 0xffffffffffffffff
        .size   .L_j_const10, 64

        .type   .L_j_const11,@object            # @_j_const11
        .p2align        3
.L_j_const11:
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .size   .L_j_const11, 64

        .type   .L_j_const12,@object            # @_j_const12
        .p2align        3
.L_j_const12:
        .quad   1                               # 0x1
        .quad   -1                              # 0xffffffffffffffff
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .quad   1                               # 0x1
        .quad   -1                              # 0xffffffffffffffff
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .size   .L_j_const12, 64

        .type   .L_j_const13,@object            # @_j_const13
        .p2align        3
.L_j_const13:
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .quad   1                               # 0x1
        .quad   -1                              # 0xffffffffffffffff
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .quad   1                               # 0x1
        .quad   -1                              # 0xffffffffffffffff
        .size   .L_j_const13, 64

        .type   .L_j_const14,@object            # @_j_const14
        .p2align        3
.L_j_const14:
        .quad   -1                              # 0xffffffffffffffff
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .quad   1                               # 0x1
        .quad   -1                              # 0xffffffffffffffff
        .quad   -1                              # 0xffffffffffffffff
        .quad   1                               # 0x1
        .quad   1                               # 0x1
        .size   .L_j_const14, 64

        .type   .L_j_const15,@object            # @_j_const15
        .p2align        3
.L_j_const15:
        .zero   64
        .size   .L_j_const15, 64

        .section        ".note.GNU-stack","",@progbits