This site benchmarks programming languages on exemplary tasks. The Julia vs. Zig benchmarks are interesting, since Zig, like Julia, uses LLVM as its optimizing compiler backend.
Not only can Zig be faster than Julia by a fair margin, it also consumes remarkably little memory.
The expanded form of the Printf.@printf macro takes a significant amount of time to compile, accounting for 25% of the total program runtime. Base.Ryu.writefixed(::Float64, ::Int) should already be compiled into the default system image.
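As a quick illustration, calling the Ryu routine directly sidesteps the macro-expanded formatting code entirely. Note that Base.Ryu is an internal, unexported API and could change between Julia versions; this is only a sketch of the point above:

```julia
using Printf

# First use of @printf triggers compilation of the code the macro expands to.
@printf("%.2f\n", 3.14159)

# Base.Ryu.writefixed is internal API, but its Float64 method is already
# compiled into the default system image, so there is no such first-call cost.
s = Base.Ryu.writefixed(3.14159, 2)  # "3.14"
```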
The difference may come from compilation times, no?
The big difference is that Julia supports eval (and defining new methods at runtime), which neither Crystal nor Zig does as far as I'm aware. This means the Julia runtime currently has to bundle LLVM so that code generated at runtime can be compiled.
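For anyone unfamiliar with what that means in practice, here is a minimal sketch of defining a brand-new method at runtime from an expression built on the fly, which is exactly the kind of thing that forces the runtime to carry a compiler:

```julia
# Build an expression at runtime and evaluate it: this defines a new
# generic function `double` that did not exist when the program started.
ex = :(double(x) = 2x)
eval(ex)

# The freshly generated method is compiled by the bundled LLVM on first call.
double(21)  # 42
```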
excuse my ignorance here, but the paper does not define what SPD means so far as i can tell.
if SPD is a reference to symmetric packed matrices, then i have a new package in the works that might help: PackedArrays.jl [EDIT: now SymmetricFormats.jl, due to its small Damerau-Levenshtein distance from PartedArrays.jl]. it is currently still private, but i can add some documentation and make it public next week. it defines a SymmetricPacked type to mirror LinearAlgebra.Symmetric: the columns of either the upper or lower triangle are concatenated into a vector, just as BLAS expects them.
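to make the layout concrete, here is a small sketch of packing the upper triangle column by column into a vector (the helper name `pack_upper` is just for illustration, not the package's API):

```julia
# Pack the upper triangle of a symmetric matrix column by column into a
# vector, matching the layout BLAS "packed" (AP) routines expect.
function pack_upper(A::AbstractMatrix)
    n = size(A, 1)
    AP = similar(A, n * (n + 1) ÷ 2)
    k = 1
    for j in 1:n, i in 1:j
        AP[k] = A[i, j]
        k += 1
    end
    return AP
end

pack_upper([1.0 2.0; 2.0 3.0])  # [1.0, 2.0, 3.0]
```

this stores only n(n+1)/2 elements instead of n², which is the whole point of the packed format.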
i have also recently gotten a couple of PRs of mine merged that relate to packed symmetric matrices. BLAS has many other subroutines that take packed arrays as input, but spmv and spr were the only ones i needed:
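for reference, the corresponding wrappers in recent Julia versions look like this (BLAS.spmv! and BLAS.spr! operate directly on the packed vector; check your Julia version, since these wrappers were added relatively recently):

```julia
using LinearAlgebra

n = 4
A = Symmetric(rand(n, n))                 # uses the upper triangle by default
AP = [A[i, j] for j in 1:n for i in 1:j]  # upper triangle, packed column-major
x = rand(n)
y = zeros(n)

# y = 1.0*A*x + 0.0*y, with A read from packed storage ('U' = upper triangle)
BLAS.spmv!('U', 1.0, AP, x, 0.0, y)

# symmetric rank-1 update in place: AP now holds A + 2*x*x' in packed form
BLAS.spr!('U', 2.0, x, AP)
```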