Julia vs Zig surprise

Recently ran into two performance comparisons including Julia.


The first shows that, when using BLAS for dense matrix operations, libraries and languages like Armadillo, Eigen, Julia, Matlab, NumPy, Octave, and R give comparable performance, within a small margin.

The actual surprise comes here:

The second site benchmarks programming languages on exemplary tasks. The Julia vs. Zig benchmarks are interesting, since Zig, like Julia, uses LLVM as its optimizing compiler backend.
Not only can Zig be faster than Julia by some margin, it also consumes remarkably little memory.

Something to learn?


Input: 8000

lang    code    time    stddev  peak-mem  time(user)  time(kernel)  compiler/runtime
zig     2.zig   895ms   1.9ms   0.3MB     1720ms      0ms           zig 0.10.0-dev
julia   2.jl    1269ms  5.6ms   198.7MB   2133ms      140ms         julia 1.7.1
julia   3.jl    2051ms  36ms    203.3MB   3577ms      147ms         julia 1.7.1
zig     1.zig   3848ms  0.6ms   0.2MB     3840ms      0ms           zig 0.10.0-dev

Input: 4000

lang    code    time    stddev  peak-mem  time(user)  time(kernel)  compiler/runtime
zig     2.zig   236ms   4.1ms   0.2MB     420ms       0ms           zig 0.10.0-dev
julia   2.jl    614ms   3.0ms   195.2MB   860ms       130ms         julia 1.7.1
julia   3.jl    832ms   5.3ms   198.4MB   1200ms      157ms         julia 1.7.1
zig     1.zig   956ms   7.2ms   0.1MB     947ms       0ms           zig 0.10.0-dev

Crystal also uses a lot less memory than Julia, though more than Zig (expected, since it is garbage-collected rather than manually memory-managed), and is also fast.


different LLVM versions!

  • Julia 1.7.1 - LLVM-12.0.1
  • zig 0.10.0-dev - LLVM 13.?

Imho, other important info: there is LLVM-related work on the roadmap!

  • For example: “Separate LLVM/codegen from runtime”, etc.

See: The State of Julia | JuliaCon 2021 | Stefan Karpinski, Viral Shah, Jeff Bezanson, & Keno Fischer - YouTube


Zig does not have a runtime, so it isn’t surprising that Julia uses more memory, is it?


No, but Crystal does have a runtime and still uses a lot less memory. Chalk it up to static vs. dynamic typing?


Considering the comment in 2.jl:

The expanded form of Printf.@printf macro takes a significant time to compile, accounting for 25% of the total program runtime. Base.Ryu.writefixed(::Float64, ::Int) should already be compiled into the default system image.

The difference may come from compilation times, no?
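A quick way to see this effect (a generic sketch, not the benchmark code itself) is to time the first and second invocation of a formatted print in a fresh session:

```julia
using Printf

# In a fresh session, the first call pays the cost of compiling the
# code expanded from the @printf macro; the second call reuses it.
@time @printf("%.3f\n", pi)   # first call: dominated by compilation
@time @printf("%.3f\n", pi)   # second call: runtime only
```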

Does Crystal have a 170 MB system image that it loads on startup?


The big difference is that Julia supports eval (and defining new methods at runtime), which neither Crystal nor Zig do, as far as I’m aware. This means that the Julia runtime currently has to include LLVM so that code generated at runtime can be compiled.
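A minimal sketch of what that flexibility looks like (a generic illustration, not tied to the benchmark code):

```julia
# Julia can compile brand-new methods while the program is running,
# which is why the runtime currently bundles LLVM.
ex = :(double(x) = 2x)   # an expression constructed at runtime
eval(ex)                 # define (and eventually compile) the method now

# Base.invokelatest sidesteps world-age restrictions when calling
# a method defined after the surrounding code was compiled.
@assert Base.invokelatest(double, 21) == 42
```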


Luckily, no.

I’m sorry, but this is not very surprising given how the test is set up.

Notice that the build step for Julia is just copying the source file over:

The build step for Zig is actually compiling the code:

Thus, the Julia benchmarks include code compilation.
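One common way to separate the two on the Julia side (a generic sketch, not part of the benchmark harness) is to warm up the function before timing it:

```julia
# The first call triggers JIT compilation; subsequent calls measure
# only execution time.
work(n) = sum(sqrt(i) for i in 1:n)

work(10)                       # warm-up run: compiles work(::Int)
t = @elapsed work(1_000_000)   # now measures runtime, not compilation
@assert t >= 0
```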


The article / paper you linked to is really interesting: The Linear Algebra Mapping Problem. Current State of Linear Algebra Languages and Libraries.

It shows some patterns which can be optimized.
For instance:
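One well-known pattern of this kind (a common example; the specific pattern quoted here is omitted) is solving a linear system with the factorization-based `\` instead of forming an explicit inverse:

```julia
using LinearAlgebra

A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]

x_slow = inv(A) * b   # explicit inverse: more work, less accurate
x_fast = A \ b        # factorization-based solve: the preferred form
@assert x_slow ≈ x_fast
```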


Raised an issue
Improve matrix multiplication and solvers · Issue #43799 · JuliaLang/julia (github.com)

Excuse my ignorance here, but the paper does not define what SPD means, so far as I can tell.

If SPD is a reference to symmetric packed matrices, then I have a new package in the works that might help. PackedArrays.jl, which is still currently private but for which I can add some documentation and make public next week, defines a SymmetricPacked type to mirror LinearAlgebra.Symmetric. The columns of either the upper or lower triangle are concatenated into a vector, just as BLAS expects them.
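For illustration, this is the BLAS packed layout being described (plain Julia, not the PackedArrays.jl API, which isn’t public yet):

```julia
# Lower-triangular packed storage: the columns of the lower triangle
# are concatenated into a single vector, as BLAS routines like spmv expect.
n = 3
A = [1.0 2.0 3.0;
     2.0 4.0 5.0;
     3.0 5.0 6.0]   # symmetric

ap = [A[i, j] for j in 1:n for i in j:n]   # pack column by column
@assert ap == [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```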

I have also recently gotten a couple of PRs of mine related to packed symmetric matrices merged. BLAS has many other subroutines which take packed arrays as input, but spmv and spr were the only ones I needed:


I think SPD is semi-positive definite.


Or symmetric positive definite.

I only knew of SPD as symmetric positive definite - so that you can do Cholesky.
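For example (standard LinearAlgebra usage):

```julia
using LinearAlgebra

A = [4.0 1.0; 1.0 3.0]   # symmetric positive definite
C = cholesky(A)          # succeeds precisely because A is SPD
@assert C.L * C.L' ≈ A
```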


It’s actually defined; look at the Table 1 caption: Symmetric Positive Definite.