Benchmarks are often not very interesting for people doing numerics, so I have (with the help of two colleagues) tried to make simple but, we hope, interesting numerical benchmarks. We focused on coding and testing numerical algorithms. As an example, consider Gaussian elimination: it is not something you would use in practice, since you would call, say, LAPACK routines instead; but Gaussian elimination is a prototype of array manipulation.
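For concreteness, here is a minimal sketch of what such a kernel looks like in Julia: unpivoted Gaussian elimination with back substitution. This is illustrative only, not the benchmark code itself (and, again, not what you would use in production):

```julia
using LinearAlgebra  # only for the `I` in the demo below

function gauss_solve!(A::Matrix{Float64}, b::Vector{Float64})
    n = size(A, 1)
    # forward elimination (no pivoting, for illustration only)
    for k in 1:n-1, i in k+1:n
        m = A[i, k] / A[k, k]        # elimination multiplier
        for j in k:n
            A[i, j] -= m * A[k, j]   # update row i
        end
        b[i] -= m * b[k]
    end
    # back substitution
    x = similar(b)
    for i in n:-1:1
        s = b[i]
        for j in i+1:n
            s -= A[i, j] * x[j]
        end
        x[i] = s / A[i, i]
    end
    return x
end

A = rand(4, 4) + 4I   # diagonally dominant, so safe without pivoting
b = rand(4)
x = gauss_solve!(copy(A), copy(b))
@assert A * x ≈ b
```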
We have compared C++, Python, Python+Pythran, Python+Numba (with some variations), and Julia on a small set of problems, ranging from very simple ones (linear combination) to WENO solutions of (simple) hyperbolic PDE problems.
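To give an idea of the simplest end of that range, here is a hypothetical linear-combination kernel and how one might time it in Julia with BenchmarkTools; the function name and setup are assumptions, not taken from our suite:

```julia
using BenchmarkTools

# Hypothetical linear-combination kernel: y = a*x + b*y over dense vectors.
function lincomb!(y, a, x, b)
    @inbounds @simd for i in eachindex(x, y)
        y[i] = a * x[i] + b * y[i]
    end
    return y
end

x = rand(10^6); y = rand(10^6)
@btime lincomb!($y, 2.0, $x, 3.0)
```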
But we are not sure that our Julia code cannot be improved. If someone wants to have a look and propose improvements, you are welcome.
I didn’t look at the Julia code, but skimming through the benchmarks, it looks like Julia did quite well.
EDIT: I haven’t actually messed with sparse matrices, but I’m surprised all the languages seemed to hang around 3x slower than C++ for many of those benchmarks. I would have thought runtime would be dominated by a sparse matrix library?
Also, the stiffness data wants to be an SArray/MArray. In C++ you tell the compiler the size of your tensors, while the Julia compiler does not get that information. Since the sizes are 2 x 3 x 6, i.e. tiny, this can make a very large difference. (You don’t need to hard-code sizes; a type-unstable function boundary for entering the hot loop is very pragmatic and fast.)
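Here is a minimal sketch of that function-barrier pattern (all names are illustrative, not from the benchmark repo): the small array is converted to an SArray once, at a type-unstable boundary, so the hot loop runs on statically sized data.

```julia
using StaticArrays

# The hot loop sees a statically sized array, so the sizes are known to
# the compiler and the loops can be fully unrolled.
function accumulate_stiffness(stiff::SArray{Tuple{2,3,6},Float64})
    s = 0.0
    @inbounds for k in 1:6, j in 1:3, i in 1:2
        s += stiff[i, j, k] * i   # stand-in for the real kernel
    end
    return s
end

# Type-unstable boundary: `size(stiff)` is only known at run time, so this
# call dispatches dynamically, but that cost is paid once, outside the loop.
solve_case(stiff::Array{Float64,3}) =
    accumulate_stiffness(SArray{Tuple{size(stiff)...}}(stiff))

solve_case(rand(2, 3, 6))
```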
Very interesting.
You can propose a modification (that is to say, a pull request).