Benchmark for latest julia?

I posted this PR a few days back; it gets test/perf/micro running again on julia-0.7.0. It’s a handful of Makefile changes updating paths for libraries that have moved within the Julia source tree since 0.4.0.
https://github.com/JuliaLang/julia/pull/23922

A few tasks and decisions remain before I can produce new benchmark data for publication:

  1. Getting Fortran to call BLAS. I tried @Ralph_Smith’s suggestion above but got utterly horrific rand_mat_mul results (roughly 100× slower). I need to double-check that I linked against the right -lblas and that the interface isn’t using the wrong integer size (there’s a small dgemm_ check sketched at the end of this post).

  2. Getting R, Python, and Octave to call BLAS for rand_mat_mul. It looks like that will require recompiling and relinking those packages (instead of using my default openSUSE packages, which don’t use BLAS). I’m not keen on doing this, and I’m not sure it would be fair.

  3. Investigate the slowness of mandel in C compared to Java, JavaScript, and Lua (one place to start is sketched at the end of this post). I’d welcome help here.

  4. Rename the benchmarks. This idea has come up repeatedly as a way to clarify what each benchmark actually tests: for example, that fib measures recursion, not the best-known Fibonacci algorithm. My proposed renaming:

| old | new |
| --- | --- |
| fib | recursion_fibonacci |
| quicksort | recursion_quicksort |
| pi_sum | iteration_pi_sum |
| mandel | iteration_mandelbrot |
| parse_int | parse_integers |
| rand_mat_mul | matrix_multiply |
| rand_mat_stat | matrix_statistics |
| printfd | print_decimals |

Do those look good?
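
On item 1, here’s a minimal C sketch of the kind of check I have in mind: call dgemm_ directly against the same -lblas and time it, to separate “wrong library linked” from an integer-size mismatch in the interface. It assumes an LP64 BLAS (32-bit integers); an ILP64 build would need 64-bit integer arguments instead, and that kind of mismatch is exactly what could wreck rand_mat_mul.

```c
/* check_dgemm.c -- time one dgemm_ call against the -lblas the Fortran
 * benchmark links, independent of the Fortran driver.
 * Assumes an LP64 BLAS (32-bit integers); an ILP64 build would need
 * 64-bit integer arguments here instead. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

extern void dgemm_(const char *transa, const char *transb,
                   const int *m, const int *n, const int *k,
                   const double *alpha, const double *a, const int *lda,
                   const double *b, const int *ldb,
                   const double *beta, double *c, const int *ldc);

int main(void) {
    const int n = 1000;
    double *a = malloc(sizeof *a * n * n);
    double *b = malloc(sizeof *b * n * n);
    double *c = malloc(sizeof *c * n * n);
    for (int i = 0; i < n * n; ++i) { a[i] = 1.0; b[i] = 1.0; c[i] = 0.0; }
    const double alpha = 1.0, beta = 0.0;

    clock_t t0 = clock();
    dgemm_("N", "N", &n, &n, &n, &alpha, a, &n, b, &n, &beta, c, &n);
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* With all-ones operands, every entry of A*B should equal n. */
    printf("c[0] = %g (expect %d), %.3f s\n", c[0], n, secs);
    free(a); free(b); free(c);
    return 0;
}
```

Built with something like `cc check_dgemm.c -lblas`: a wrong or crashing result points at the integer size, while a correct but slow result points at the library itself (e.g. reference BLAS rather than an optimized build).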
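
On item 3, here’s the shape of the mandel kernel as I understand it (a sketch from memory; the constants may not match test/perf/micro exactly). One thing I’d check first is whether the C version does its escape test through cabs(), which in most libms is an overflow-safe hypot; comparing the squared magnitude, as below, avoids that call on every iteration.

```c
#include <complex.h>
#include <stdio.h>

/* Sketch of the mandel inner loop: count iterations until |z| exceeds 2,
 * capped at 80 (constants assumed, not copied from test/perf/micro).
 * Comparing |z|^2 against 4 avoids calling cabs(z) each time around. */
static int mandel(double complex z) {
    const double complex c = z;
    for (int n = 0; n < 80; ++n) {
        double re = creal(z), im = cimag(z);
        if (re * re + im * im > 4.0)
            return n;
        z = z * z + c;
    }
    return 80;
}

int main(void) {
    /* 0 is in the set (hits the iteration cap); 2 escapes almost at once. */
    printf("%d %d\n", mandel(0.0), mandel(2.0 + 0.0 * I));
    return 0;
}
```

Whether that’s actually where the time goes I don’t know; it’s just the first thing I’d profile.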
