Benchmark for latest Julia?

I noticed that there is a benchmark table for Julia 0.4 on the official website. Why is no one updating it for Julia 0.6?

1 Like

https://github.com/JuliaLang/julialang.github.com/issues/420

1 Like

Running the benchmarks looks like it requires current licences for Mathematica and Matlab, which are quite costly. This prevents many people from being able to run them or help out.

Someone working at a university, with knowledge of most of the languages, plenty of spare time, and access to suitably licensed machines, could do the job, but where would you find someone with all those qualifications? :grinning:

I just checked: Matlab is around 2k€ and Mathematica around 3.5k€ (single person, local license, though there are now also cloud/pay-per-use schemes), which is not nice but manageable. But actually it should (imho) be possible to redo the original campaign at the place where it was done (i’d assume MIT…)?

Also, I think academic users get cheaper purchasing options. (Microsoft Office is now given away to first-year undergraduates!)

It seems both Matlab and Mathematica have a free trial period, so one should be able to do it for free; it just requires a bit of time downloading and installing everything.

I thought about that, too. But I guess there is a reason why this is called a ‘trial period’, so there are limitations on what you can do with the results…

Thanks for pointing out the need to update the benchmarks. I’ll try to do this in the coming week. I have academic licenses, an interest in Julia benchmarking, and a benchmark that I’d like to add to the suite (if others deem it appropriate). So it’d be a good exercise for me.

9 Likes

@John_Gibson same here. I’m at a university and have access to all of these on my office computer. I’m not super familiar with all of the languages used in the original benchmark, but I could definitely run 5/6 of them without trouble (especially if I’m just handed the code!). Take a crack at it when you get a chance; I’ll do the same when I can (probably not for a few weeks).

There is just the issue of getting my hands on an Intel(R) Xeon(R) CPU E7-8850 2.00GHz CPU with 1TB of 1067MHz DDR3 RAM machine … :yum:

3 Likes

Here’s what I have so far. The machine is an Intel Core i7-3960X CPU @ 3.30GHz with 64 GB DDR3 RAM, running openSUSE LEAP 42.2. Julia is julia-0.7.0-DEV compiled with MARCH = native (not 0.6.0 as stated previously). I had to hack the test/perf/micro Makefile a bit, and I’m having trouble with the Lua, Java, Octave, and Scala benchmarks. Some of that is just getting those systems installed on the machine, but I expect I’ll figure it out.
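
For anyone reproducing the build, a minimal sketch of the Make.user setup I mean, dropped at the top of the julia source tree (MARCH is the only setting relevant to these benchmarks):

```make
# Make.user at the top of the julia source tree
# (minimal sketch; MARCH=native is the only setting these benchmarks need)
MARCH = native
```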

16 Likes

Could be nice to have various Julia versions on that plot?

4 Likes

Ah, it’s so satisfying seeing Python and R get smoked. :smoking:

Oh god, I just had a scary thought: instead of looking at the above plot and concluding that they should use Julia, will some people look at the above plot and conclude that they should use… JavaScript? The horror… the horror :skull:

4 Likes

I don’t think that would work graphically since the different Julia versions would presumably be very close to each other, and it would be hard to see the difference between them at that zoom level. Better to just use the latest Julia release in the cross language comparison and then make a separate figure for various Julia versions, all nice and zoomed in.

1 Like

The rand_mat_mul result for Fortran seems wrong; otherwise a quick comparison with the table on https://julialang.org/ seems about right. Julia improved!

I noticed that, too, and a few other discrepancies. I’ll look into it. I’ll also post the raw data table when I get back to the office Monday.

Also, I think you have to start Matlab in single-threaded mode (-singleCompThread); maybe there are other gotchas of that sort. That JavaScript point is a bit suspicious…
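
Roughly like this (a sketch, assuming the Matlab driver script in test/perf/micro is still called perf.m):

```sh
# run the Matlab micro-benchmarks with implicit multithreading disabled
matlab -nodisplay -nosplash -singleCompThread -r "perf; exit"
```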

That machine died so we can’t do that anymore. The core Julia devs aren’t at MIT anymore either so we don’t have access to educational licenses for those systems.

4 Likes

With further work I got the Octave and Java benchmarks to run.

Most of the Go benchmarks run correctly, but rand_mat_mul and rand_mat_stat segfault during calls to the gonum-blas libraries. I haven’t been able to run the Lua benchmarks for lack of gsl-shell (no binaries are available, and I can’t get it to compile).

I’m puzzled by two things: the improvement in mandel performance for Java and JavaScript, and the worsening of rand_mat_mul for Octave, Python, and R. At this point I think I should make a PR with my fixes to julia/test/perf/micro so that others can run the benchmarks and try to make sense of those changes, or maybe try to get Lua and Go working, too.

I’m inclined to wait until these questions are resolved before making a PR to julialang.org to update the benchmark data and plot, unless people here think otherwise.

Also, I like showing C == 1 on the plot, and I’d prefer ordering the languages with the generally faster ones towards the left and the generally slower ones towards the right. How does that sound to everyone?
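
Concretely, I’d normalize each language’s times by the C times and order the columns by geometric mean. A minimal sketch with made-up numbers (`times` stands in for the real results table):

```julia
# Normalize benchmark times so that C == 1, then order languages by
# geometric mean: generally faster to the left, generally slower to the right.
times = Dict("c"      => [1.0, 2.0, 0.5],    # hypothetical per-benchmark seconds
             "julia"  => [1.1, 2.2, 0.6],
             "python" => [15.0, 40.0, 9.0])

relative = Dict(lang => t ./ times["c"] for (lang, t) in times)

geomean(xs) = exp(sum(log, xs) / length(xs))
order = sort(collect(keys(relative)), by = lang -> geomean(relative[lang]))
# order == ["c", "julia", "python"]; plot columns in this order,
# with a horizontal line at 1 marking C
```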

7 Likes

Actually, where’s the test code? I’m surprised by the JavaScript performance and want to try it locally. I have all the licenses and a working machine, but I don’t know Go or Lua. Are Go and Lua popular in scientific communities? If not, we can skip their tests.

The benchmark codes are in test/perf/micro of the Julia source tree, viewable online here. There are a few problems with the Makefile that prevent the benchmarks from compiling and running on julia-0.6.0 and julia-0.7-DEV, mainly changes in the location of a few libraries (e.g. libopenblas64_.a) within the Julia source tree.
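
If anyone wants to attempt the same fixes before my PR lands, the quickest way to find the new locations is to search the source tree after a build (a sketch; exact paths depend on how you built Julia):

```sh
# from the top of the julia source tree: locate the moved BLAS artifact,
# then point the corresponding variable in test/perf/micro/Makefile at it
find . -name 'libopenblas64_*' 2>/dev/null
```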

I imagine Go and LuaJIT are included in the benchmarks not because they’re good languages for science, but because they’re prominent languages with just-in-time compilation.

1 Like