Julia programs now shown on benchmarks game website

Well, then you should just translate the Chapel code to Julia: you will get a better program!

1 Like

That’s exactly what I did.

Since you have the data, any chance you could make a list/table of each Julia program sorted by gzipped length relative to the shortest program for each benchmark?

1 Like

Of course, I’m just not quite sure what exactly you are asking for. Ten tables, one for each benchmark, with only Julia programs (but all of them), sorted by the factor of gzipped length vs. the shortest length?

If you prefer, I can also clean up my messy script and provide that.

Edit: here we go, the script to get the data and create the plot.

1 Like

I’m way out of my depth in this conversation in general, but I wanted to step in for just this moment and say that Haskell is a very nice, very fun language to learn and use. It’s not as pragmatic as Julia (… or, like, any other language), but learning it is a real treat. Programming in general is meat and potatoes (and some of it is boring veggies). Haskell is dessert.

2 Likes

The Julia version on the website got updated to 1.3. Unless I managed to gaslight myself, this made the benchmarks run a bit faster: Toy benchmark program busy seconds compared to the fastest for selected programming language implementations.

Julia is now listed ahead of Java, which is an impressive achievement. The only remaining GCed language which is still ahead of Julia on the benchmarks game is C#.

15 Likes

A few months back I ran some of the C and Julia benchmarks on my laptop. Excluding compilation time, Julia was much faster than what is shown on the website and typically on par with C. I wonder what would happen to your plot if you removed compilation time, i.e. ran the code twice and timed only the second run, after a first “warm-up”.
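As a minimal sketch of that “run twice” approach (the `work` function here is just a hypothetical stand-in for a benchmark kernel):

```julia
# Hypothetical stand-in for a benchmark kernel.
work(n) = sum(sqrt(i) for i in 1:n)

work(10)                 # first call: triggers JIT compilation of work(::Int)
@time work(10_000_000)   # second call: same method instance, so only runtime is measured
```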

Including JIT warm-up time is an intentional design decision of the benchmarks game. It hurts languages like Java as well. Many of the tasks are compute-intensive enough that Julia’s compilation time doesn’t matter much, but not all of them are. I think Julia’s high rank, even including compilation, testifies strongly to the strength of Julia’s approach.

7 Likes

If that’s the case, they should include compilation time for the AOT languages.

11 Likes

There’s some sense in their approach: with the AOT languages you can compile a binary once, run it multiple times, and even distribute it. For Julia, JIT compilation has to happen again every time you quit and restart julia.

However, perhaps this means that there’s room for us to pull a stunt with PackageCompiler where we do a static step, building a custom sysimage for the Benchmarks Game tests with all methods precompiled.
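As a rough sketch of what that static step might look like, assuming PackageCompiler.jl’s `create_sysimage`; the file names are hypothetical, and `warmup.jl` would be a script that runs each benchmark program once so its methods get compiled into the image:

```julia
using PackageCompiler

# Build a custom sysimage; the benchmark programs are plain Julia, so no extra
# packages need to be baked in. Everything executed by warmup.jl is precompiled.
create_sysimage(Symbol[];
    sysimage_path = "benchgame_sysimage.so",
    precompile_execution_file = "warmup.jl",
)
```

Launching `julia --sysimage benchgame_sysimage.so nbody.jl` would then start with those methods already compiled, skipping the JIT warm-up.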

6 Likes

Dropping the compilation time can be a bit misleading even for AOT languages. Languages like C++ and Rust move some computations to compile time; this places them ahead of C and Fortran in some benchmarks, which doesn’t reflect their actual runtime performance. Also, compilation time in Julia seems mostly constant regardless of the input size, which is really great.

2 Likes

We are quite capable of this in Julia as well. Compile-time computation is incredibly important for high-performance computing in general, and I see no reason why it should be regarded as cheating the benchmarks.
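As a minimal sketch of moving work to compile time in Julia (the function name is made up): the body of a `@generated` function runs during compilation, once per argument-type combination, so a value encoded in the type domain can be folded into a literal:

```julia
# The generator body executes at compile time; the generated method body is just
# the resulting literal, so no loop runs at call time.
@generated function sum_squares(::Val{N}) where {N}
    s = 0
    for i in 1:N          # runs while compiling the specialization for this N
        s += i^2
    end
    return :($s)
end

sum_squares(Val(100))     # the 1:100 loop already happened at compile time
```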

4 Likes

Languages like C++ and Rust move some computations to compile time

Please be specific — which programs?

1 Like

We are quite capable of this in Julia as well.

Please don’t.

1 Like

I think that this is considered cheating by the rules of the game.

Also, I am not sure if the Julia community really needs to prove anything so desperately at this point that rigging this game should even be considered (assuming we could, which I doubt). I think it is widely recognized that this is just a game, not a competition to pick the “fastest” language.

4 Likes

Why not AOT Julia? It’s already becoming a standard thing to do when deploying Julia. Why would you AOT C and not Julia? I wouldn’t call it a stunt, just a standard way of using the language if this is the application. For example, if all you want are some simple Float64 ODE integrators, we’re setting up the diffeqpy and diffeqr solvers to soon use an AOT compiled version for the Python and R bindings. It seems like this is a very similar use case (one of the problems is a simple ODE solve that’s called once!), so I’m not sure why you would exclude standard ways of using Julia from a benchmark on Julia?

If anything, it should at least get a listing there as “Julia (AOT)”.

8 Likes

For reference, Dart is already benchmarked both as “Dart” and “Dart aot”:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/dart-dartaot.html

@igouy: Isaac, could you please elaborate on why you’d prefer not to do something similar for Julia? Is it just the added maintenance burden you object to, or is there a philosophical reason?

6 Likes

I wonder why “Dart aot” is much slower than Dart in some cases (the difference is really huge in regex-redux)… What is happening there?

This is to be expected in languages that offer AOT compilation but don’t have a JIT… It means they’ll need to deoptimize and basically interpret code in the binary (since they need to execute code that they haven’t fully inferred ahead of time). Such a JIT-less AOT compilation could also be implemented for Julia :wink: But currently we only have AOT compilation that will still JIT-compile at runtime if it runs into code that couldn’t be fully inferred/AOT-compiled! That’s also the reason why most Julia AOT binaries still have compilation overhead, but no actual runtime overhead.

Dart is already benchmarked both as “Dart” and “Dart aot”

and Dart snapshot and Dart exe!

Just me being curious about those different build and packaging options.