Julia programs now shown on benchmarks game website

Why not AOT Julia? It’s already becoming a standard thing to do when deploying Julia. Why would you AOT-compile C and not Julia? I wouldn’t call it a stunt, just a standard way of using the language for this kind of application. For example, if all you want are some simple Float64 ODE integrators, we’re setting up the diffeqpy and diffeqr solvers to soon use an AOT-compiled version for the Python and R bindings. That seems like a very similar use case (one of the benchmark problems is a simple ODE solve that’s called once!), so I’m not sure why you would exclude standard ways of using Julia from a benchmark of Julia.

If anything, it should at least get a listing there as Julia (AOT).
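For concreteness, here’s a minimal sketch of what AOT-compiling a simple ODE solve could look like with PackageCompiler.jl (the package choice, file names, and precompile script are illustrative, not the actual diffeqpy/diffeqr setup):

```julia
# Build a custom system image with OrdinaryDiffEq baked in, so an ODE solve
# pays no JIT-compilation cost when a script is run against that image.
using PackageCompiler

# precompile_ode.jl holds a representative workload, e.g.:
#   using OrdinaryDiffEq
#   prob = ODEProblem((u, p, t) -> 1.01u, 0.5, (0.0, 1.0))
#   solve(prob, Tsit5())
create_sysimage(["OrdinaryDiffEq"];
                sysimage_path = "sys_ode.so",
                precompile_execution_file = "precompile_ode.jl")
```

Running a benchmark program with `julia --sysimage sys_ode.so script.jl` then skips compilation for everything covered by the precompile script, which is essentially the “Julia (AOT)” listing being proposed.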


For reference, Dart is already benchmarked both as “Dart” and “Dart aot”:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/dart-dartaot.html

@igouy: Isaac, could you please elaborate on why you’d prefer not to do something similar for Julia? Is it just the added maintenance burden you object to, or is there a philosophical reason?


I wonder why “Dart aot” is much slower than Dart in some cases (the difference is really huge in regex-redux)… What’s happening there?

This is to be expected in languages that offer AOT compilation but don’t have a JIT… It means they need to deoptimize and basically interpret code in the binary, since they have to execute code that they couldn’t fully infer ahead of time. Such JIT-less AOT compilation could also be implemented for Julia :wink: But currently we only have AOT compilation that will still JIT-compile at runtime if it runs into code that couldn’t be fully inferred/AOT-compiled! That’s also the reason why most Julia AOT binaries still have compilation overhead, but no actual runtime overhead.
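To make the “couldn’t be fully inferred” part concrete, here’s a toy example (not from any benchmark program) contrasting code that can be compiled fully ahead of time with code that can’t:

```julia
# Fully inferable: for a Float64 argument the return type is known statically,
# so this method can be AOT-compiled with nothing left to do at runtime.
well_typed(x::Float64) = 2x + 1.0

# Not concretely inferable: the return type depends on a runtime value
# (Union{Int, String}), so callers hit dynamic dispatch. An AOT-only runtime
# would have to interpret or deoptimize here, while Julia's runtime can still
# JIT-compile the concrete method when it is first needed.
poorly_typed(flag) = flag ? 1 : "one"

using InteractiveUtils
@code_warntype poorly_typed(true)   # highlights the non-concrete return type
```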

Dart is already benchmarked both as “Dart” and “Dart aot”

and Dart snapshot and Dart exe!

Just me being curious about those different build and packaging options.