Does Debian's BenchmarkGames show representative performance?

The “gz” numbers on the benchmarks reflect the source size, after a few simplifying transformations.

source

6 Likes

That seems to have been done already.

Seems to me that the Julia community have every opportunity to make and share whatever detailed comparisons might interest them.

In terms of benchmarking infrastructure across languages, see JuliaLang/Microbenchmarks.

There are differences in how the languages are used. If you want to reflect typical use cases, note that most consumers of C/C++/Fortran/Rust code don’t sit through compilation.
But developers do.

One could also argue that Julia should be allowed to use -march=native (the default) because this is how it’s generally used, but that the other languages should not be, as they’re generally compiled with generic targets and distributed.
Of course, most code does a lousy job taking advantage of the hardware so that -march=native rarely makes much of a difference, and then the code that is optimized (like BLAS) does ship multiple versions…
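For reference, a minimal sketch of how to inspect the analogous setting on the Julia side (Julia’s counterpart to `-march` is the `-C`/`--cpu-target` startup flag; this uses only Base APIs, and the comments are just my gloss):

```julia
# Inspect what CPU target this Julia session was started with.
@show Sys.CPU_NAME       # the microarchitecture LLVM detected, e.g. "skylake" or "znver3"
@show Base.julia_cmd()   # the launch command, including the -C/--cpu-target flag in effect
```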

3 Likes

And we do match that. The JIT compilation is seen in developer space but in deployments we do system image construction and such before putting Julia into the cloud, airplanes, and other real hardware. What seems to be the more interesting benchmark these days isn’t the interactive development but the speed of the compiled deployments and how much compilation can be removed from that.
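For readers who haven’t seen that workflow, a minimal sketch of what the app/system-image construction step can look like with PackageCompiler (the package name and paths here are hypothetical placeholders, not the actual deployments being described):

```julia
using PackageCompiler

# Bundle a hypothetical package "MyService" (which defines julia_main()::Cint) into a
# relocatable, ahead-of-time-compiled app that ships without a Julia installation.
create_app(
    "path/to/MyService",        # project directory with Project.toml and the julia_main entry point
    "MyServiceCompiled";        # output directory for the bundled app
    precompile_execution_file = "path/to/MyService/precompile.jl",  # exercises typical workloads
)
```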

4 Likes

Julia in airplanes? My impression was that we could not do hard real time with Julia. Or is it for something that isn’t latency sensitive?

1 Like

That’s using Julia as a specification format and simulation tool, not to actually run on aircraft.

I can’t give more details of this specific one. But no, you can just ship binaries without compilation latency. This is Julia of 2022, we’re doing this in many applications with PackageCompiler and StaticCompiler. Using the JIT interface is really just a developer interface and generating compiled and/or static code is something we (Julia Computing) are doing daily.

@carstenbauer that’s not related.

3 Likes

Huh, this is news to me. As a fairly novice Julia programmer, I thought the JIT interface was the primary way to program in Julia.

I’ve been aware of StaticCompiler.jl and PackageCompiler.jl for some time, but neither seemed to be production-ready yet.

Are there resources for how to incorporate a binary-based/static code workflow into Julia for, e.g., reducing TTFX?

If TTFX is very inconvenient for you while developing, you can try this:
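For example, a minimal sketch of building a development sysimage with PackageCompiler (the package names and file names are just placeholders):

```julia
using PackageCompiler

# Bake heavy dependencies into a custom sysimage, then start Julia with
#   julia --sysimage dev_sysimage.so
# to skip most of their load and first-call compilation time.
create_sysimage(
    ["Plots", "DataFrames"];                               # packages to compile into the image
    sysimage_path = "dev_sysimage.so",
    precompile_execution_file = "precompile_workload.jl",  # a script exercising typical calls
)
```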

But, IMHO, it is more convenient to deal with that using a Revise-based workflow. I would not recommend that everyone start building system images.
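For completeness, the usual way to set that up is a couple of lines in ~/.julia/config/startup.jl; this is just the standard pattern from Revise’s documentation:

```julia
# ~/.julia/config/startup.jl
try
    using Revise   # track source files and apply edits to the running session
catch e
    @warn "Revise failed to load" exception = (e, catch_backtrace())
end
```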

In the end, you could deliver a binary for users of your package, without the TTFX at all (and that could be used for benchmarking in some contexts). But I personally think that need first has to be demonstrated in the specific domain/application in question.

Concerning the original question of the post, my opinion is that “performance” and “development responsiveness” are two different things. Of course there is a TTFX in Julia that you may or may not include in the benchmark. Yet I would not call that a measure of “performance”: at least in my field, things that take a few seconds to run are not important, and we care about performance for things that take hours and days to run, for which TTFX and compilation are irrelevant.

2 Likes

Thanks for that great resource! I’ll give PackageCompiler another shot.

Oh yes, Revise is awesome :grinning: I use it on a daily basis, and it’s in my startup.jl. But when developing a package requiring frequent reboots of the REPL (e.g. changing structs, or clearing obsolete function method definitions), you still have to deal with large delays. For my workflow this can be on the order of 30 seconds to ~2+ minutes for each reboot (with Julia 1.7.2).

Binaries are the default for robotics, autonomous systems, and embedded systems. That’s why I’m interested in any developments on static compilation with Julia. There are a few pain points remaining, but I’m optimistic that options for Julia binaries will keep improving :slightly_smiling_face:.

Completely agree that compilation time doesn’t matter for benchmarking final code. As an aside though, I think responsiveness and TTFX are key aspects of development speed, which is also important to consider. This has been discussed a lot though, so I won’t go down that rabbit hole here.

1 Like

Great. I meant garbage-collection-related latencies and/or things like guarantees of a certain number of executions within a certain time. My latest understanding was that you should be able to do soft real time as long as you don’t allocate; how true/hard is this as of Julia in 2022?

Surely you can do hard real time, even with allocations, if you use StaticTools.jl and StaticCompiler.jl.
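A minimal sketch of what that can look like, assuming the APIs described in the StaticTools/StaticCompiler READMEs (the function and numbers are made up for illustration):

```julia
using StaticCompiler, StaticTools

# A small GC-free program: MallocArray is manually managed memory from StaticTools,
# so nothing here relies on Julia's garbage collector or runtime.
function mean_of_squares()
    n = 1000
    a = MallocArray{Float64}(undef, n)
    for i in 1:n
        a[i] = Float64(i)^2
    end
    s = 0.0
    for i in 1:n
        s += a[i]
    end
    free(a)                    # explicit deallocation, no GC involved
    printf(c"%f\n", s / n)     # StaticTools' libc-style printf; c"..." is a StaticString
    return 0
end

# Compile to a standalone executable in the current directory;
# the result runs without the Julia runtime or JIT.
compile_executable(mean_of_squares, (), "./")
```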

1 Like

OK, pardon me if my question seems a little ignorant, since I’m not knowledgeable enough in this area, but assuming StaticCompiler/StaticTools/JET… get stable enough, what advantage do languages like C++/Rust hold over Julia, other than making programmers’ lives harder?

I would argue (with minor caveats) that StaticCompiler + StaticTools are “stable enough” for some things in production; see this thread. They will get even better over time, but they’re usable right now.

And to answer the question: Rust/C++ and other languages have other sets of features and target objectives, like interfaces for Rust and C++, memory safety for Rust, and the fact that both languages are more oriented toward systems programming. Those features do make them more useful for some tasks, so Julia will not replace them at what they do well. :slight_smile:

5 Likes

They aren’t perfect, but Julia Computing is already building binaries as parts of products. For example, the FMU Accelerator.

And that’s just the most public thing we have on this. We ship everything on the cloud as system images as well, and many of the cases where people take the simulations and bring them to actual devices use PackageCompiler/StaticCompiler (again, I cannot share too many details on this part, but some may be shared at JuliaCon!)

6 Likes

I believe PackageCompiler.jl is production-ready (1.0 came out in Feb 2020 and 2.0 in Nov 2021; it’s now at 2.0.5), for applications at least, and it is used in production (likely e.g. by the PumasAI company, which is what I guessed Chris was referring to, though I see he mentioned it for other things).

I do not know of any official production-readiness statements for any [Julia] package. I suppose it may be inferred when a 1.0 version is released, but a lot of code has been used in production before that, including Julia itself; e.g. JuMP.jl, at 10 years old, only recently hit 1.0, yet it was state-of-the-art (and, I suppose, used in production) years ago.

I understand you need to be careful when building a sysimage and know the limitations (the package manager will not update the packages baked into it); that applies in general, too, especially with experimental software, which I do not know PackageCompiler to be.

StaticCompiler.jl is pre-1.0, and I see it described as “experimental”, but neither has stopped [all] people from using such code in production (e.g. Julia itself pre-1.0, or its Threads API, marked “experimental” from 0.5 until 1.5 in 2020).

Despite all that, I see it used (and Chris also mentions it being used in production):

Anyone can make a code generator in Julia, and that’s also one way; I noticed that for this (seemingly?) restrictive scenario it is possible to compile Julia to C, MATLAB and more:

1 Like

That’s fantastic that Package/StaticCompiler are getting that level of usage! I’m excited to see what future developments are in store for those packages.

A.
Debian’s benchmark game doesn’t allow showing Julia’s fastest speed, but its fork does (though so far it’s only enabled for a few benchmarks; I convinced the maintainer to allow julia/aot, and thanks @kristoffer.carlsson for PackageCompiler.jl), and there Julia/aot is 3rd, ahead of Rust and C, only 9% slower than the fastest entry, C++:

What does it take to eliminate the 117ms “time(sys)” (fully or mostly), as with the other languages? Then Julia would be 19% faster than the current top entry.

https://github.com/hanabi1224/Programming-Language-Benchmarks/issues/291#issuecomment-1216960434

Without AoT, Julia is only 14th, 41% slower than the fastest (but not really, since Julia’s fixed overhead is misleading).
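To illustrate where that fixed overhead comes from: in a fresh Julia process the first call to a function pays JIT compilation, and later calls do not. A toy example (not the benchmark code):

```julia
# Toy illustration: the first call includes compilation, later calls measure runtime only,
# which is why a single short run overstates Julia's cost relative to AoT-compiled languages.
f(x) = sum(abs2, x)
x = rand(10^6)
@time f(x)   # first call: compile time + run time
@time f(x)   # second call: run time only
```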

B.
I would still like Julia to be faster without AoT; the extra 174 ms are compilation overhead that it might be possible to reduce. For scripting (and benchmarking), a separate non-default sysimage, e.g. one stripping out LinearAlgebra, might be useful. I’ve been asked whether any alternative sysimage is official, so could Julia include such a JuliaLite sysimage, one that would just take up space until you enable it?
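If I recall PackageCompiler’s options correctly, something in that direction can already be approximated today; a hedged sketch (assuming filter_stdlibs still requires a non-incremental build):

```julia
using PackageCompiler

# Build a slimmer sysimage that drops standard libraries the project doesn't use
# (e.g. LinearAlgebra). Anything stripped must never be loaded at runtime.
create_sysimage(
    String[];                     # no extra packages, just a trimmed base image
    sysimage_path  = "slim_sys.so",
    incremental    = false,       # filter_stdlibs requires building from scratch
    filter_stdlibs = true,
)
```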

3 Likes