Roadmap for a faster time-to-first-plot?

I like the idea (which I was also working towards in this post) of having an interpreter that checks for compiled versions of methods before interpreting them. It could then, if desired, JIT-compile still-unknown methods after their first interpreted execution, in another thread or whenever it is idle anyway.

Wouldn’t this also solve @cce’s trade-off? That way, even the first use of a new query would be fast, and subsequent ones would run at precompiled speed…

@ChrisRackauckas, it could be interesting to explore “how to write composable AOT-compiled building blocks” and how they can be composed at runtime without JIT. I presume this already works by combining PackageCompiler (or similar) and --compile=no?

Isn’t there any way to distribute compiled binaries? It’s the norm to distribute binaries for compiled languages like C++ and Rust, so why does Julia have to distribute source code?

Where do you get the impression that Julia distributes source code (only)? https://julialang.org/downloads/

Or do you mean the fact that the stdlib is written in Julia?

Julia’s compilation model requires that the AST of each method be available for generic code. Delivering self-contained executables is a work in progress, but binary-only forms of packages do not make sense for general use.
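
To make that concrete, here is a toy illustration (the `MyUnit` type is hypothetical) of why generic code needs more than a binary: a generic method like `sum` has to be specialized at runtime for argument types the package author never saw, which requires the method’s AST/IR rather than only shipped machine code.

```julia
# A user-defined type that no package author could have compiled for in advance.
struct MyUnit
    val::Float64
end
Base.:+(a::MyUnit, b::MyUnit) = MyUnit(a.val + b.val)

# `sum` from Base now has to be specialized for Vector{MyUnit} at runtime;
# that specialization is generated from the method's AST/IR, not from a binary.
sum([MyUnit(1.0), MyUnit(2.0)])   # MyUnit(3.0)
```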

7 Likes

I see. I only know the very basics of compilers, but I wonder whether it would be possible, and helpful for solving TTFP, to compile the most commonly called signatures of certain functions when building packages and to distribute that compiled code (either LLVM IR or assembly for individual architectures) alongside the source code. That way the compiler wouldn’t have to compile all the way down during the first invocation.
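
Julia already exposes part of this idea through `precompile` statements, which request that specific signatures be compiled when the package itself is precompiled (at the time of this thread this caches type-inferred code rather than native code). A minimal sketch, using a hypothetical MyPlots module and plotline function:

```julia
# Hypothetical package MyPlots.jl: request compilation of the signatures
# users are most likely to hit first, at package-precompile time.
module MyPlots

export plotline

# Stand-in for a real plotting routine.
plotline(xs::Vector{Float64}, ys::Vector{Float64}) = (sum(xs), sum(ys))

# These run when the package is precompiled, so the first user call
# does not pay the full inference cost for this signature.
precompile(plotline, (Vector{Float64}, Vector{Float64}))

end # module
```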

Just an update for everyone in this thread. After the first time to plot analysis in

established that it really is all inference time (and hence solvable from Julia and not an LLVM issue), a new research software engineer has started in the Julia Lab to work on this problem. We have decided it’s probably best not to try to solve this specifically for Plots.jl, but more generally. We had a meeting with a bunch of compiler folks and it resulted in these issues:


So I’m not going to promise anything quick or soon (I’m not even a compiler guy! Just doing what I can!), but those are some concrete ideas for things that would reduce compiler latency for all users, and hopefully handle Plots.jl. If you’re interested, I’d follow those and cheer on any developments. This will take some time, but hopefully it can be fixed the correct way and all packages will benefit from it.

And as always, even though there’s a concrete plan, everything is always limited by resources (time, tests, money, etc.). If you do want to help out, there are likely ways to pitch in, so don’t be afraid to ask. For example, maybe one thing that could help is to put together a script/repo that runs all of the known compile-time explosion examples to make it easier for the compiler engineers to run and test changes.
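
As one possible starting point for such a script (the package names and example calls below are purely illustrative), each case can be run in a fresh Julia process so that in-session compilation caches don’t hide the latency:

```julia
# Sketch of a compile-latency harness: run each known "compile-time explosion"
# example in a fresh process and time it end to end.
cases = [
    ("Plots first plot",     `julia -e 'using Plots; @time plot(rand(10))'`),
    ("DataFrames describe",  `julia -e 'using DataFrames; @time describe(DataFrame(a=1:3))'`),
]

for (name, cmd) in cases
    println("== ", name)
    @time run(cmd)   # wall-clock time for the whole fresh-process run
end
```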

58 Likes

Maybe looking at how other languages implement tiered compilation can be useful too, e.g.

https://hacks.mozilla.org/2019/08/the-baseline-interpreter-a-faster-js-interpreter-in-firefox-70/

https://devblogs.microsoft.com/dotnet/tiered-compilation-preview-in-net-core-2-1/

Heard about this from Alan yesterday; I didn’t know it was a new hire. Nice!

This is so exciting. It will have a major impact on Julia if it becomes reality. I constantly have to make excuses for Julia being so slow when using it in production.

5 Likes

I’m curious: why doesn’t Julia compile functions only after they have been executed several times in interpreter mode? Is there a non-obvious downside, is this somehow incompatible with the language itself, or is it just too difficult to implement?

Well, one reason might be that the interpreter has only been available for a few months…

6 Likes

The other issue is probably that the function being interpreted (possibly while compiling in the background) could be something like massive_neural_network_train(), which you will call only once and which could take days even when compiled (and it would have compiled in less than a second). Once it has started executing, you can’t really switch to the compiled version. If most of the processing is done in other functions that are called repeatedly, it wouldn’t be so bad, but that would become another performance gotcha in which a small difference makes the program thousands of times slower.

You could have some flag like @compile that forces the compiler to never interpret such functions, and/or use one of the heuristics mentioned, like checking whether the function has loops, but it would still require a somewhat fast and reliable interpreter.
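
There is no such @compile macro in Julia today; as a rough sketch under that assumption, a user-level version could simply force compilation for the concrete argument types of a call before running it:

```julia
# Hypothetical `@compile` marker: compile the method for the concrete argument
# types of this call before running it, instead of letting it be interpreted.
macro compile(call)
    call.head === :call || error("@compile expects a function call")
    f = esc(call.args[1])
    args = map(esc, call.args[2:end])
    quote
        local _args = ($(args...),)             # evaluate the arguments once
        precompile($f, map(typeof, _args))      # force compilation for these types
        $f(_args...)                            # then run the call as usual
    end
end

heavy(x) = sum(abs2, x)
@compile heavy(rand(10^6))   # compiled up front, then executed
```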

1 Like

There’s progress with more precompilation, which was recently merged into Plots.jl master.

I should also point out that if all you want is some basic plot types supported directly by GR.jl, you can already get a first plot within 5 seconds (and less with precompile statements).
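
For example (an illustrative call; GR.jl provides a simple plot function of its own):

```julia
# Using GR.jl directly for a quick first plot, skipping the Plots.jl layer.
using GR
plot(1:10, rand(10))   # basic line plot straight from GR
```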

-viral

28 Likes

the new release is qualitatively faster + nicer to use

3 Likes

Indeed Plots 0.27.1 is the first release with the latency improvements (and more to come!)

17 Likes

So pumped to try this tomorrow!

1 Like

UnicodePlots is also very fast if all you need is a bar plot or simple line plots.
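
For instance (UnicodePlots renders directly in the terminal; these are its standard lineplot/barplot calls):

```julia
using UnicodePlots

# Terminal-rendered plots with very low compile latency.
lineplot(1:10, rand(10), title = "Simple line plot")
barplot(["a", "b", "c"], [3, 5, 2], title = "Simple bar plot")
```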

4 Likes

Edit: I previously claimed that PackageCompilerX is abandoned, but I was totally wrong; it is under active development. Since I am not a compiler specialist, please forget what I said and forgive my ignorance.

C++ is AOT-compiled, so this is simply solved by compiling first and then running the binary. And compile times for template-heavy C++ code are pretty bad.

4 Likes

Please consider whether you really have the depth of knowledge to lay out the situation here. Claiming a package is abandoned without knowing whether it is, is highly counterproductive (you claim PackageCompilerX is abandoned, which it definitely isn’t; the kc/wip branch was last committed to 15 hours ago, and on the contrary it represents one of our best shots at real progress here). And concluding comments such as “I am sceptical of this approach” and “But I think we can’t gain much from this method” add no new information and do not help the discussion.

2 Likes