I just posted a new blog post on recent work reducing Julia’s latency, a.k.a. “time to first plot” and even “time to second plot (after loading more code)”. It describes our work on diagnosing and eliminating invalidations, events that force Julia to recompile previously-compiled code. The post focuses on (1) explaining the underlying ideas and (2) briefly summarizing the overall progress we’ve made so far.
I’m writing this post to emphasize to package developers the third arm of this effort: the existence of tools that can be used to help resolve similar problems in your own packages. If you’re using Julia’s master branch you may already be getting some benefit from the improvements to the compiler, Base, and the standard libraries, but you can pitch in to help make it even better. This effort comes in essentially three flavors:
- while Julia itself has made a lot of progress, there’s still more that can be done. Most of the remaining vulnerabilities seem to be in string processing, an area that hasn’t received much attention yet. Even simple packages like FilePathsBase still trigger hundreds of invalidations, and we could use one or more heroes to step forward and get this straightened out.
- in areas where Julia is (or will become) near-bulletproof, the next frontier will be package interactions. The basic idea is that you might load one package, and then the next package you load trashes some of the compilation work of the previous one. (This happens if packages do a fair amount of “work” as they initialize, or if you use them interactively and then load more packages.) If this matters to you, there’s room for lots of heroes here, because we have lots of packages. It’s also worth noting that invalidations have proven to be a smoking gun signaling opportunities for improving code, so you may well find such efforts rewarded with better runtime performance as well as lower latency.
- one of the byproducts of reducing invalidations is that `precompile` statements work better than they used to, because the work of precompilation is not being invalidated as frequently as it once was. If you’ve not done so, the ramp-up to 1.6 might be a good time to consider adding precompile files so that your packages let you start doing real work faster (a minimal sketch follows this list). This can help reduce inference time today. For the future, it’s possible that the work squashing invalidations will make it viable to precompile and cache “native” code, and that would entirely eliminate the cost of compilation when you first start executing code.
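To make that last point concrete, a precompile file is usually just a function full of `precompile` calls that you invoke at the end of your module. The sketch below is hypothetical: `process` and `summarize` are placeholder names standing in for your package’s own entry points.

```julia
# src/precompile.jl -- hypothetical sketch; `process` and `summarize`
# are placeholders for your package's actual functions.
function _precompile_()
    # Only do this work when Julia is generating a precompile cache.
    ccall(:jl_generating_output, Cint, ()) == 1 || return nothing
    # Each call asks Julia to run inference on the given signature now,
    # so users don't pay that cost on first use.
    precompile(process, (Vector{Float64},))
    precompile(summarize, (String, Int))
    return nothing
end
```

You would then `include("precompile.jl")` and call `_precompile_()` near the bottom of your `module ... end` block.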
Now to the tools themselves. The first place to start is SnoopCompile, specifically the `@snoopr` macro and its associated analysis code and interactive tools. It takes a little while to wrap your head around, but if you’re already at least a bit comfortable reading the output of `@code_warntype`, you’ll master it quickly. (I’ve posted a video of an interactive session that might help newcomers to this topic.) Following the trail of the invalidations quickly leads you to inferrability problems, and the remarkable Cthulhu (which now integrates with SnoopCompile) lets you pinpoint the origin of the problem very quickly.
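For orientation, a typical `@snoopr` session looks roughly like the following; `SomePkg` is a hypothetical stand-in for whatever package you’re investigating, and the exact API is whatever your SnoopCompile version documents.

```julia
using SnoopCompileCore
# Record every invalidation triggered by loading the package.
invalidations = @snoopr using SomePkg   # SomePkg is a hypothetical example

using SnoopCompile                      # load the analysis code afterwards
trees = invalidation_trees(invalidations)
# Each tree groups invalidations by the method definition that triggered them;
# the trees are sorted so the worst offenders come last.
show(trees[end])
```

From there, Cthulhu’s `ascend` (the integration mentioned above) lets you walk up the chain of callers to find where inference lost track of the types.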
A different tool in SnoopCompile, `@snoopi`, is useful for measuring how much time is spent on inference and also for setting up precompile statements that ensure your package lets you start doing real work sooner. There is even a bot that allows you to automate the process of precompile-file maintenance.
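A sketch of that workflow, following the pattern from the SnoopCompile documentation; the workload script path is a hypothetical placeholder.

```julia
using SnoopCompileCore
# Time inference while running a representative workload
# (only method instances costing at least 10 ms are kept).
tinf = @snoopi tmin=0.01 include("examples/workload.jl")   # hypothetical script

using SnoopCompile
# Group the inferred method instances by the package that owns them...
pc = SnoopCompile.parcel(tinf)
# ...and write one precompile file per package, ready to `include`.
SnoopCompile.write("/tmp/precompile", pc)
```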
A more first-principles approach is MethodAnalysis, which can survey and answer questions about the “output” of Julia’s compiler, the various compiled method instances. MethodAnalysis facilitates broad analysis (the “bird’s-eye view” of your compiled code) as well as hunting down specific triggers of invalidation that are hard to discover by other means. I don’t recommend starting here, just because it’s more of a Swiss Army knife than a focused tool, but it has been quite useful so far.
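As a small taste of the kind of survey MethodAnalysis supports (assuming its documented `methodinstances` and `visit` functions; the targets here are arbitrary):

```julia
using MethodAnalysis

# All MethodInstances compiled for `sum` in this session, across argument types.
mis = methodinstances(sum)

# Walk everything reachable from Base and count the compiled MethodInstances.
n = Ref(0)
visit(Base) do item
    item isa Core.MethodInstance && (n[] += 1)
    return true     # returning true descends into the item's children
end
@show length(mis) n[]
```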
If folks want to take a stab at this effort but find they need help, I’m happy to offer a bit of coaching as time permits, either here or on Zulip or Slack.