I’m pleased to announce the release of SnoopCompile 2.2.0, a major enhancement in the suite of tools for reducing “time to first plot.” This release provides considerably better insight into the costs of type inference, allowing you to profile the actions of inference in extensive detail. It also provides a new suite of tools for detecting and analyzing “inference triggers,” cases where a previously-uninferred method is called via runtime dispatch.
As a quick visual for the new release, I’ll paste the following image from SnoopCompile’s new documentation:
This flamegraph may look familiar to those of you who have used ProfileView or similar tools to improve runtime performance; the image above is a flamegraph for type inference, allowing you to answer the questions “which methods are being inferred, with what types?” and “how long does this take?” In this case, the red bars correspond to methods owned by Julia or other packages that are “non-precompilable” (though there are tricks to change that!).
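As a rough sketch of the workflow (not taken from the post; the workload here is a stand-in for your package’s real “first plot” code), collecting inference timings and turning them into a flamegraph looks something like:

```julia
using SnoopCompile

# Collect inference timings for a workload. Run this in a fresh session,
# so that inference actually fires rather than hitting cached results:
tinf = @snoopi_deep begin
    sum(rand(10))   # stand-in for your package's real workload
end

# Build the flamegraph from the timing tree; display it with, e.g.,
# `using ProfileView; ProfileView.view(fg)`:
fg = flamegraph(tinf)

# List the "inference triggers" -- places where a previously-uninferred
# method was called via runtime dispatch:
itrigs = inference_triggers(tinf)
```

See the SnoopCompile documentation for how to interpret the results and act on the triggers.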
There is also a blog post to accompany the release that explains most of the fundamentals. (Forthcoming posts will be expanded versions of material that’s already in the SnoopCompile docs, so you don’t have to wait…) I’m also pleased to report that I’ve used this to cut latency to less than half its original value for a half-dozen or so packages that I use, and there are other happy users as well.
This is the latest step in the long-running campaign to reduce Julia’s long latencies. While using this tool involves more work on the part of package developers than other steps taken during the Julia 1.6 release, it also provides you with greater insight into how the design of your packages interacts with Julia’s compiler.
I’d like to thank @NHDaly, who kicked this off with changes to Core.Compiler and SnoopCompile that made this all possible.
Happy latency-stomping!