Why are macros re-precompiled?

I am trying to improve load time of a package of mine.

I found that a lot of this time was spent type-inferring macro-related functions, for example:

  • Through using @match (Match.jl):
    Match.gen_match_expr(::String, ::LineNumberNode, ::Expr), etc

  • Through using @showprogress (ProgressMeter.jl):
    ProgressMeter.showprogress(::Float64, ::Vararg{Any}), etc

So, I added dummy uses of those macros to the precompilation function of an upstream, less-frequently-updated package, and confirmed with @snoopi_deep that all those macro-related methods were indeed type-inferred on using UpstreamPkg.
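For reference, a minimal sketch of what such a dummy-use precompile workload could look like (UpstreamPkg and the exact workload are hypothetical stand-ins, not the real package):

```julia
# Hypothetical sketch: put dummy uses of the macros in the upstream package,
# so the macro expansion (and the inference of Match.gen_match_expr,
# ProgressMeter.showprogress, etc. that it triggers) happens while
# UpstreamPkg is being precompiled, rather than at first use downstream.
module UpstreamPkg

using Match
using ProgressMeter

function _precompile_workload()
    @showprogress for i in 1:2
        @match i begin
            1 => nothing
            _ => nothing
        end
    end
end

# Merely containing the macro calls already forces expansion at precompile
# time; calling the function additionally compiles the expanded code.
_precompile_workload()

end # module
```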

But! After running using UpstreamPkg, and then using @snoopi_deep on using DownstreamPkg¹, all those methods were type-inferred again.
What could be the issue?

I checked with @snoopr whether my code or its dependencies introduced invalidations in these macro packages, but that didn’t seem to be the problem.
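A typical @snoopr check looks roughly like this (a sketch using the SnoopCompile API; DownstreamPkg is a placeholder for my package):

```julia
using SnoopCompileCore
# Record any method invalidations caused by loading the package:
invalidations = @snoopr using DownstreamPkg

# Load the analysis code only afterwards, so it doesn't pollute the recording:
using SnoopCompile
trees = invalidation_trees(invalidations)  # group invalidations by root cause
```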


¹ This DownstreamPkg is the VoltoMapSim pkg from the previous post.

To clarify, are you saying that the macro function itself is being re-inferred?

EDIT:
I don’t understand why the macros would need to run when you load the package; they should be “done” by the time the package has been precompiled. That is, all you should need is the result of macro expansion, not the actual macro code itself. So there’s a bit of a mystery here.

Original:

If I had to guess, this is what I think is happening: the macro code itself needs to be compiled, but there are no backedges linking the macro code to any method owned by your package. Thus, by default the macros themselves are not cached in a *.ji file. The easy solution: we should add precompilation to Match.jl and ProgressMeter.jl. A simple workload exercising, e.g., @showprogress before close-of-module for ProgressMeter would likely do the trick. Want to give that a try and see if it works?
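A hedged sketch of what that could look like in ProgressMeter itself (the surrounding module contents are elided; the exact workload is an assumption):

```julia
module ProgressMeter
# ... existing macro and function definitions ...

# Precompile workload (sketch): expand and run @showprogress once before
# close-of-module, so the macro machinery is inferred during precompilation
# and cached in the package's *.ji file.
let
    @showprogress for _ in 1:1
        nothing
    end
end

end # module
```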

If it does and you submit a PR, ping me here; I’m not really monitoring my packages right now because PR review takes boatloads of time, and I’m spending my limited Julia dev time on deeper TTFX issues that should benefit all.


Thank you for your reply.


Turns out the problem was __precompile__(false) in my downstream package.

For some reason, that caused the macro-related functions to be type-inferred on using DownstreamPkg.

Without it (i.e. the default, do precompile), the macro-related functions do not show up in @snoopi_deep.
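For context, the flag sits at the top of the module, roughly like this (DownstreamPkg stands in for my package):

```julia
module DownstreamPkg

# Disables caching to a *.ji file, so everything, including the macro
# helper functions like Match.gen_match_expr, is re-inferred on `using`:
__precompile__(false)

# ... package code using @match and @showprogress ...

end # module
```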



On my earlier claim that the macro-related methods were type-inferred on using UpstreamPkg:

…that was not actually true.
It was only the case for @snoopi_deep include("src/UpstreamPkg.jl"), not for @snoopi_deep using UpstreamPkg.



So in conclusion, all is good, and do not turn off precompilation for a dev package.

(My reasoning for turning it off was: I restart the IJulia kernel often and modify the package often, and after a kernel restart the entire package is re-precompiled if I touched just one of its files. But that is actually fine: precompilation (without a custom precompile workload, at least) is fast enough.)


Which one? That was the default and then it changed, so I assume this was added to fix something. I’m just curious: when you have to do this, what needs fixing? Is it simply old code that needs to be removed?

You might want to run julia like:

julia -O0 --inline=no

I converted one (Common Lisp) user to Julia using that trick; he was complaining that Julia was not as interactive, with code recompiling for 10+ seconds. I’m not sure it involved packages, though. Inlining is better for runtime speed, which is why it’s the default, but worse for development latency (as is all optimization, so consider --compile=min as well); I guess -O0 disables it too, but I’m not sure.

If this works well, I suppose an option or default for VS Code for this might be valuable. It would just have to be clear that it’s done, and how to turn it off.

I’m not sure I follow what you mean?
I added __precompile__(false) to my top-level package (‘VoltoMapSim’) for this reason:

As to the reason why the session needs to be restarted: sometimes a package or your own program hangs (e.g. an accidental infinite loop/recursion, or multi-threading issues). Or you want to run the Jupyter notebook or script ‘from scratch’, top to bottom. Or, very commonly: you updated a package. Or you redefined a struct (and forgot to use the Revise.jl temporary-renaming trick). Or you want to move code from the notebook/script into a package (while keeping the same names).

The command line flags are interesting. In a quick test they didn’t seem to make much of a difference, though. It also feels a bit icky to leave runtime performance on the table for when you need it.