Hi, I have a fairly large Gtk.jl application that takes more than a minute to start up. When I press buttons for the first time, the compile latency is also quite high. However, if I use the application a second time everything is totally snappy, so runtime performance is absolutely fine.
I was wondering whether it would actually help to make the code type-stable. Is type-stable code faster to compile?
It would be great to get an answer on this, maybe from @jeff.bezanson?
I am fine with the runtime speed of my application, but the initial "compilation" cost is very high. Profiling tells me that inference is the hot spot, so would an attempt to reduce type instabilities actually reduce the compilation/inference overhead?
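For reference, this is roughly how I profiled it (a generic sketch; `start_app` is a stand-in for my real entry point):

```julia
using Profile

Profile.clear()
@profile start_app()   # hypothetical entry point standing in for the real one
Profile.print()        # look for inference-related frames (e.g. typeinf) in the output
```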
For functions that are not runtime performance-critical, annotating arguments with @nospecialize (or x::ANY on 0.6) might be worth a try. (In other words, roughly the opposite of the direction you were asking about.) HTH.
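A minimal sketch of what I mean (`handle_click` is a hypothetical GUI callback; the point is just the annotation):

```julia
# Callbacks like this run rarely, so compile latency matters more than run speed.
# @nospecialize asks the compiler to compile one generic method body instead of
# a fresh specialization for every concrete type of `widget`.
function handle_click(@nospecialize(widget), name::String)
    println("clicked ", name, " on a ", typeof(widget))
end
```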
I’m not a dev, but one factor to consider is that if foo(x) contains a call to f(y), and the type of y is inferred, then the compiler will recursively perform inference (but not specialization) on f. As long as f(y) is actually going to be called, that’s fine, but if f(y) sits in a never-executed branch of an if, then that’s wasted work. Thus type stability might, at least in theory, lengthen compilation time, especially with deeply nested parametric types. A toy illustration follows below.
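Here is that toy illustration (both names are made up):

```julia
helper(y) = y + 1   # made-up callee

function foo(x::Int)
    if x < 0                # suppose this branch is never taken in practice
        return helper(x)    # inference still descends into helper(::Int)
    end
    return x
end
```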
Or perhaps that’s a marginal concern; I don’t know. We’re lacking tools to measure where compilation time goes. My own code takes minutes to compile, so I share your pain. I’m currently working on a package to expose Base.Profile’s data. Presumably we could take the difference between a warm run and a cold run to produce a report on compilation time, but the details are tricky. Any suggestions are welcome.
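In its simplest form, the idea is just this (a generic example, not the package):

```julia
# The first call pays for inference and codegen; the second call is pure runtime.
# The difference between the two timings approximates the compilation cost.
work(x) = sum(sin, x)

@time work(rand(1000))   # cold: includes compilation
@time work(rand(1000))   # warm: runtime only
```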
SnoopCompile is in principle great, but it currently does not work on 0.6, and, more importantly, precompile between different modules does not work. But it’s certainly the path forward.
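For context, an explicit precompile directive looks like this (`MyApp` and `run_app` are hypothetical):

```julia
module MyApp

run_app(n::Int) = n + 1

# Runs inference for run_app(::Int) when the module is precompiled and caches
# the result, so the first real call in a session has less work to do.
precompile(run_app, (Int,))

end
```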