I wonder: are there any concrete medium- to long-term plans for opportunistic caching of generated binary code (cached separately per architecture, to support computing environments with mixed CPU generations, AVX versions, etc.)?
I know this has been discussed many times, and I’m fully aware that this is not exactly low-hanging fruit. I’m just curious whether it’s on the roadmap: when advocating for Julia, one of the first things people notice and express concern about is package loading and code-generation times (“Look, this is all very nice, but I can do that much faster in Python.”).
We do have some great mitigations in place (eternal thanks to @tim.holy for Revise!), and we make sure to pre-load our notebooks ahead of time before giving a presentation - but it’s still an issue, of course. And as packages grow bigger and more complex, and as the total amount of code that applications pull in grows, I expect load and code-gen times to grow too.
I tell newcomers that Julia v1.0 is out now, that getting there had to take priority, that load times aren’t all that bad, and that this will all be sorted out in the future. And I am convinced all of that is true. But sometimes I wish I could say something at least a bit more concrete about when and how it may be sorted out, especially when trying to convince people to adopt Julia for (or at least accept Julia in) long-term projects.