We never quite agreed on what a roadmap really is, and people’s expectations for one varied incredibly widely, from developer-user communication to deciding whether to adopt the language in the first place. Python’s PEP process was brought up as an example, and I’ll give my 2 cents on what it is and isn’t. It’s a very good place to see the big plans, and there’s a very nice index of their approval status with associated Python versions. It does not announce 2-year or 5-year plans, nor does it detail implementation progress. If we look at the accepted and open PEPs, some don’t have associated versions, a few outlived their associated minor version, and none of them offer deadlines. That’s likely because strict deadlines for unsolved problems are basically empty promises. I think several commenters were spot-on about using a tool when it’s ready to be used, not demanding predictions for production. I wouldn’t risk a job on a prediction even if it were reasonably argued by a developer, or even if a big corporation funded a full-time team to implement all the features we want (I wish).
I also broadly agree that core developers could communicate goals and works in progress better than through a small fraction of archived JuliaCon talks, the rare blog post, or a couple of experimental bullet points in release highlights. Discourse, GitHub, and chatbots regurgitating those discussions were never where core developers announced their intentions to the public, so I’m not surprised those didn’t give answers either. It’s worth pointing out that the core language and developer communications are not actually why people use languages. Most practical libraries are not on any language’s core developers’ radar, so focusing entirely on core Julia development isn’t going to help people decide whether Julia is useful. Going back to the Python example, even high-performance Python users look to independently developed and funded projects, not CPython’s experimental and comparatively limited JIT.
You’d have to watch a JuliaCon talk to learn a couple of developers’ perspectives, but the short answer to those first 2 questions is yes. It doesn’t appear to be a solid, fundamental feature because it isn’t one. Julia was designed for JIT compilation in interactive workflows, and caching compiled code at all has been a complicated, ongoing work in progress.
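To make that interactive-workflow design concrete, here’s a minimal sketch (the function is a made-up example, and any timings you’d see are illustrative, not claims): the first call to a function triggers JIT compilation of a method specialized for the argument types, and later calls reuse the compiled native code.

```julia
# Define a simple function; nothing is compiled yet.
f(x) = x^2 + 1

@time f(2)  # first call: time includes JIT-compiling a method specialized for Int
@time f(2)  # later calls: reuse the already-compiled native code
```

This is why caching compiled code across sessions is hard to bolt on afterward: the compiled artifacts are produced lazily, per method and per argument types, as the session runs.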
Shipping a binary started with PackageCompiler, a largely independent project, shortly before Julia’s v1 release, and it’s not used often because it bundles the relatively massive Julia runtime along with all loaded packages. It’s not even how caching compiled code works for interactive workflows. JuliaC, an experimental and very incomplete feature released just last month, aims to trim the binary while still supporting a large part of the language. However, there will be additional limitations no matter how much it improves; for example, removing the JIT compiler means we can’t use the archetypically dynamic eval at runtime.
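To illustrate that last limitation, here’s a minimal sketch of the kind of runtime code generation eval does: it compiles and runs an expression that only comes into existence at runtime, which requires the compiler to be present in the running program, exactly what a trimmed AOT binary omits.

```julia
# Build an expression at runtime, e.g. from user input or generated code.
ex = Meta.parse("1 + 2")

# eval compiles and runs it on the spot; without the JIT compiler in the
# shipped binary, there is nothing to compile the new expression with.
result = eval(ex)  # 3
```

Code like this works fine in a normal Julia session or a PackageCompiler bundle (which keeps the full runtime), but it cannot survive trimming away the compiler.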
I would contest the comments that JuliaC is not a core feature, because by this point core developers are obviously prioritizing its development; but they probably meant that shipping AOT-compiled binaries was never part of Julia’s initial pitch or releases. It was also never part of any official description of the two-language problem, and the two-language problem does not imply or assert that we shouldn’t ever use two (or more) languages. If you read otherwise, you were severely misinformed; anybody can write or generate a blog post. Exaggerating the scope of the two-language problem is so common that it tops my personal list of misconceptions.
Broadly speaking, there is always work on improving compilation and performance. Nobody can centralize all the ideas and work on such a broad topic, even if there’s a GitHub issue naming it. Currently that issue just hosts a disagreement about whether to keep it open as a searchable indicator or to close it for not being tied to a specific release (worth pointing out that there are plenty of open long-term issues without an associated release). In any case, improvements will very likely be implemented incrementally across countless scattered pull requests, which you really don’t need or want to read, and that issue might only provide updated benchmarks if kept open. A benchmark with contributions from the wider ecosystem would probably be more informative than Documenter alone.