What steps should the Julia community take to bring Julia to the next level of popularity?

Besides that, I think there’s just a refusal to accept that the Julia ecosystem often fixates on the wrong things. The Julia community is perfectionist when it comes to design, but not when it comes to execution. In some cases, sloppy execution in the ecosystem can be put down to a lack of investment (e.g. the statistics or ML ecosystems). But in other cases, there’s been far more than enough investment.

Stan built a world-class autodiff system with a team of two people in about a year, and TinyGrad is now mostly usable after 2-3 years of work by one programmer. Why have 5 years of research, including several paid full-time engineers and PhDs, failed to do the same for Julia?

From what I can tell, the problem in these cases is that Julia users have the mindset of mathematicians or academics, not software engineers. If a mathematician proves a theorem but forgets to exclude the trivial case x = 0, they consider this a minor error, because the idea is correct. Adding unnecessary conditions to the theorem, on the other hand, is a big problem, as is a proof that is far too long or otherwise inelegant. In Julia, building an autodiff system that is less than fully general, or whose design is theoretically “incorrect” (tracing rather than source-to-source), strikes most people as boring.
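To make the tracing-vs-source-to-source distinction concrete, here is a minimal sketch of a tracing (“taped”) reverse-mode autodiff. It is purely illustrative (Python, no real library’s API): operations record themselves into a graph as they execute, so control flow is evaluated rather than transformed — the theoretically “inelegant” but pragmatic design PyTorch started with.

```python
class Var:
    """A traced value: records (parent, local derivative) pairs as ops run."""
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents  # tuple of (parent Var, d(out)/d(parent))

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def backward(out):
    """Reverse sweep over the recorded trace, in topological order."""
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, local_grad in node.parents:
            parent.grad += node.grad * local_grad

x = Var(3.0)
y = x * x + x   # f(x) = x^2 + x, traced as it executes
backward(y)
print(x.grad)   # f'(3) = 2*3 + 1 = 7.0
```

The trade-off the post alludes to: a tracer like this only ever sees the operations that actually ran, so an `if` or a loop is “baked in” at trace time. Source-to-source systems like Zygote instead transform the program itself, which is more general but much harder to get correct and fast.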

A programmer thinks of this as a major error, because users inevitably complain after they plug in x = 0, or because a fuzzer catches it and blocks the merge. Bugs are not acceptable; missing features and a lack of generality are. PyTorch and Jax have far more limitations than Zygote (or at least did, before PyTorch 2.0). But in return, they were reliable and fast three years ago, while we’re still struggling.

It’s very reminiscent of writings on why Lisp never caught on, e.g. “Worse is Better.” (Although I think by now, we’ve proven that correctness is just as important as ease of implementation.)
