Did the Julia community do something to improve its correctness?

I believe what you say about CPython. In fact, a simple glance at NumPy shows over a thousand outstanding issues. But that doesn’t make Julia’s position any better, nor will it convince anyone on the fence about the topic at hand. I think one advantage of having huge monolithic packages is that you don’t need to worry as much about how each package interacts with the rest of the ecosystem.

I guess one of the examples I’d point to is something first discussed years ago: incorrect gradient bugs in certain Julia ML packages. There were multiple stories of people spending months trying to debug, until they finally realized their code was silently producing wrong results. See here and here for examples. If an AI startup ran into something like this, its competitors could be months ahead in development by now simply because they used Python. I can’t recommend Julia for ML in part because of this. Do silent incorrect-gradient bugs like this occur in Python or C++? They seem particular to Julia to me.
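For readers wondering how bugs like that even get noticed, here’s a minimal sketch of the kind of sanity check that eventually exposes them: compare the AD gradient against finite differences. This assumes Zygote and uses a toy loss function `f` I made up; a real model loss would slot in the same way.

```julia
using Zygote

f(x) = sum(sin.(x) .^ 2)          # hypothetical toy loss function

x = randn(5)
g_ad = Zygote.gradient(f, x)[1]   # reverse-mode AD gradient

# Central finite differences as an independent reference
h = 1e-6
g_fd = map(eachindex(x)) do i
    xp = copy(x); xp[i] += h
    xm = copy(x); xm[i] -= h
    (f(xp) - f(xm)) / (2h)
end

# If AD were silently wrong, this check would flag it
@assert isapprox(g_ad, g_fd; rtol=1e-4) "AD gradient disagrees with finite differences"
```

The point of the stories above is precisely that nothing forces you to run a check like this, so the wrong gradients go unnoticed for months.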

The overarching theme remains: it is the attitude with which these bugs are approached that is the issue (the “culture”). In a post above, it was mentioned that most bugs tagged with correctness aren’t release-blocking, and that raises the question: why? I don’t buy the reason that they’re corner cases few people run into…after 10 years, the bugs that still show up had better be corner cases, right? Maybe it’s a communication issue around what the priorities are or how they’re determined - I have no knowledge of the inner workings of the repo, and neither do most other end users - but it seems odd that priority isn’t given to issues that silently produce incorrect results. To me, correctness is far and away the most important issue, not improved performance, and it doesn’t seem like everyone agrees.

I watched the State of Julia talk given this year and saw zero mentions of correctness issues. Maybe I missed something, but it seemed like the perfect opportunity to address the elephant in the room.
