I do not think the article is entirely fair, although it is of much higher quality than the average article criticizing Julia.
- The comparison is made against older and more mainstream libraries. While this is a legitimate viewpoint when choosing what to use for your work right now, it is not fair in a more general sense. The comparison would need to be between ecosystems/languages at the same level of maturity. The more interesting question to me is whether Julia will still have these problems when it reaches the age and popularity that other languages/frameworks have now.
- While I do understand the systemic argument (in fact, I apply it to subjects like racism and discrimination in general), the examples are a little lacking. Checking for aliasing does not seem to me to be the responsibility of most functions (i.e., you should not assume you can pass the same object as two distinct arguments unless stated otherwise), and the bounds problem is also more nuanced: the code may have been right for the Julia version it was written for, but inadvertently kept unchanged for newer versions. I think my disagreement is rooted in a different perspective, in which I accept that the extra flexibility of generality has the cost of making me check whether the pieces actually work well together, instead of assuming they will work flawlessly. The element of unfairness in this comparison, to me, is that we would need to compare against an equally flexible language. Python is fair (it just had a lot more time to mature); other languages do not allow the generality that Julia allows, and therefore the bugs cannot be pinned on the language itself: they will instead be pinned on each individual re-implementation of a method, because the language did not allow for generality. There are a lot of problems with OffsetArrays.jl, but most other languages do not even have something like OffsetArrays.jl, or the expectation that most code written would automatically work with custom indices (see the sketch below).
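
To make the bounds point concrete, here is a minimal sketch of the failure mode under discussion. OffsetArrays.jl is the real package; `naive_sum` and `generic_sum` are hypothetical names of mine. Code written against the implicit 1-based assumption breaks on custom indices, while the same algorithm written against the indexing interface handles both:

```julia
using OffsetArrays

# Sum written against the implicit 1-based indexing assumption.
function naive_sum(v)
    s = zero(eltype(v))
    for i in 1:length(v)
        s += v[i]    # v[2] is out of bounds for the offset axes below
    end
    return s
end

# The same algorithm written against the generic indexing interface.
function generic_sum(v)
    s = zero(eltype(v))
    for i in eachindex(v)
        s += v[i]
    end
    return s
end

v = OffsetArray([1, 2, 3], -1:1)  # valid indices are -1, 0, 1
generic_sum(v)   # 6
naive_sum(v)     # throws BoundsError
```

The generic version is no harder to write; the bug lives in the implicit assumption of the implementation, not in the mechanism the language provides.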
The conclusion of the article is a little muddy. The article is a personal recollection of facts related to a change of posture by the author, so it does not go out of its way to offer a solution, and it even admits that what it identifies as a systemic problem may be unsolvable (maybe it is inherent to high generality?). The statement "For the majority of use cases the Julia team wants to service, the risks are simply not worth the rewards." is probably the strongest claim in the article, and it is hard to rebut, not because it is right but because it is too informal (what are "the majority of use cases the Julia team wants to service", and how are you doing this risk/reward analysis for each of them?). Everyone can only argue for their own use case; in mine, I think the risk/reward is worth it. Yet the author of the article gets to make a blanket statement like this without really presenting that analysis in the article (again, it does not even compare metrics with other languages/frameworks, so the only really solid claim is that Julia has problems, not even that it is worse than the others).
I agree partially on the generality problem; by this I mean we could have what we have today (in terms of generality) but with fewer bugs. I do not think the fault lies in the language design: a trade-off was made, and I like the trade-off (it will not be the best for every use case, of course, but no language will be). I think the problem lies within the community, but not in the same sense the author of the article implies. I believe the interfaces should remain (in technical terms) the same as they are right now, but be better described by their authors, and it should be the responsibility of each module proposing an interface (including the ones in Base) to provide a test suite that checks the invariants for an object of a type that implements the interface. If the object/type passes the test suite but does not work with a function that assumes the object implements such an interface, then the problem is within the function (it understands the interface incorrectly).
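
As a minimal sketch of what such a suite could look like for Base's iteration interface (the function name and the chosen invariants are mine, purely illustrative, not an existing API):

```julia
using Test

# A sketch of the idea: the module that defines an interface also ships a
# suite that checks its invariants. `test_iteration_interface` is a name
# I made up for illustration.
function test_iteration_interface(x)
    @testset "iteration invariants for $(typeof(x))" begin
        # `iterate` returning nothing on the first call must agree with `isempty`.
        @test (iterate(x) === nothing) == isempty(x)
        # If the type advertises a length, actually walking the iterator
        # must produce exactly that many elements.
        if Base.IteratorSize(typeof(x)) isa Union{Base.HasLength, Base.HasShape}
            @test count(_ -> true, x) == length(x)
        end
    end
end

test_iteration_interface([1, 2, 3])  # a type that honors the interface passes
test_iteration_interface(())         # empty case
```

With something like this shipped by the interface's author, the question of who understood the interface correctly becomes mechanical: run the suite, and if the type passes but a function still misbehaves, the function is at fault.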