It is similar in R, especially with Bioconductor packages; I don’t know about Python. For me, the huge advantage of Julia is that packages are typically written in Julia itself (and not in C, as is often the case for R). This specific property of the Julia ecosystem makes me prefer Julia over other languages: I can solve my specific problem partly with existing packages, and from there I can proceed much more easily.
This is my main personal “Julia solves the two-language problem” argument (among others).
Of course Julia has some issues (more or less, depending on taste), but it also has quite a few advantages, like every language. In this respect, the conclusion “I no longer recommend Julia” doesn’t make much sense, because nobody recommends any language without some “but”s; in the end, there is no perfect language (yet). I am pretty sure that the main and most serious issue, that “Julia results are silently incorrect (OffsetArrays as an example)”, can be constructed for every language and its ecosystem.
I’m not saying it shouldn’t be addressed, just that such issues are more likely to be found in younger languages. Concluding not to recommend the language in a general sense is overly dramatic. There are languages that shouldn’t be recommended for real-world tasks at all, but Julia is not one of those esoteric languages.
Yes, the community’s response in resolving filed issues has been fantastic, but my point was more about finding the issues in the first place, where Yuri seems to be the one taking the lead. I wish more people would participate in this.
Haha that’s awesome, thanks for putting in the time. Unfortunately, a lot of the time it ends up being whoever feels most strongly about a problem who has to fix it.
I do think people are responding to the post seriously, but open source is slow and we’re all busy.
For example, I’ve got the start of an Interfaces.jl package written to formalise interface testing (or at least provide some ideas for it); that was the part of the discussion I felt most strongly about. But it will be a few months at least until I can actually register it.
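To give a flavor of what formalised interface testing could look like, here is a rough sketch for the iteration interface. All names here are hypothetical illustrations, not the actual Interfaces.jl API:

```julia
# Hypothetical sketch of interface testing; none of these names come
# from the actual Interfaces.jl package.
function test_iteration_interface(x)
    # An iterable must implement `iterate(x)`, returning either `nothing`
    # (empty) or an (element, state) tuple.
    @assert hasmethod(iterate, Tuple{typeof(x)})
    it = iterate(x)
    @assert it === nothing || it isa Tuple
    if it !== nothing
        el, state = it
        # ...and `iterate(x, state)` for subsequent elements.
        @assert hasmethod(iterate, Tuple{typeof(x), typeof(state)})
    end
    return true
end

test_iteration_interface(1:3)    # ranges satisfy the iteration interface
test_iteration_interface("abc")  # strings iterate characters, also fine
```

A real package would of course need to cover many more interfaces (indexing, broadcasting, etc.) and report failures more helpfully than bare assertions.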
Very well written article! I do have one point of criticism: the description of object-oriented programming. It is a common mistake to make, given that not many people are familiar with older OOP implementations. Common Lisp, for example, does not couple methods to classes. It behaves very similarly to Julia, except that multiple-dispatch methods in Common Lisp define behavior for classes, not types. Additionally, the rules for how the most applicable method is chosen can be customized and extended with the meta-object protocol. In fact, the entire object system can be extended in this way, in the same way macros allow you to extend the language’s syntax. Mind you, Common Lisp’s object system is more than a decade older than Java, and more than five years older than Python, two extremely popular object-oriented programming languages. Object-oriented programming does not imply that methods belong to classes; it simply means that collections of data can be represented as user-defined constructs that can be extended through inheritance. This more general description covers more languages than Common Lisp, too.
Julia does take it a step further, though: every function is a generic function composed of methods, and methods dispatch on types (which have an infinite domain, as compared to classes).
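To make the contrast concrete, a minimal Julia sketch where methods belong to the generic function rather than to any class (the types and names are made up for illustration):

```julia
abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Rect <: Shape
    w::Float64
    h::Float64
end

# Methods attach to the generic function `area`, not to Circle or Rect;
# they could just as well be defined in another package.
area(c::Circle) = π * c.r^2
area(r::Rect) = r.w * r.h

# Multiple dispatch: the method is chosen by the types of *all* arguments.
fits_inside(c::Circle, r::Rect) = 2c.r <= min(r.w, r.h)
fits_inside(r::Rect, c::Circle) = hypot(r.w, r.h) <= 2c.r

fits_inside(Circle(1.0), Rect(3.0, 3.0))  # true: diameter 2 fits in a 3×3 box
```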
Overall, great article. I just wanted to point this out because it is something I see all too often that bugs me, as a former Common Lisp developer.
That’s why I stuck to the Java, C#, and C++ definition of OOP; most people think of that flavor when talking about OOP. I considered adding a footnote to the blog noting that OOP is not strictly correctly implemented in modern languages, but that seemed to distract from the main story. By “correctly implemented”, I mean implemented according to the definition by the original creator of OOP. There is a nice talk on YouTube somewhere by him, maybe at a Smalltalk conference, where he tells how everyone got OOP wrong. Can’t find it now; maybe you know the one.
One lives and one unlearns, apparently. Yesterday I knew the difference between a type and a class, and had a vague idea about what interfaces mean. Today, after reading a bit more about them, I have not the faintest clue about any of them.
I really wish the exact meanings of programming lingo remained the same across all languages and contexts, but it would be unfair to expect some primordial language theory to formally delineate every weirdly specific concept some future language comes up with.
It does appear that assuming 1-based indexing is all over the place and isn’t always marked with require_one_based_indexing(), e.g. begin + n - 1 only works for n in 0:typemax(Int) if begin is 1.
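To illustrate the difference, here is a sketch using a toy 0-based vector type (just so the example is self-contained; real code would use OffsetArrays, which is implemented differently):

```julia
# Minimal 0-based vector, purely for illustration.
struct ZeroBased{T} <: AbstractVector{T}
    data::Vector{T}
end
Base.size(v::ZeroBased) = size(v.data)
Base.axes(v::ZeroBased) = (0:length(v.data) - 1,)
Base.getindex(v::ZeroBased, i::Int) = v.data[i + 1]

v = ZeroBased([10, 20, 30])

# Generic "3rd element": start from firstindex, works for any axes.
v[firstindex(v) + 3 - 1]   # 30

# The 1-based assumption `1 + n - 1`, i.e. v[n], breaks here:
# v[1 + 3 - 1]             # v[3] is out of bounds, since the axes are 0:2
```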
I want to remark on an interesting complication for the task of hunting down 1-based assumptions in Base, StatsBase, and elsewhere. require_one_based_indexing() shows up in a lot of methods where there are >1 input arrays, and even if one were to rewrite them to start at firstindex instead of 1, there’s still the issue of what to do with the inputs’ offset indices. For example, take the addition of matrix A and matrix B, yielding a matrix C. Let’s say A is 0-based and B is 1-based. This isn’t one of those offset indices applications where aligning A[1,1] and B[1,1] makes any sense, so we basically do A.parent + B.parent. But what should C’s indices be? 0-based, 1-based?
I would argue that, at most, such methods could sensibly be relaxed to call a hypothetical require_same_indices(), and even that restriction makes OffsetArrays unreasonably clunky to use. At least StaticArrays do not support a conventionally obvious interface method (setindex!), so you know to avoid mutating (!) methods; it will be harder to check whether a method calls require_same_indices() somewhere in a really nested method call.
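The hypothetical check itself would be simple enough; a sketch (this helper does not exist in Base, the name just mirrors require_one_based_indexing):

```julia
# Hypothetical helper: require all arguments to share the same axes.
function require_same_indices(arrays...)
    ax = axes(first(arrays))
    for A in arrays
        axes(A) == ax || throw(DimensionMismatch(
            "all arrays must have the same indices; got $(map(axes, arrays))"))
    end
    return nothing
end

A = rand(3, 3); B = rand(3, 3)
require_same_indices(A, B)   # passes: both are 1-based 3×3
```

The hard part isn’t writing the check, it’s finding every method that silently assumes matching (or 1-based) indices and deciding what the output indices should be.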
One thing about this discussion and the original blog post has been annoying me—the use of the phrase “correctness bug”. I view that phrase as a melodramatic and alarmist way of saying “bug”. Aren’t all bugs “correctness bugs”? Obviously code can have performance issues, but I hesitate to call performance issues “bugs”.
Julia has a total of 3,383 open issues (Python has 6,773), but presumably not all of them are bugs. The bug label has not been applied as consistently in the Julia repo. Only 215 issues have the bug label, but there are probably more than 215 open bug issues for Julia.
So, the number of open bugs in Python and Julia is probably pretty comparable. Yuri’s blog post, and some of the comments in this thread, take an alarmist tone that implies that the Julia language is unreliable to use. Now I’m perfectly willing to admit that the Julia ecosystem probably has more bugs than major Python packages do, but I think that’s primarily because popular Python packages have many more users and many more developers.
I don’t think so. There’s a big difference between “Calling X with inputs Y produces an error” and “Calling X with inputs Y returns an incorrect result”. Rereading the blog post, the latter is what he calls a “correctness bug”.
Those bugs are scary. His point is that because types like Number or AbstractArray are not crisply-defined concepts at the language level, different packages can make different assumptions, and that leads to “correctness bugs” (i.e. incorrect results), perhaps in a way that doesn’t happen (as much?) in languages that don’t encourage/enable interoperability as much as Julia does.