Jeff Bezanson’s talk “What’s Bad About Julia” from JuliaCon 2019 is one of my favorite Julia talks ever, and the funniest, period. Is there any chance we’ll see an updated version of this talk in some shape or form? We now have Julia 1.6, 1.7 should be available soon, and I would love to hear some more substantive griping about Julia in the great style of Jeff Bezanson.
imho: this is the latest “State of Julia” ( as a roadmap checklist )
" State of Julia 2021" ( 29 Jul 2021 ) presentation:
(Stefan Karpinski, Viral Shah, Jeff Bezanson, Keno Fischer)
Julia “Still to do list”
Very true, but it just didn’t have the same character as “What’s Bad About Julia.”
I’ve already watched this year’s “State of Julia” once or twice. It points out a lot of things they plan to do; the part about making the compiler easier to hack blew me away so hard that by the time I came back down it was already next week. Still, I agree with @MillironX that it isn’t a sequel to “What’s Bad About Julia”. First, Jeff Bezanson didn’t gripe enough; second, “WBAJ” was about things that are almost invisible unless you’re part of the core team. And finally, “State of Julia” didn’t end by saying “After all, Julia isn’t so bad.”
The tone of this post is meant to be humorous, since I’m still in the mood of the “WBAJ” talk, which is so funny.
With all of the AI/ML work going on in Julia, soon we will need to ask what Julia thinks is bad about Jeff.
Here’s an alternative take on exactly the same topic.
https://viralinstruction.com/posts/badjulia/#whats_bad_about_julia
It’s not by Jeff, and it is quite critical, but coming from a fan and user of the language. I haven’t seen this blog post discussed here, and I think there are some interesting points, particularly concerning the type system, though I feel underqualified to form an overall opinion on it.
There’s a discussion of it on Hacker News.
There was no dedicated discussion on discourse.
For me, latency is the #1 problem.
As an example, I find the blog post advertising that Julia’s CSV.jl is 10–20x faster than the alternatives in Python and R to be grating, because I would prefer any of those R or Python alternatives over CSV.jl, precisely for performance reasons.
For the sizes of CSVs I work with, the R and Python options are essentially instant, yet in Julia I need to wait 10 seconds each time to compile.
Latency is a problem we should be taking more seriously in the package ecosystem.
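To make the complaint concrete, here is a minimal sketch of how one can see that first-call cost in a fresh session (the file name is hypothetical, and the timings will vary by machine):

```julia
# Demonstration of "time to first read" latency in a cold Julia session.
# Assumes a file "data.csv" exists in the current directory.
using CSV, DataFrames

# The first call pays the compilation cost (can be on the order of seconds):
@time df = CSV.read("data.csv", DataFrame)

# Subsequent calls reuse the compiled code and are fast:
@time df = CSV.read("data.csv", DataFrame)
```

The gap between those two `@time` results is the latency being complained about: the steady-state speed is excellent, but a short interactive session never reaches it.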
Latency is probably the single biggest bad thing in Julia. Jeff Bezanson’s talk mentions it only briefly; he says something like: “Latency is a well-known problem, but this presentation is about problems that most people don’t know about.”
CSV.jl is quite problematic for me for exactly this reason. Three weeks ago I was doing Julia Academy’s Julia for Data Science course and found a performance regression in CSV.jl on the test set. Julia has a lot of moving parts, and that can hurt performance in a big way, so I don’t blame anyone personally for this problem, especially since I can’t help with improving performance in any way.
I recently opened a Discourse discussion about Jan Vitek’s ideas for removing some latency, but while I’m eager to understand the big picture, I can’t do anything to help fight this problem.
Is a serious discussion about latency in the ecosystem ongoing, or should we start one? Again, with my current knowledge of Julia and Git, I don’t think I can help with this issue myself.
It’s probably best to focus such discussions on github issues for the related packages, such as this one for DifferentialEquations.jl.
As that issue shows, dramatic improvements are possible in some cases. 22 to 3 seconds is a big win.
But solutions can be brittle. All it takes is one invalidation to get back to 22 seconds, meaning that if you load one more package, you could undo all the benefits, or even be worse off than before.
Latency caused by LLVM rather than by inference also cannot be cached, so packages making heavy use of StaticArrays and ForwardDiff have limits on how much they can really be helped.
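For anyone wanting to check whether a package load is invalidating compiled code, SnoopCompile provides tooling for exactly this. A rough sketch of the documented workflow (the package being loaded here is just an example):

```julia
# Record which method definitions from a package load invalidate
# previously compiled code, then analyze them with SnoopCompile.
using SnoopCompileCore
invalidations = @snoopr using StaticArrays  # record invalidations during the load

using SnoopCompile                           # load the analysis tools afterwards
trees = invalidation_trees(invalidations)    # group invalidations by the method
                                             # definition that triggered them
# Inspecting `trees` shows which new methods threw away compiled work.
```

This is how the "one more package undoes the benefits" effect can be diagnosed in practice.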
Have you had any joy compiling CSV.read() with PackageCompiler?
I’ve never done that. How does it work?
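For reference, the basic idea is to bake CSV.jl, plus whatever compiled code a precompile script exercises, into a custom system image that Julia then starts from. A minimal sketch using PackageCompiler’s `create_sysimage` (the file names here are hypothetical):

```julia
# build_sysimage.jl -- run once to build a custom system image.
using PackageCompiler

# precompile_csv.jl should contain representative calls, e.g.:
#   using CSV, DataFrames
#   CSV.read("sample.csv", DataFrame)
create_sysimage([:CSV, :DataFrames];
                sysimage_path = "csv_sysimage.so",
                precompile_execution_file = "precompile_csv.jl")
```

You then launch Julia with `julia --sysimage csv_sysimage.so`, and the first `CSV.read` call is fast because its compiled code is already in the image. The trade-off is that the sysimage is frozen: updating CSV.jl means rebuilding it.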