Julia: a post-mortem

I think the Go development atmosphere is more professional, so there is trust in the team’s ability to handle the barriers they face. (Sorry for my English.)


Exactly. In addition, I do not know, for example, whether NodeJS is counted separately or together with JavaScript.

Got it: if I understand correctly, you question their judgment about how to design the metric, because they don’t seem to really understand all the use cases for each language in the index (and so maybe they’re missing something in the design of the metric). I’m not sure that’s the standard I would use to judge a metric, but I think I see why that could matter to someone.


You don’t need fully static compilation for that. In fact, it’s perfectly possible now to write a little C library with C API stubs for calling a Julia library, and then link your code in some C-compatible language with -lstub -ljulia and #include <stub.h>.

It will load Julia code and JIT compile it under the hood, but many applications don’t really care how the library they are calling works as long as it works. A more serious annoyance is the long start-up time, which is related to the “time-to-first-plot” issue that is a high priority for Julia development…and which will eventually be solved by caching more compiled code and other standard tricks; it’s not rocket science, just a lot of plumbing.

Fully statically compiled code (with no JIT compiler present at runtime) is a harder problem in a dynamic language, because then you really want to guarantee that you never hit any code paths at runtime that have not been compiled yet. But it’s not clear to me that this is really required — people care about having something be easy to link and call, and care about it being fast, but don’t typically care (or understand) whether a JIT is hidden underneath.

(Tons of people use GPU libraries without realizing that there is a PTX JIT involved.)


SymEngine is an example.

Hmm. I am aware of at least one horror story about installing it on Windows. On “supported” platforms it may be a different story; however, compared with Julia itself, the installation is definitely fiddly.


In my experience, compiling a shared library (.so) for Linux is brittle and unsupported. I did have this (sort of) working with older versions of Julia after quite a bit of yak-shaving effort. Then things changed with 1.5/PackageCompiler and broke the setup I had, which was very frustrating. Although it didn’t matter all that much, because I couldn’t really dynamically load those libs anyway due to LLVM namespace conflicts in the host application.

Those shared libs, if you are able to build and dynamically load them at all, are very large.

With C++/gcc you have great control over the size and linking of a shared library.

Python is more seamlessly integrated into many host environments simply because many of them directly embed it as a scripting language or have it as a default install. Many popular C++ libraries come with a Python API.


This generalization applies to a narrow subset of software. Having a garbage collector, let alone a JIT, disqualifies Julia for use in many categories of software.

Having said that, being easy to link and call would be wonderful, and would drive adoption in software applications beyond Julia’s numerical/scientific-computing roots.


I think that people reading this thread should be aware of some deceptive behavior going on here:

For context, @dariush-bahrami is the person who became upset when a thread they started the other day was rate-limited in order to keep the discussion from getting too heated, and then closed when it did become too heated. Since then, he appears to be on a bit of a mission to bring negative attention to the project.

Sock puppetry is generally considered bad in internet forums for reasons that are quite well demonstrated here: sock puppets tend to be used by people to make it appear that more people agree with them than actually do, when, in fact, it is just one person dishonestly amplifying their own voice. In this case, the sock puppetry was fairly easy to detect using information available to admins (which is exactly why it’s available to them).

In addition to this deceptive behavior on Discourse, there is more. The recently posted article “https://hackernoon.com/is-julia-a-misuse-of-free-software-in-the-name-of-open-source-wy20354r”, which tries to “cancel” Julia’s Discourse moderators for, um, moderating, is also very likely written by @dariush-bahrami using the alias “Henning Rousseau”. How can I tell? When you copy a link from Discourse, your user name is embedded in the link you get. Guess what user name is embedded in the Discourse links in that article? You guessed it: @dariush-bahrami. I have saved the current content of the article here since it seems likely to get “fixed” once I post this. It is possible that Henning Rousseau is a real “free software activist” as his bio says, that @dariush-bahrami sent him these links, and that he decided to write this article using them. In that case, however, Mr Rousseau is not much of an online activist, since his entire internet presence consists of this one article published yesterday. It seems far more likely that Henning Rousseau is a fake person invented yesterday as an alias under which @dariush-bahrami could publish a critical article about Julia.

Of course, none of this chicanery invalidates the points that are made in this thread or in the article. Fake people can make valid points. I should also point out that Chris von Csefalvay, the author of the article linked in @mashgholam’s (aka @dariush-bahrami) original post here, is very much a real person and not a sock puppet. So please read what they say and see if you agree or not. It does, however, only seem fair that people reading this should be aware that they are being manipulated into believing that more people are agreeing with these positions than actually are.

@dariush-bahrami and @SadeghPouriyan, in case you were not already aware: sock puppetry is not acceptable here, especially for the clear purpose of deceiving people. Please stop.

p.s. I suspect that the faces [1] [2] [3] used for these three fake identities are made by a GAN, but they’re pretty good and low-resolution, which makes it a bit hard to tell. Does anyone know how to detect GAN-generated faces?


37 posts were split to a new topic: Sock Puppetry Dogpile

I agree. I’d really like a Julia static compiler that I could just point at a pile of code and have it do whole-program optimization like MLton. I realize that’s not the workflow Julia targets, but I’m greedy: I’d like it in addition to the JIT compiler, to deploy static binaries.


We can have a discussion about the technical points, but I think it will be easy to get consensus about the health and quality of Julia's community.

Yes. I think Julia is particularly suitable for someone with some knowledge of C/C++. It is then easy and intuitive to see why high-performance Julia code should be written according to the recommended rules.

I don’t care what others say. I only know Julia saved my a$$ during my PhD, rescuing me from the C++ nightmare (I am not bad at C++; I just hate it to the max for scientific computing).


Do you mean using JLL packages here, or embedding Julia in an application through libjulia.so?

This I don’t get: apparently you have a host application that already depends on LLVM (presumably dynamically loaded), since you mention the LLVM namespace conflicts? LLVM is most likely the largest shared lib in the app, coming in at around 90MB for libLLVM-11.so on my Arch Linux system. But if you write anything that embeds LLVM, how are you going to control the size of the LLVM shared lib? Other than stripping debug symbols, there probably isn’t much that can be done to reduce its size. And are other (Julia-produced) shared libs really of the same size? I’m just trying to understand your particular software architecture and how shared libs and the size of dependencies fit into it.

True, Python is much more commonly used as an embedded scripting language. And when included in a standalone binary distribution, it will almost certainly be smaller (even including dependencies) than is the case for Julia.

Compared to new languages like Go and Kotlin, why hasn’t Julia been able to attract as many people? What keeps people from using it?

Go was released in 2012. Kotlin was released in 2016, built on an old and well-established platform, the Java platform.

Julia is an entirely new platform, first released in 2018. I think it is a bit much to expect the same traction as Go, which has been out for 4x as many years as Julia.


I’m curious what you mean by “first released” and “out” here — a public version designated 1.0? Julia was released in below-1.0 versions long before 2018. According to their respective Wikipedia pages:

  • Go design started in 2007 at Google, was publicly announced in 2009, with version 1.0 released in 2012.
  • Work on (what was to become) Julia started in 2009, was named “Julia” in 2012 and had a first public release (0.3) in 2014 it seems, with version 1.0 released in 2018.

Sure, Go got to a 1.0 release much sooner, but it is probably less ground-breaking (from what I’ve seen of it) and more based on Google’s keep-it-simple, product-oriented philosophy. Creating Julia probably involved a lot more research, with published papers to show for it. Plus, having a bunch of full-time Google engineers work on Go helps it grow faster.

But I think one of the main differences is that when Google announces a new language, people will pay attention, because it’s Google saying it. A language that originated in an academic setting has an inherent disadvantage, as a lot of folks don’t have much connection to science and research anyway. So it needs to prove itself much more before being accepted as a viable option, rather than being accepted right from the start.

The above is partly my perception, by the way. Would be interesting to hear the Julia devs insights on these things.


I’m curious what you mean by “first released” and “out” here — a public version designated 1.0? Julia was released in below-1.0 versions long before 2018. According to their respective Wikipedia pages:

For a fair comparison I think version 1.0 is the most natural reference point. One could say a lot about this, but Julia prior to 1.0 was in constant flux and stuff would often break. One cannot expect broad adoption under such conditions. I have some experience with this, as I released a video course based on Julia 0.5 with Packt Publishing. Going to v1.0, quite a lot of those code examples broke.

But I think we are basically in agreement? I am on board with most of your analysis. Being from Google makes a big difference. But I think a multitude of things gave Go an advantage:

  1. It targets server/systems programming, which is huge. Most jobs out there have traditionally been Java, C#, and JavaScript, and a lot of the Python jobs are for the same thing. Julia at the moment is really replacing R, Matlab, and Fortran, which are all much less known and have smaller market share.

  2. It was probably easier to market. Everybody knows concurrency is hard and would be willing to look at any language “fixing” that. Lots of people in contrast would object to the notion that Python or Matlab has a performance problem. They will just go “Use C++ code for those parts”. Concurrency doesn’t have that kind of “simple” fix.


For those wondering why this was closed: the thread devolved into a bit of a dogpile. I have split those posts out and kept any posts germane to discussion of the original article. I think it would be fine to reopen this for further discussion, but given the devolvement here, I’ll leave it closed for the moment. If people feel there’s useful discussion to be had, let’s do so in a new topic that is not tainted by whatever background is going on in this one. Of course, I’m sure there’ll always be another article complaining that we’re not growing fast enough, so you’ll definitely have your chance next time around :).