Does Julia Create a "1.5" Language Problem?

I just watched the intriguing talk "Julia solves the 2 language problem, however it creates the 1.5 language problem". Setting aside the fundamental question raised in this video, "When will we be able to run Julia on a washing machine?" (I'm not joking, this topic comes up at one point), I was thinking about the speaker's idea of a "1.5 language problem", but I just lack the knowledge of Julia, and of developing packages in it, to evaluate it myself (my daily job is in C++, with Rust on the horizon).

I want to ask: how do experienced Julia programmers and package developers evaluate the points from this talk?


Hey @KZiemian, I hope you don’t mind but I rephrased your Discourse post title a little! I also added some additional tags to help with discoverability.

I watched the talk as well and I am very curious about discussion regarding the idea of the “1.5 language” problem.


~ tcp :deciduous_tree:


It’s all about expectations people have when they start porting something to Julia.

I can just repeat myself again and again: performance is NOT the main selling point, and it should not be. There are many fast languages, and they all have something in common: you need to know many specifics to get top performance. Julia is no different here.

Where Julia shines is the combination of expressive power, short time from prototype to production, and, last, the ability to tweak for performance. This development cycle is, in my opinion, by far the most satisfying process compared to other languages.

With expectations set on speed alone, people are disappointed at the prototyping step and start to imagine issues Julia has, seeing 1.5 language problems and other bizarre things. That, indeed, is an issue.

My guess is that most Julia solutions out there are not at top performance, but are just fast enough that deeper optimizations aren't worth the bother. Try that with the typical comparison languages like Python, R, or similar (I don't know MATLAB at all, so I won't mention it here). Some problems need the fastest algorithm possible; that's no problem in Julia, but it's also not as easy. This is not an issue; it's just that we still need to learn a few things to get there. Not An Issue!

EDIT: “expression” changed to “expressiveness”.


Julia needs to improve its memory allocation behavior: better user control over allocations, and a compiler able to fully remove allocations or stack-allocate arrays when it can prove nothing escapes. All of it. It's not impossible and there are a lot of prototypes for it, but nothing has landed. This is the number 1 issue for newcomers to the language and something has to land soon. This is a no-brainer and I don't think anyone disagrees, though it historically has not had enough priority. I personally am consistently asking for it.
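To make the allocation point concrete, here is a toy sketch of my own (the function names are made up, not from any package) showing the kind of difference newcomers run into: an allocating elementwise sum versus an in-place one, measured with `@allocated`:

```julia
# Toy example: the same elementwise sum, once allocating a fresh array,
# once writing into a preallocated one.
function add_naive(a, b)
    return a .+ b              # allocates a new output array on every call
end

function add_inplace!(out, a, b)
    @. out = a + b             # fused broadcast writes into `out` instead
    return out
end

a, b = rand(1000), rand(1000)
out = similar(a)
add_naive(a, b); add_inplace!(out, a, b)        # warm up the compiler first
n1 = @allocated add_naive(a, b)
n2 = @allocated add_inplace!(out, a, b)
println("naive: $n1 bytes, in-place: $n2 bytes")
```

Today the second form relies on the programmer writing `!`-style code by hand; the hope described above is for the compiler to do more of this automatically.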

Then there’s cross compilation and better binaries. This is actively underway, so I don’t think there’s too much to say, as there are discussions all over the place weekly about what’s going on. Jeff’s talk is a nice overview of the problems, there’s a lot being tried, and I’m sure that v1.12 or so will start to have some good solutions. Putting it on a washing machine isn’t a joke and it’s not something to ignore: targeting embedded devices is common for modeling and simulation, we at JuliaHub are doing a lot in this direction with JuliaSim, and it’s the endgame for a lot of use cases of SciML.

But at the same time, I think we should level-set a bit. I don’t think the perfect language will ever get rid of the difference between expert-optimized code and new-user or “scientist” code. What Julia can get rid of is the language barrier between the two. I think the perfect case for Julia is for a team of scientists and engineers to be able to write down an algorithm at a high level, and then pass it off to a software and deployment team that optimizes the algorithm and sets up robust deployments, but have this all be in one language. The former may want more dynamic behavior and the latter more static; the former will be writing higher-level, mostly mathematical code and the latter lower-level optimizations closer to the metal. They are different cultures of programmers. But that shift can be gradual.

Today, these two teams use two different languages. The team of scientists and engineers writes code in MATLAB or Python and throws it over a barrier, where the other team creates C code. If a model or algorithm is improved by the scientists, a new MATLAB/Python code is thrown over the wall again, re-translated, and re-optimized. This is not efficient. The future needs to have one language. Ownership starts with the science team, then passes to the dev team. While it’s in the dev team’s hands the nature of the code can change, but the language will still be Julia, so the scientist team can still contribute new algorithms, and the dev team just optimizes the new parts and sets them up for deployment. It’s a gradual process: starting with dynamic, unoptimized code and slowly making things static and optimized, locking behaviors down while staying in the same language and keeping that interoperability.

That’s how I envision Julia going in the future: a static subset, better memory management, fixed deployment issues, and one language for a cross-team process that has different goals at each stage but can all be accomplished through one system, slowly moving a code along the spectrum from dynamic to static.


I’m still trying to figure out how to articulate the “1.5” language problem. Here’s my attempt.

  1. There is a gap between intuitive code that looks like the algorithm and performant code that will run efficiently.

  2. Work still needs to be done to go from code that works to code that is deployable.

The beginning of the talk addresses a situation where a collaborator was using MATLAB and there was disappointment that a direct port did not quickly result in fast(er) code.

In fact it probably results in slower code. While the syntax looks similar, there are some important differences in semantics. MATLAB uses copy-on-write semantics, whereas Julia arrays copy eagerly by default. MATLAB also strongly encourages you to go down heavily optimized official code paths, whereas Julia actively encourages the use of third-party code.

The notion that a quick port would automatically be fast seems naive to me, although I can see how Julia catchphrases could give that impression. The naivety posits that MathWorks has been asleep at the wheel and could significantly optimize the execution of MATLAB code. While MathWorks has at times seemed complacent, or may be pursuing priorities distinct from mine, I think they are also trying pretty hard to make MATLAB as fast as possible, perhaps in response to Julia. For me, the miracle is that a quick port to Julia is actually possible and does work at all. That is quite rare.

My frustration with the MATLAB and Python approach is that eventually I hit a hard wall in terms of performance, in spite of deploying a bag of tricks. While it can be fun “vectorizing” code or finding optimized code paths, this has its limits. The solution then is to build a MEX or C extension (or Cython, Numba, JAX, etc.) that creates a new “fast” path, and then use that.

What I appreciate about Julia is that the 1.5 language option actually exists. What I also appreciate is that the 1.1, 1.2, 1.3, …, 1.99 language options exist too. It is possible to iterate on Julia code gradually, and I can choose where along that path I want to stop. I can throw in a few @view statements, and now my code allocates less and runs faster. I could also go as far as writing inline LLVM IR. Importantly, there is a middle ground where I can use someone else’s LLVM IR wrapped into an API, as is the case with SIMD.jl or LoopVectorization.jl.
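As a tiny illustration of one such step (my own toy example, with made-up function names): swapping a slice copy for `@view` changes nothing in the algorithm, but removes a per-iteration allocation:

```julia
# Summing columns of a matrix, first with copying slices, then with views.
function colsum_copy(A)
    s = 0.0
    for j in axes(A, 2)
        s += sum(A[:, j])        # A[:, j] allocates a fresh vector each time
    end
    return s
end

function colsum_view(A)
    s = 0.0
    for j in axes(A, 2)
        s += sum(@view A[:, j])  # @view reuses the parent array's memory
    end
    return s
end

A = rand(100, 100)
colsum_copy(A) ≈ colsum_view(A)  # same answer, fewer allocations
```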

I do program in a few other languages these days, particularly Python and Java. The tragedy I see there is that open-source projects can often get stuck because some of the most dedicated contributors lack the skills to fix the parts involving a second language. My most recent example of this is zarr-developers/numcodecs, which is a wrapper for compression codecs. There, the underlying compressors went without an update for three years.

The deployment problem is vexing, but this reflects the fact that Julia is now in a very different place from where it started. The largest artifacts of this are the large “standard” library and the rather large Base module. Julia started as a new solution for scientific and technical computing, so it made complete sense to have linear algebra available and loaded as quickly as possible. Now we are pushing Julia into general-purpose computing, or even embedded applications that do not need matrix multiplication. Progress is being made in this area by turning “standard” packages into discrete but “default” packages. In doing so, there will now be the option to exclude these packages from a deployment.

A Base replacement may eventually be needed. As Jeff explains, there is an awful lot there that may not be needed in every application. A “static” micro-Base may be needed to solve this in the future. This is not a fundamental problem, however. Prototypes such as StaticTools.jl or WebAssemblyCompiler.jl show that a number of potential solutions are available.


In my humble opinion I largely disagree that the 1.5 language problem even exists. I watched the video and I have to say my experience is very different.

First let me agree with some of the things that have been mentioned: there will always be a difference between expertly optimized code and standard new user code or typical scientist code as Chris said. The larger and more complex the code base, the larger this difference will be.

However, there are two crucial points:

  1. This difference will always exist, in any language. Hence, if Julia has a 1.5 language problem, any other language has at least a 1.5 language problem as well.
  2. For many code bases it is possible to get optimal performance with very basic Julia knowledge: type stability and avoiding unnecessary allocations.

I don’t believe the basic skill set of writing type-stable code and avoiding unnecessary allocations counts as “half a language”. When I teach Julia, this topic is in fact taught on the very first day of the workshop… And it does not appear difficult for newcomers to grasp either.
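For illustration, here is a minimal made-up example of what that first-day lesson looks like: the only difference between the two versions is the initialization of `x`, yet one is type-unstable and the other is not.

```julia
# Type-unstable: `x` starts as an Int, then becomes a Float64 inside the
# loop, so the compiler must track a Union type for it.
function unstable_sum(n)
    x = 0
    for i in 1:n
        x += i / 2
    end
    return x
end

# Type-stable: `x` has one concrete type (Float64) throughout the loop.
function stable_sum(n)
    x = 0.0
    for i in 1:n
        x += i / 2
    end
    return x
end

# @code_warntype unstable_sum(10)  # flags x::Union{Int64, Float64}
stable_sum(10) == unstable_sum(10)  # identical results, different compilability
```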

In my experience developing several Julia packages, in all of them I have achieved exceptional performance, always better than competing packages in other languages, without any particularly hard effort or unreadable code, beyond what qualifies as type stability and non-allocating code. Keep in mind that I am a complete beginner in high-performance computing. In fact, I only know how to parallelize things via multi-threading; this is where my expertise stops. Yet I am able to write performant code without much effort or needing more knowledge.

Hence, I must disagree with the claim of the video, and I simply don’t see how taking a Julia code base and optimizing it amounts to adding “half a language”. Adding half a language is more like taking a Python codebase and transforming it into Cython. The differences between Julia and “optimized Julia” are rather small in comparison. For example, there is no new syntax one has to learn to write type-stable or non-allocating code.


Oh, actually, in this view, then yes, I completely agree that the 1.5 “problem” exists, but only as part of a spectrum of 1.1 to 1.9 “problems” :wink: And in my experience so far, the “1 and 1/10 language” was able to get me where I wanted to be, and beyond!


I’ll just post here a shortened version of what I said to the presenter when we spoke after his talk finished. As soon as he said that the client had a ~thousand lines of Matlab code describing a reaction model and that he refused to switch from Matlab, my first thought was: “this guy is probably an expert in Matlab with over a decade of experience, and this code is probably very precisely tuned to be performant in Matlab.”

Julia is a fast language, but that doesn’t mean that a relatively new user of the language can show up and take code written by a deep Matlab expert, naïvely translate that code to Julia and expect it to go well.

If that did work, then MathWorks themselves would just download a free copy of Julia, write a transpiler tool that translates Matlab code into Julia code, and sell that instead of their own in-house interpreter / JIT system.

Julia’s performance claims should more be thought of in the sense that if someone knowledgeable in Matlab and also knowledgeable in Julia sat down and wrote a big complicated reaction system in both languages, then I’d expect that the one in Julia would likely end up being significantly more performant, and more modular for the same amount of effort and expertise.

I’ll also point out that despite the above, many people (myself included) have experienced fantastic and easy transitions from Matlab or Python to Julia that have been all upside. But that’s not going to be the case for everyone.

My disagreements with the conclusion aside, though, this is a nice case study and an example of how important it is for us to communicate and contextualize the benefits of Julia.

It’s not magic, even if it sometimes does feel like it.


I don’t mind at all, your changes are welcome. I now understand why I was thinking “Why did I choose this name for the topic? I intended something else.” :grin:


I think speed is, in practice, the main and first selling point of Julia. I remember when I first heard about it at the European Lisp Symposium, when Stefan Karpinski gave the talk “Julia: to Lisp or not to Lisp?”. I think the first sentence he said was “We want to create a language that has the ease of use of Python with the speed of C++”, or something like that.

IMHO, Nathan Boyer describes best, in this 2020 post, what the number 1, 2, 3, … hurdles are for many newcomers.

I’m not in embedded programming, but from what I understand such programs are quite memory-restricted, and Julia isn’t good at creating small binaries, even if much progress has been made. I watched Jeff Bezanson’s talk What’s the deal with Julia binary sizes?, and while I’m happy about the direction things are going, is it enough for embedded systems?

While I admit I expected the “1.5 problem” to involve a lot more than preallocation, such as internals or third-party optimization libraries as in many other languages, I do think MATLAB’s caching and reuse of allocations is a neat optimization; it would be nice to have that in addition to heap-to-stack.

I also agreed with his assessment that Julia code doesn’t look like the math. He worded it as if he had been “promised” such a feature, which stood out to me because I never had that expectation. I can’t be sure, but I am guessing he was referencing this old TEDx talk where Alan Edelman claims Julia code can look like the math. I always found that claim misleading because 1) placing an integral or gradient symbol into a function name doesn’t actually implement an integral or gradient, it’s just a descriptive name, and 2) the example of code that doesn’t resemble the math WAS idiomatic Julia, and likely how the descriptively named function was actually implemented.

Ironically, these semantics would contribute to more copies in MATLAB, because unlike Julia’s or Python’s variables-as-bindings, MATLAB leans toward variables-containing-data. So one of the first things they explain about copy-on-write is that MATLAB falls back to passing arguments by value, even copying large matrices. There are a few scenarios where a copy may be elided: 1) the argument variable isn’t reassigned, 2) a reassigned argument variable is also a return variable, 3) the call reassigns to an input variable. (1) is reliable; (2) and (3) can’t happen in common scenarios like indexing, scripts, the command line, try-catch, and eval. Passing by reference is comparatively simpler. MATLAB does have references for subclasses of handle, but that doesn’t apply to much of the number-crunching. This might be why the speaker had such a culture shock going into Julia, where in-place operations are different syntax, not optimizations. I can’t say with any certainty, but I’m not entirely convinced the @concrete mutable struct was needed to avoid temporary allocations in the simpler example; I spotted some undotted multiplications of vectors there.
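To show what I mean by dotted versus in-place operations (a hypothetical snippet of mine, not the speaker’s code), in Julia the elementwise and in-place forms are distinct syntax rather than hidden optimizations:

```julia
using LinearAlgebra  # for mul!

a, b = rand(3), rand(3)
# a * b          # MethodError: `*` on two vectors is NOT elementwise multiplication
c = a .* b       # dotted: elementwise product, allocates a new vector
c .= a .* b      # fused in-place broadcast: writes into the existing `c`

A, x = rand(3, 3), rand(3)
y = A * x        # allocates the result vector
mul!(y, A, x)    # in-place matrix-vector product, reusing `y`
```

In MATLAB the interpreter decides when copies happen; here the choice between the allocating and in-place forms is spelled out in the source.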


Such a micro-Base sounds like a breaking change, or at least a headache for the compiler team. Am I wrong?

It’s not a breaking change as long as Julia executed normally still gets the full Base. There isn’t an especially clean way to do this, but there are a bunch of not-especially-horrible ways (e.g., opt-in per module or per package). The main hard part is figuring out what goes where and making sure it’s not a massive maintenance burden.


You can already declare a baremodule where Base is not loaded into the namespace.
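For example (a minimal sketch, with a hypothetical module name `Tiny`): inside a `baremodule`, nothing from Base is in scope until you import it explicitly, not even `+`:

```julia
baremodule Tiny
# Base is not loaded here; only Core is available, so we must ask for `+`.
import Base: +

double(x) = x + x
end

Tiny.double(21)  # 42
```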

We also have an example of how this might look with StaticTools.jl.

In Julia 1.11, Pkg and REPL have already been split off from the standard library. Eventually, perhaps Base could just be another package.


I don’t have much expertise in low-level optimizations, but it is interesting that this topic was discussed, with nearly identical vocabulary, eight years ago.

What I got from that thread was that prototype code and fast code have a big “delta” in any language, but the advantage of doing both in one language is that the optimizations can be added incrementally rather than through a complete rewrite.

What I gather from our current discussion is that MATLAB has some semantics that enable variable caching without writing C-like code, and Julia’s compiler may also be able to partially benefit from such tricks. A mystery to me (having limited understanding of this subject) is how OCaml manages performance approaching C in a functional, garbage-collected language, where pre-allocation and mutation are not the model for computation (and which looks “more like math”).


I guess with every “new generation” we need to repeat such fundamental discussions. Thank you for posting a link to that thread; I had never seen it before. I first approached Julia in the 2017/2018 academic year, so it is no surprise that I missed such things.


I just started Julia this year, but found that thread while trying to search across various forums. I hope that, with continued improvements in compilation and branding, the answer to this question will be immediately clear to the next generation of newcomers.


It’s not entirely clear to me that ‘optimized’ Julia code is more different from ‘naive’ code than in other languages. Similar issues occur when optimizing Matlab or Python. And in C++, different dialects are dramatically different.

I could show you my ‘optimized’, vectorized raytracer in Matlab from way back. It looks extremely different from the original scalar implementation, and is nearly unreadable.

Similarly, numpy code is significantly different from regular Python.

In fact, my general impression is that performant Julia code often looks very simple and ‘clean’. I basically don’t understand the “1.5 language problem” at all.