Julia motivation: why weren't Numpy, Scipy, Numba, good enough?

This conversation may be a little dead, but I was also stumped by the “why not Numba?” question when I gave a talk on Julia recently. So I took the benchmarks from test/perf/micro and added @jit decorators to all of the Python benchmark functions. This worked fine except for parse_int, which raised an error I didn’t understand. After running the benchmarks, this is what I get:
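For anyone who wants to try the same thing, here is a minimal sketch of what adding @jit to one of the micro-benchmarks looks like. The fib function mirrors the recursive Fibonacci micro-benchmark; the try/except fallback is not part of the benchmark, it just lets the snippet run even where Numba isn’t installed.

```python
# Sketch of decorating a micro-benchmark function with Numba's @jit.
# The fallback no-op decorator is only so the snippet runs without Numba.
try:
    from numba import jit
except ImportError:
    def jit(func):  # no-op stand-in when Numba is absent
        return func

@jit
def fib(n):
    # Recursive Fibonacci, as in the Julia test/perf/micro suite
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(20))  # 6765
```

With lazy @jit like this, Numba compiles a specialization the first time fib is called with a given argument type.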

I am on a 2013 MacBook Pro with an Intel i7 quad core (2.6GHz) and 16 GB of RAM. The Julia benchmarks were on 0.6.0-rc2.0. As for Python/Numba, we have Python 3.5.2 and Numba 0.33.0.

This shows you that Python is definitely outperformed by Julia, but when the Numba compiler works, you can write Python code that is within a factor of three of Julia performance.

There have been many interesting points made above about Julia vs Python/Numba as languages, but in terms of performance it seems that Numba can be quite competitive in tasks that are important in the field of numerical computing.


IMO the best answer to that is “because of the two-language problem”. Which is more of a “1+τ language problem with τ∈[0.2,0.5]” for Numba/Cython, but that does not exactly roll off the tongue.


A factor of 3 is a huge amount. It’s the difference between 30fps and 10fps.


[quote=“JackDevine, post:42, topic:2236, full:true”]
This conversation may be a little dead, but I was also stumped by the “why not numba?” question when I gave a talk on Julia recently.[/quote]
In three benchmarks Numba doesn’t perform better than pure Python (+ NumPy); in the remaining Numba-powered benchmarks I’d say that performance is within a factor of two of Julia, rather than three.

That said, I don’t use Python (let alone Numba), so please do correct me if I’m wrong, but the answer to the “Why not Numba?” question should be “generality”. As far as I understand, Numba works well only with standard data types, is this correct? Julia, instead, has no performance penalty with custom types, and the interplay between different custom (and well-designed) types is often very easy, without the need for specific glue code (and as far as I can see this is very often not the case in Python).


Numba now has the capability of compiling a whole class with the @jitclass decorator. I was going to try it out, but it requires quite a lot of typing for large classes, so I gave up for the time being.
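To illustrate the amount of type annotation involved, here is a hedged sketch of @jitclass; the Point class and its fields are invented for this example, and the fallback import keeps the snippet runnable without Numba installed. Every field must be listed with an explicit Numba type in the spec, which is what makes large classes tedious.

```python
# Invented example of Numba's @jitclass: each field needs an explicit
# Numba type in the spec. The fallback makes the snippet run without Numba.
try:
    from numba import float64
    from numba.experimental import jitclass
except ImportError:
    float64 = None
    def jitclass(spec):  # no-op stand-in when Numba is absent
        return lambda cls: cls

spec = [
    ("x", float64),
    ("y", float64),
]

@jitclass(spec)
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def norm2(self):
        # Squared Euclidean norm of the point
        return self.x * self.x + self.y * self.y

p = Point(3.0, 4.0)
print(p.norm2())  # 25.0
```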


Yes, generality is the key. Numba can get you going for Float64. And it can now, in very limited cases, get you JIT compiled objects, but that’s not even getting close to Julia’s types. I believe you cannot write your JIT functions to auto-specialize according to input types, so generic programming is out of the question. This only works on a small class of objects with limitations.

But more importantly: Python does not have a good language for talking about types. Numba is trying to bolt on features that get it closer to Julia, but it’s missing the language which makes it easy to actually use and discuss. For example, Julia has parametric types. Parametric types can be “infinitely many” different JIT compiled classes, and many times you want to dispatch differently depending on these type parameters. With what exists in Numba, you technically “can” do it, but it’s all manual and at that point you might as well be writing C++ (or… Julia).

So all of the examples that people give for “but Numba can do it” tend to be “here’s an example looping on Float64”. Yes, bravo, Numba got that. But what I have been finding out in my journey with Julia is that that’s the simplest case (and you might as well just use C/Fortran if that’s all you want).

What’s interesting about Julia is that the same exact code is efficient for arbitrary arithmetic, or for AbstractArrays (and gets you auto-compilation of your functions to GPU variants using GPUArrays, for example). Numba can keep bolting on what’s needed. It has bolted on stuff for GPUs. If they see that people are using Julia’s generic algorithms with DistributedArrays, they can make a distributed array and change the compiler so that case will work. But Julia is designed so that these kinds of specializations aren’t “top-down”: no compiler changes are needed. You can do all of this by adding a package.

In the end, I am sure that with enough work Numba can keep trying to keep up with “the standard use cases” of Julia beyond Float64, but it’ll still be in a language that has no way to talk about what it’s actually doing, those capabilities will still be “given to you” by compiler changes in Numba itself, and adding features like this won’t be accessible to ordinary Python developers without compiler knowledge.



Which is more of a “1+τ language problem with τ∈[0.2,0.5]” for Numba/Cython

:laughing: That is a hilarious answer, I wish that I was fast enough to think of that at the time!


A factor of 3 is a huge amount. It’s the difference between 30fps and 10fps.

Good point, I guess that calling Numba “competitive” was a little too much. But it seems to me that Numba is doing pretty damn well, especially on the fib and pi_sum benchmarks.

@ChrisRackauckas, yes, what you are talking about fits in really nicely with the language discussions above. But the thing that is starting to resonate with me now is that the language differences and the benchmarks shouldn’t be thought of separately. Once you look at the big picture, you really start to understand the difference between Numba and Julia.

Agreed. I see that many people point out only (or mostly) performance when explaining Julia’s features, but this is more a feature of the underlying compiler (of course many people worked hard on Julia’s side to make this possible!) that can be matched in other languages with some tweaking (see Numba). If we were to judge a language only by its performance, we’d all be using Fortran without a doubt, but of course other aspects (ease of use, paradigm, package ecosystem, productivity, etc.) matter too.

I believe that Julia’s more convincing selling points are its syntax, the multiple dispatch paradigm, its metaprogramming capabilities, and the type system, which are more specific to the language. All of them combined with good performance.


But at the same time, those things are very hard to quantify. So the situation is roughly the following: you cannot convince people about the benefits of a language until they have used it. The weak form of that is that you cannot convince people about the benefits of a language unless it has a feature that is sorely missed in everyday practice.

The most enthusiastic converts to Julia’s type system are scientists who have spent a lot of time optimizing code in a low-level language called from a high-level language, then discover that they have to rewrite (retest, redebug) the whole thing because of a minor type change.


I strongly disagree with this, having dealt for ages with a language whose very design made certain optimizations simply impossible (for example, because any time a function is called, any of your “local” variables might be changed).
One of the beautiful things about Julia is that good performance comes naturally out of the design.
Even further improvements in performance can be achieved, because the implementation can still be improved (i.e. tweaking): people have more time to work on performance instead of trying to get all the functionality working, people gain more expertise, and more people contribute.

This I agree with 100%. Showing people it will save them time, make them more productive, and let them do more in fewer LOC (and readable LOC!), while still not compromising on performance (doing something in a few lines of code doesn’t really save you time if it takes a day to run instead of a few minutes), is the best way to sell the language to newcomers.


I don’t mean that’s always possible, but even a language like Python, often regarded as not exceptionally fast, gained good performance with some tweaking of a compiler (Numba) :wink: In addition, a program written in a given language can run at different speeds (and sometimes the difference is non-negligible) when compiled with different compilers (consider Fortran with gfortran versus the optimized ifort for Intel-powered machines). Speed is related to the features of the language itself, but it is also tightly related to the actual compiler you use, which may not be unique (in this sense it’s not specific to the language) and can be more or less optimized.

Even though I came to Julia largely for the performance, the things that have kept me closely adhering to it are not just the performance, but the combination of performance, metaprogramming and multiple dispatch in one language. Multiple dispatch is a beautiful paradigm for a programming language.

When I write code, I often write it backwards. I write complicated_function(x), less_complicated_function(x), less_less_complicated_function(x), each of which is (usually, if I’m doing things right) just a few lines of code. This might sound silly, but there was a conceptual barrier to doing this for me in OO programming: every time I wrote a new function, I’d have to decide where to put it (i.e. whether it belongs to some object). If I didn’t think about this at all, I’d pay the price later. I didn’t realize how big a deal this was for me psychologically until I started writing Julia. I realize now that almost none of those functions were supposed to belong to objects. There is something about all of these functions having the same name that makes so much sense. There is something about them belonging only to a module (a namespace for most intents and purposes) that makes so much sense. I am simply so much more comfortable writing code this way. I don’t ever want to go back to the Python/Java/C++ way of doing things. It didn’t make sense.

I was extremely dubious about Python from day 1, so I came to Julia specifically looking to get away from it. Python is a scripting language. It’s really good for things like replacing bash and configuring Neovim; for scientific computing it never really made much sense. Despite its problems, I have a much deeper respect for C++, so if I had come to Julia from C++ I probably would have been more skeptical, and might have taken longer to fully appreciate it.

To echo @Tamas_Papp’s very good point: in my experience there are 2 kinds of people in this world: those who are open-minded about new languages and those who will NEVER EVER USE ANYTHING ELSE. A huge chunk of that latter group are Python programmers. We shouldn’t waste too much time trying to appeal to this latter group. Instead, we should work on Julia interoperability so that, 10 years from now, everyone will wake up to find that their Python code is largely running Julia and they’ll say “Wait a sec, why am I using Python again? To hell with this.”


An explanation that made sense to me around the time I seriously started switching over most of my work from Python and Matlab was given by @StefanKarpinski in a comment on the Mad (Data) Scientist blog here, laying out what he calls the PyPy problem. Do read the whole comment, but the key idea is this:

The PyPy problem is a catch-22 that faces projects like PyPy in language ecosystems that include a lot of libraries implemented in C. To make the language drastically faster, you have to change the internals significantly, but since the internals are what many libraries interface with, they are effectively part of the language and cannot be changed very much without breaking those libraries. In other words, because all the performance critical code in Python has to be written in C, you can’t change the C API without breaking all the performance-critical libraries. But without changing the C API, you can’t change the internals very much, so you can’t make the language go much faster.


Aren’t things like Numba, Cython, and PyPy more like Python dialects?
To me, that’s just creating another (but less extreme) version of the two language problem.
That can be a useful path for people who already have a lot of Python experience and aren’t willing to learn a completely new language, but for newcomers I don’t think it is a good situation.


If I have to write a loop with a few million iterations of a non-trivial task, here are two ways to go about doing it:

  • Test the loop. If it is too slow, and you are not already an expert on NumPy, pandas, and SciPy (with an eidetic memory), spend 3 hours digging through their documentation to figure out whether a high-performance implementation of the loop already exists. If it doesn’t, or if it’s still too slow, make sure it’s possible to get your stuff into Cython or Numba code. This pretty much means abandoning, at the door, all the Python objects that you spent all that time writing and adding functionality to, because now you are just writing C code with fancy syntax.

  • Just write a loop.

Julia users mostly advocate this latter method.


This seems an exaggerated description until you have faced this situation several times in Python. That is what actually happened to me after buying a Cython book, reading it, and realizing that what I wanted to do was not feasible in a “Pythonic” way.

In my case, I wanted to speed up code that was using a dictionary. It turns out that Cython cannot have typed dictionaries (it uses the standard dict from Python). Something as silly as counting words becomes much slower in Python than in Julia. In Julia, the code counts[word] = counts[word] + 1 can be fast because the language lets us tell the compiler that counts[word] will always be an Int (by declaring counts as a Dict{String, Int}). The Python interpreter has to waste time on every iteration guessing what is going on in this addition.
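Here is a minimal pure-Python sketch of that counting loop (the word list is made up for illustration). In Julia the same loop over a Dict{String, Int} compiles to type-specialized code, whereas here every addition goes through dynamic dispatch:

```python
# Pure-Python word counting: each `counts.get(word, 0) + 1` is dynamically
# dispatched, because the interpreter cannot assume the values are ints.
def count_words(words):
    counts = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1
    return counts

words = ["the", "cat", "sat", "on", "the", "mat", "the"]
print(count_words(words))
# {'the': 3, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1}
```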

I really like the ideas behind Python: the community and the Zen of Python. But I ended up with the contradictory conclusion that if I want to follow

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.

I should write Julia code.


I encountered this situation almost immediately the very first time I ever used Python and I’ve been a Python skeptic ever since.


I think you misunderstood what I said. I did not mean to imply that programmers who are currently not able to appreciate Julia never will, and that they should be ignored. I merely said people look for solutions when they learn more deeply about the problems, which takes time.

Also, I don’t think we should disparage Python or Python programmers. Julia has a lot of promise, but at the same time the core language and the library ecosystem need time to mature. Waiting a bit more with adoption is a reasonable decision in many cases.


Sorry, I also don’t mean to disparage Python programmers. I find the situation the world finds itself in with regard to Python silly and frustrating, and anecdotally I have found that people who are deeply invested in Python are much more resistant to change than people who are deeply invested in, for example, C++ or Java (an anecdote which is by no means a general fact), but of course there are lots of smart people doing great work with Python. Just today I was reading through some of the IPython tutorials that LIGO has published to help understand their results. And, in the interest of full disclosure, I use scikit-learn all the time, and it’s just a wrapper to the Python library.

Also, my (at times, admittedly somewhat emotional) aversion to Python right now is largely motivated by fear that for professional reasons I will be forced to spend enormous amounts of time using it again in the future, even though I don’t like it for all the legitimate reasons I’ve explained. My defenses go way up when I feel like something is, or is about to be imposed on me.


:100: Jupyter and many topical scientific Python-related communities have done amazing work pushing forward open science and collaborative development. We are indebted to this work both in terms of many pieces of infrastructure we use directly, and for the broadened mindshare that gives Julia users – especially grad students and junior faculty – more room to push for open-sourcing code, and more venues in which to publish. (This largely applies to R as well.)

There are a number of interesting insights in this thread, but everyone: please stay vigilant about the line between constructive griping and unnecessary flame-baiting.