Julia motivation: why weren't Numpy, Scipy, and Numba good enough?

Well, I thought it was good, as I learned a thing or two from it.
What do you say about the LLVM example?

It’s wrong. He has an error in his code and is missing a return statement, so the generated assembly is gibberish. Fix that, and you’ll see that the assembly for add3 still allocates two vectors (and not just the one he hopes for).
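(For readers without the StackOverflow post at hand, here's a minimal sketch of the effect being described; the add3 body below is an assumption, not the original code. Plain + on vectors allocates a temporary for each pairwise operation, while dot broadcasting fuses into a single allocation.)

```julia
# Hypothetical reconstruction -- the actual StackOverflow add3 isn't shown here.
add3(a, b, c) = a + b + c           # a + b allocates a temporary, then + c allocates the result
add3_fused(a, b, c) = a .+ b .+ c   # broadcast fusion: one output vector, no temporary

a, b, c = rand(1000), rand(1000), rand(1000)
add3(a, b, c); add3_fused(a, b, c)  # warm up so @allocated doesn't count compilation

@show @allocated add3(a, b, c)        # roughly two vector-sized allocations
@show @allocated add3_fused(a, b, c)  # roughly one vector-sized allocation
# @code_llvm add3(a, b, c)            # or inspect the generated IR directly
```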

4 Likes

Oh…
I had hopes :-).

You should point it out on StackOverflow.
I thought he had found gold.

I love Python (it gets things done), and I am starting to love Julia (I think my head deals better with functional programming than with OOP).

Regarding Julia's motivation (the title of this thread), was Lua not fast enough?

Lua’s fast enough, but it doesn’t have the same generic programming constructs that Julia users make use of, so it cannot really replace modern Julia.
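As a rough illustration (a sketch with made-up names, not anything from Lua or the benchmarks), this is the kind of generic programming meant here: one parametric definition that the compiler specializes per element type.

```julia
# One generic definition; Julia compiles a specialized method instance
# for each element type, something plain Lua tables can't express.
function mynorm(v::AbstractVector{T}) where {T<:Real}
    return sqrt(sum(x -> x^2, v))
end

mynorm([1.0, 2.0, 3.0])     # specialized for Vector{Float64}
mynorm([1, 2, 3])           # a separate instance for Vector{Int}
mynorm(Float32[1, 2, 3])    # and one for Vector{Float32}
```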

2 Likes

LuaJIT is also not the standard Lua implementation; they are separate projects. LuaJIT existed in 2009, but it was a fairly niche project and only supported 32-bit x86, as far as I’m aware. It now supports a number of other architectures and word sizes, but I think that’s a pretty recent development.

Edit: the sponsorship page is a pretty good way to see when ports happened, since many of the sponsorships seem to be for porting LuaJIT to various platforms. It seems that work on the first non-x86-32 port of LuaJIT started in December 2009 and wasn’t completed until 2010 at the earliest (since they got the largest chunk of sponsorship from Google in January of that year).

LuaJIT is looking for maintainers. Could it be in trouble?

Also, how’s the speed of the standard Lua? The benchmarks do not include that, do they?

Yeah, I meant LuaJIT (but for the reasons mentioned below, Lua and LuaJIT aren’t necessarily the same language :man_shrugging:)

There were also a lot of GC issues one can point to that seem to be part of the architecture and design, and which limited the full use of heap-allocated arrays (this ended up being one of LuaJIT’s downfalls). They weren’t actually addressed until the LuaJIT 3.0 work in 2017:

http://wiki.luajit.org/New-Garbage-Collector

But LuaJIT 3.0 is very much not backwards compatible with the previous versions, and some of the disputes coincided with the main developer leaving.

Mike Pall left in 2015.

There’s enough there that LuaJIT 3.0 was never able to make it to a release. Part of the issue was that Lua had its own releases, and IIRC LuaJIT is not, and somewhat cannot be, compatible with the newest Lua; that’s behind some of the 5.3 vs 5.2 vs 5.1 discussions you’ll find on the old threads about the departure.

https://www.reddit.com/r/programming/comments/3gin5e/luajits_main_developer_is_retiring_and_is_looking/

(Honestly, these kinds of issues are a big reason to say that the JIT needs to be part of the language’s design.)

I don’t think there’s much of a reason to benchmark pure Lua, since you can think of it as “dictionary-oriented programming”, with the kind of speed you’d expect from that. But the fact that a tracing JIT could speed that up the way it did was a truly remarkable feat, one that showed the power of JITs long before Julia was a big, popular project.
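To make “dictionary-oriented programming” concrete, here’s a rough Julia analogue (a sketch, not an actual Lua benchmark): every field access on the Dict goes through a hash lookup on boxed values, which is roughly the execution model of an uncompiled Lua table.

```julia
struct Point
    x::Float64
    y::Float64
end

dict_point   = Dict{String,Any}("x" => 1.0, "y" => 2.0)  # Lua-table-like
struct_point = Point(1.0, 2.0)

norm2_dict(p)   = p["x"]^2 + p["y"]^2   # hash lookup + dynamic dispatch per access
norm2_struct(p) = p.x^2 + p.y^2         # fields known at compile time, fully inlined

norm2_dict(dict_point)      # far slower per call than the struct version
norm2_struct(struct_point)
```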

(I think it would be awesome to use Cassette to build a tracing JIT into Julia code in a way that acts like LuaJIT and see what happens…)

2 Likes

I had to read more. Here is a link if others are interested: Tracing just-in-time compilation - Wikipedia

Would this tracing JIT enable faster-than-C implementations in Julia? I understand this quickly gets into some very deep compiler-optimization discussion, but in principle, could this method make Julia code faster than C?

It’s already possible to write Julia code that’s faster than C, e.g. by using SIMD and by exploiting the fact that we specialize on function arguments. I wouldn’t hold out too much hope for tracing-JIT tricks being that much faster than well-written C that already leverages SIMD and doesn’t have higher-order functions. People have been hoping for that for a long time, and it hasn’t really materialized outside of some particular cases here and there. Otherwise, you’d see Java beating C on speed all the time, which is not a thing that happens much.
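To illustrate those two points (function names here are made up): the higher-order argument f is part of the method’s signature, so Julia compiles a specialized, inlined version per f, and @simd then lets LLVM vectorize the loop.

```julia
# Julia specializes on f itself, so the call inlines (unlike a C
# function pointer); @inbounds + @simd then let LLVM vectorize.
function mysum(f, xs)
    s = zero(eltype(xs))
    @inbounds @simd for i in eachindex(xs)
        s += f(xs[i])
    end
    return s
end

xs = rand(10^6)
mysum(abs2, xs)               # compiles an abs2-specialized, vectorized loop
# @code_llvm mysum(abs2, xs)  # shows abs2 inlined into the loop body
```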

6 Likes

What @StefanKarpinski said is basically the answer. People tried a lot of tracing JITs before Julia; LuaJIT was the one that did well and can basically match C, with others, like those for JavaScript, getting within 5x. However, I would be more optimistic than Stefan, because tracing JITs have done some things really well and other things less well. Slapping one on top of some Julia code might be a cool way to mix them and get the best of both worlds. Of course, I wouldn’t expect well-written type-stable SIMD code to do better, but I think you could get some massive improvements in things like type-unstable code or arrays that aren’t strictly typed, and there may be some areas where a tracer kicks in and skips the end of a computation by finding an analytical solution that wasn’t available at compile time.

It would be quite a research project to find out how to make this useful, though, and it would probably be contained to a few tricks.
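For anyone wondering what “type-unstable code” means here, a toy example (hypothetical functions): the unstable version’s return type depends on a runtime value, which is exactly the kind of place where runtime trace information could help.

```julia
unstable(x) = x > 0 ? x : -1        # returns Float64 or Int depending on the value
stable(x)   = x > 0 ? x : -one(x)   # always returns typeof(x)

# @code_warntype unstable(2.0)   # flags the Union{Float64, Int64} return type
# @code_warntype stable(2.0)     # cleanly inferred as Float64
```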

1 Like

This sounds a bit like a mix of the proverbial “sufficiently clever compiler” and something that is able to discover that some Vector{Any} actually only ever holds Float64 values and so could be specialized to Vector{Float64}. That sounds like more of a use case for a tracing linter than a tracing JIT. I don’t believe in sufficiently clever compilers, but a tracing linter seems like it would be quite useful to have.
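A quick sketch of that Vector{Any} situation (illustrative, not measured):

```julia
# The container only ever holds Float64 values, but its type doesn't
# say so, so every element access is boxed and dynamically dispatched.
untyped = Any[rand() for _ in 1:10^6]   # Vector{Any}, Float64 in practice
typed   = [rand() for _ in 1:10^6]      # Vector{Float64}

sum(untyped)   # slow path: unboxing + dynamic dispatch per element
sum(typed)     # fast path: tight, SIMD-able loop
# A tracing linter could watch the first case at runtime and suggest
# declaring the concrete Vector{Float64} instead.
```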

5 Likes

Indeed. I hadn’t thought about a tracing linter, though: “did you mean Vector{eltype(eltype(A.x))} instead of Vector{Any}?” would be quite useful for concrete typing in generic programming.

6 Likes

A tracing linter would be especially useful if it were implemented in the same style as femtocleaner, i.e. it would automatically check Julia projects and make pull requests to improve code performance.

2 Likes

To add my 2 cents to this discussion, here is some feedback from my recent experience of Julia vs Scipy/Numpy:

I used to use Julia 0.4/0.5/0.6 a lot, ~2 years ago, as Numpy/Scipy were too slow for some of my applications. I had a great time, really, and firmly believed in Julia’s bright future. In particular, I think Julia is very nice when it comes down to matrix/tensor manipulation. However, I feel Numpy/Scipy have gained in speed, such that nowadays I usually don’t see much difference between using Python or Julia.

Now, the dark side. In my opinion, we currently face a problem of numerous packages not working at all on Julia 0.7/1.0, which makes for a terrible user experience. Some of my users stopped using Julia because of that, and because of many problems with Julia’s installation. On my side, for the past 6 months, I admit that I automatically go to Python for developing new code, because solving problems between Julia 0.6, 0.7, and 1.0 has become a nightmare…

I’m a bit sorry about that, but I think Julia may lose new users because of the problem of many packages not working on 0.7/1.0 yet… and not some of the least. For instance, I can’t get Mamba, NMF, or even Compose to work from the registry; I needed to install them directly from their repos. A new user may not realize that, driving them away from Julia and toward Scipy/Numpy and the Python ecosystem, which is now very stable and easy to use/install.

3 Likes

On my side, for the past 6 months, I admit that I automatically go to Python for developing new code, because solving problems between Julia 0.6, 0.7, and 1.0 has become a nightmare…

That makes no sense. Julia 1.0 was released a month ago; why did you upgrade 6 months ago, when it was crystal clear that almost no package was working with 0.7/1.0?

Maybe you could look a little bit into history:

  • when was Python 3 released?
  • when was the first Numpy/Scipy version compatible with Python 3 released?

I am pretty certain that the delta is larger than one month :wink:

This is just to make clear that the package transition from 0.6 to 1.0 was a huge amount of work. I have ported three packages (Gtk, Winston, NFFT) and have helped fix things in HDF5 and Cairo. If you see packages not working, the best thing is either to stay on 0.6 (which is rock stable) or to help port things to 1.0.

3 Likes

I could not resist doing this :wink:

  1. Numpy 1.5.0 appears to be the first release supporting Python 3.
  2. It was released on 2010-08-31.
  3. Python 3 was released on 2008-12-03.

Someone please correct these if I misunderstood something.
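Assuming those dates are right, the gap is easy to compute:

```julia
using Dates
Date(2010, 8, 31) - Date(2008, 12, 3)   # 636 days, i.e. about 21 months
```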

4 Likes

Thanks. Just to make this clear: I think the Numpy developers did a great job upgrading from Python 2 to Python 3, and the transition went smoothly for me. But I simply used Python 2 until there was an official Numpy release that worked with Python 3.

Adding to the list: SageMath has no Python 3-compatible release yet.

But yeah, 0.6 is IMHO still the more reliable version, though pointing new users there doesn’t feel right either.

2 Likes

At this time I tend to agree. It seems that all “important” packages have been migrated. But this is pretty subjective.

1 Like