Well, I thought it was good, as I learned a thing or two from it.
What do you say about the LLVM example?
It's wrong. He has an error in his code and is missing a return
statement, so the generated assembly is gibberish. Fix that, and you'll see that the assembly for add3
still allocates two vectors (and not just the one he hopes for).
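For anyone who wants to check this themselves, here is a minimal sketch; the thread doesn't quote the original code, so the body of add3 below is my own stand-in illustrating the two-allocation pattern:

```julia
using InteractiveUtils  # provides @code_llvm (Julia 0.7+)

# Hypothetical stand-in for the post's add3: each broadcast assigned to
# its own variable produces a separate temporary vector.
function add3(x::Vector{Float64})
    y = x .+ 1.0   # first temporary vector allocated here
    z = y .+ 2.0   # second temporary vector allocated here
    return z       # the kind of `return` the original was missing
end

# Inspect the generated IR: each temporary shows up as a call into the
# runtime allocator (look for jl_alloc_array_1d or similar).
@code_llvm add3(rand(4))
```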
Oh…
I had hopes :-).
You should point it out on StackOverflow.
I thought he had found gold.
I love Python (it gets things done) and I am starting to love Julia (I think my head deals better with functional programming than with OOP).
Regarding Julia's motivation (the title of this thread): was Lua not fast enough?
Lua's fast enough, but it doesn't have the same generic programming constructs that Julia users make use of, so it cannot really replace modern Julia.
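To make that concrete, here is a small illustration of my own (not from the thread) of the kind of generic programming Julia leans on: parametric methods that the compiler specializes per concrete type, something plain Lua tables don't give you:

```julia
# One generic definition; Julia compiles a fast specialization for each
# concrete element type it is called with.
function mysum(xs::AbstractVector{T}) where {T<:Number}
    s = zero(T)        # accumulator typed by the element type
    for x in xs
        s += x
    end
    return s
end

mysum([1, 2, 3])           # specialized for Vector{Int}
mysum([1.0, 2.0, 3.0])     # a separate specialization for Vector{Float64}
```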
LuaJIT is also not the standard Lua implementation; they are separate projects. LuaJIT existed in 2009, but it was a fairly niche project and only supported 32-bit x86, as far as I'm aware. It now supports a number of other architectures and word sizes, but I think that's a fairly recent development.
Edit: the sponsorship page is a pretty good way to see when ports happened, since many of the sponsorships seem to be for porting LuaJIT to various platforms. It seems that work on the first non-x86-32 port of LuaJIT started in December 2009 and wasn't completed until 2010 at the earliest (since they got the largest chunk of sponsorship from Google in January of that year).
LuaJIT is looking for maintainers. Could it be in trouble?
Also, how's the speed of the standard Lua? The benchmarks do not include that, do they?
Yeah, I meant LuaJIT (but for the reasons mentioned below, Lua and LuaJIT aren't necessarily the same language).
And there were a lot of GC issues one can point to that seem to be part of the architecture and design, which limit the full use of heap-allocated arrays (and which ended up being one of LuaJIT's downfalls). They weren't actually fixed until LuaJIT 3.0 in 2017:
http://wiki.luajit.org/New-Garbage-Collector
But LuaJIT 3.0 is very much not backwards compatible with the previous versions, and some of the disputes coincided with the main developer leaving.
Mike Pall left in 2015.
There's enough there that LuaJIT 3.0 was able to make it to release. Some of the issues were that Lua had its own releases and, IIRC, LuaJIT is not (and to some extent cannot be) compatible with the newest Lua, which is behind some of the 5.3 vs 5.2 vs 5.1 discussions you'll find in the old threads about the departure.
(Honestly, these kinds of issues are a big reason to say that the JIT needs to be part of the language's design.)
I don't think there's much of a reason to benchmark pure Lua, since you can think of it as "dictionary-oriented programming", with the kind of speed you'd expect from that. But the fact that a tracing JIT could speed that up the way it did was truly a remarkable feat, one that showed the power of JITs long before Julia was a big popular project.
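(To illustrate what I mean by "dictionary-oriented programming", here's a rough Julia analogy of my own; the names are made up for the example:)

```julia
# Lua-style "table" access: every field lookup hashes a string and
# returns an untyped Any value.
point = Dict{String,Any}("x" => 1.0, "y" => 2.0)
norm2(p::Dict) = p["x"]^2 + p["y"]^2

# The typed equivalent: field access is a plain load the compiler
# can reason about, which is the speed gap a tracing JIT has to close.
struct Point
    x::Float64
    y::Float64
end
norm2(p::Point) = p.x^2 + p.y^2
```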
(I think it would be awesome to use Cassette to build a tracing JIT into Julia code in a way that acts like LuaJIT and see what happens…)
I had to read more. Here is a link if others are interested: Tracing just-in-time compilation - Wikipedia
Would this tracing JIT enable faster-than-C implementations in Julia? I understand this quickly gets into some very deep compiler-optimization discussion, but in principle, can this method make Julia code faster than C?
It's already possible to write Julia code that's faster than C, e.g. by using SIMD and using the fact that we specialize on function arguments. I wouldn't hold out too much hope for tracing JIT tricks being that much faster than well-written C that already leverages SIMD and doesn't have higher-order functions. People have been hoping for that for a long time and it hasn't really materialized outside of some particular cases here and there. Otherwise, you'd see Java beating C speed all the time, which is not a thing that happens much.
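A small sketch of those two points (my own example, not Stefan's): the annotations let LLVM vectorize the loop, and because Julia specializes on the function argument, the higher-order call is inlined rather than going through a function pointer as it would in C:

```julia
# `@inbounds` drops bounds checks and `@simd` lets LLVM vectorize;
# `f` is a function argument, and Julia compiles a specialization of
# `mapsum` for each concrete `f`, so the inner call is inlined.
function mapsum(f, xs::Vector{Float64})
    s = 0.0
    @inbounds @simd for i in eachindex(xs)
        s += f(xs[i])
    end
    return s
end

mapsum(x -> x * x, rand(10^6))  # specialized and vectorized for this anonymous function
```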
What @StefanKarpinski said is basically the answer. People tried a lot of tracing JITs before Julia, and LuaJIT was the one that did well and can basically match C, with others like those for JavaScript getting within 5x. However, I would be more optimistic than Stefan, because tracing JITs have done some things really well and other things less well. Slapping one on top of some Julia code might be a cool way to mix them and get the best of both worlds. Of course, I wouldn't expect well-written type-stable SIMD code to do better, but I think you could get some massive improvements in things like type-unstable code or arrays that aren't strictly typed, and there may be some areas where a tracer kicks in and skips the end of the computation by finding an analytical solution that wasn't available at compile time.
It would be quite a research project to find out how to make this useful, though, and it would probably be confined to a few tricks.
This sounds a bit like a mix of the proverbial "sufficiently clever compiler" and something that is able to discover that some Vector{Any} actually only ever holds Float64 values and so could be specialized to Vector{Float64}. Which sounds like more of a use case for a tracing linter than a tracing JIT. I don't believe in sufficiently clever compilers, but a tracing linter seems like it would be quite useful to have.
Indeed. I didn't think about a tracing linter though: "did you mean Vector{eltype(eltype(A.x))} instead of Vector{Any}?" would be quite useful for concrete typing in generic programming.
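As a rough sketch of what such a linter would flag (the Container type below is made up for the example; @code_warntype is how you'd spot this by hand today):

```julia
using InteractiveUtils  # for @code_warntype

struct Container
    x::Vector{Any}   # abstractly typed field: every element is boxed
end

# In practice this field only ever holds Float64s, so a tracing linter
# could suggest declaring it as Vector{Float64} instead.
total(c::Container) = sum(c.x)

c = Container(Any[1.0, 2.0, 3.0])
@code_warntype total(c)   # shows the Any-typed result a concrete field would avoid
```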
A tracing linter would be especially useful if it were implemented in the same style as femtocleaner, i.e. it automatically checks Julia projects and makes pull requests to improve code performance.
To add my 2 cents to this discussion, and some feedback on my recent experience of Julia vs Scipy/Numpy:
I used to use Julia 0.4/0.5/0.6 a lot, ~2 years ago, as Numpy/Scipy were too slow for some of my applications. I had a great time, really, and firmly believed in Julia's bright future. In particular, I think Julia is very nice when it comes down to matrix/tensor manipulation. However, I feel Numpy/Scipy have gained in speed, such that I usually don't see much difference nowadays between using Python or Julia.
Now, the dark side. In my opinion, we currently face a problem of numerous packages not working at all in Julia 0.7/1.0, which gives a terrible user experience. Some of my users stopped using Julia because of that, and because of many problems with Julia's installation. On my side, for the past 6 months, I admit that I automatically go to Python for developing new code, because solving problems between Julia 0.6-0.7/1.0 has become a nightmare…
I'm a bit sorry about that, but I think Julia may lose new users because of the problem of many packages not working on 0.7/1.0 yet… and not minor ones, either. For instance, I can't get Mamba, NMF, or even Compose to work from the registered versions; I needed to install them directly from their repos. A new user may not necessarily realize that, driving them away from Julia and toward Scipy/Numpy and the Python ecosystem, which is now very stable and easy to use/install.
On my side, for the past 6 months, I admit that I automatically go to Python for developing new code, because solving problems between Julia 0.6-0.7/1.0 has become a nightmare…
That makes no sense. Julia 1.0 was released a month ago; why did you upgrade 6 months ago, when it was crystal clear that almost no package was working with 0.7/1.0?
Maybe you could look a little bit into history:
- when was Python 3 released?
- when did Numpy/Scipy release a version that is compatible with Python 3?
I am pretty certain that the delta is larger than one month.
This is just to make clear that the package transition from 0.6 to 1.0 was a huge amount of work. I have ported three packages (Gtk, Winston, NFFT) and have helped fix things in HDF5 and Cairo. If you are seeing packages not working, the best thing is either to stay on 0.6 (which is rock stable) or to help port things to 1.0.
I could not resist doing this:
- Numpy 1.5.0 appears to be the first release supporting Python 3.
- It was released on 2010-08-31.
- Python 3 was released on 2008-12-03.
Someone please correct these if I misunderstood something.
Thanks. Just to make this clear: I think the Numpy developers did a great job when upgrading from Python 2 to Python 3, and the transition went smoothly for me. But I simply used Python 2 until there was an official release of Numpy that worked with Python 3.
Adding to the list: SageMath has no Python 3-compatible release yet.
Yeah, 0.6 is IMHO still the more reliable version, but pointing new users there doesn't feel right either.
At this time I tend to agree. It seems that all "important" packages have been migrated. But this is pretty subjective.