In the README of JuliaSIMD/LoopVectorization.jl (github.com) it is clearly stated that the package is deprecated for Julia 1.11 and newer versions, without any explanation. Does anyone know what’s going on here?
See the discussion in “Deprecate LV for Julia >= 1.11-DEV” by chriselrod (Pull Request #519 in JuliaSIMD/LoopVectorization.jl) … it looks like most of the developers’ efforts are now focused on the successor project LoopModels/LoopModels (“Full speed or nothing.” - James Hetfield).
But the last human commit to LoopModels was 7 months ago… Looks like a somewhat dangerous situation to me…
LoopModels/Math has more recent commits.
I’d be interested in a layman-accessible explanation of the move from LoopVectorization to LoopModels, in an easily discoverable place (so not something that can get buried on Discourse). What I gather so far is that the optimized code generation is moving from Julia metaprogramming to LLVM. I don’t know what advantages that brings, but it seems natural to essentially expand the compiler in LLVM.
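For anyone unfamiliar with what is being replaced, here is a minimal sketch of the metaprogramming approach as it works today (assuming Julia 1.10 or earlier, where the package is not deprecated; the function name is just for illustration):

```julia
# The @turbo macro receives the loop as a Julia expression and rewrites it into
# explicitly vectorized code before the compiler ever runs on it.
using LoopVectorization

function dot_turbo(a::Vector{Float64}, b::Vector{Float64})
    s = 0.0
    @turbo for i in eachindex(a)
        s += a[i] * b[i]
    end
    return s
end

dot_turbo(rand(100), rand(100))
```

Moving to LLVM would mean the same kind of transformation happens inside the compiler, on LLVM IR, after the usual optimization passes, rather than on the source expression.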
LoopModels seems to be more of a C++ project than a Julia project, and currently we don’t even have a Julia interface to it. The project is still under rapid development (the main branch is not active, but most commits are on the opt branch).
Reading between the lines (and you don’t have to read very far between them), there seems to have been a mild disagreement between the core compiler developers and Chris Elrod. I guess people don’t like it when you mess with their internal interfaces. The whole setup inevitably causes code churn and therefore frustration; now it’s code churn for the users. I wish there had been some kind of pragmatic path forward. At any rate, implementing this at the LLVM level will be more stable in the long term, but it sure would be nice to have a stopgap. I guess it is up to us end users to try and figure out some kind of alternative @gofast, and there’s a bit of time to do so, so we can take it in our stride.
See the Zulip thread here:
No commits have been pushed to the master branch for the last 5 months, but development is still going on in other branches. You can look at the opt branch in the LoopModels repository.
Why is LoopModels being written entirely in C++? Is Julia not good enough?
The reason it is being written in C++ is that vectorization is a compiler pass you want to run after pretty much all of the other optimizations, which pretty much requires it to be an LLVM pass (if you want to avoid some of the issues LoopVectorization had with respect to compile time).
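A rough way to see why running late helps (this is just an illustrative Julia snippet, not anything from LoopModels):

```julia
# Before inlining, the loop body is a chain of generic calls (f, getindex, +)
# that an expression-level tool has to reason about by itself; after Julia/LLVM
# inlining and optimization it is a flat loop that a late LLVM pass can analyze
# directly.
f(x) = 2x + 1

function total(a::Vector{Float64})
    s = 0.0
    @inbounds @simd for i in eachindex(a)
        s += f(a[i])    # opaque call at the source level, gone after inlining
    end
    return s
end

# @code_llvm debuginfo=:none total(rand(1000))  # inspect the post-inlining, vectorized loop
```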
Beyond what @Oscar_Smith pointed out, Chris has discussed in detail in multiple places that, yes, he does not think Julia is as good for writing high-performance libraries as C++.
@elrod, is that true? I recall some specific issues with Julia, such as missing stack allocation (no alloca), but that is very specific, and not something to use indiscriminately anyway.
I don’t think C++ (or C) is generally better for speed (at least single-threaded), and I can’t think of any other advantage (apart from the GC being a difference). I think it has to do with, as Oscar mentioned, needing to be an LLVM pass.
Even the C++ standard library doesn’t have that; he favors a hybrid stack/heap allocation scheme, implemented in LLVM and built on alloca, because you do not want arbitrarily large allocations on the stack.
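For concreteness, my understanding of the status quo in Julia (StaticArrays here is just the usual workaround, not the alloca-based mechanism being discussed):

```julia
using StaticArrays

# Fixed small sizes can live on the stack / in registers because SVector is an
# isbits type; anything runtime-sized goes through the GC heap, and there is no
# user-facing alloca-style escape hatch for "small but dynamically sized" buffers.
sum_static() = sum(abs2, @SVector rand(8))   # expect 0 heap allocations
sum_dynamic() = sum(abs2, rand(8))           # ordinary Vector: 1 heap allocation

# julia> @allocated sum_static()    # 0 after the first (compiling) call
# julia> @allocated sum_dynamic()   # > 0
```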
can’t think of any other advantage
Less likely to have random huge regressions, for one. C++ performance is definitely more predictable, even if the peak performance potential is comparable.
It’s not a question of C++ vs. Julia but rather of what compiler stage you want to run the thing at.
AFAIU, LoopVectorization.jl runs on expressions, and code operating on Julia expressions is most conveniently written in Julia (the Lisp heritage shows).
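As a tiny illustration of that point (this is not LoopVectorization’s internals, just the general mechanism it builds on):

```julia
# A macro receives the whole loop as an Expr tree and can inspect or rewrite it
# before the compiler ever sees it.
macro show_loop(ex)
    dump(ex)        # print the nested Expr structure handed to the macro
    return esc(ex)  # emit the loop unchanged
end

a = zeros(4)
@show_loop for i in eachindex(a)
    a[i] = 2.0 * i
end
```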
LoopModels will instead operate on LLVM IR, i.e. a much later compiler stage. Code operating on LLVM data structures is most conveniently written in C++, basically because LLVM’s C bindings and anything/C++ interop suck, and LLVM itself is a C++ project.
From a technical viewpoint, I’d naively think that Julia’s SSA IR looks much better suited (an intermediate representation between expressions and LLVM IR). I would guess that stability / API politics play a big role here: Julia core/compiler does not want to offer stability guarantees on the SSA IR, in order not to restrict future progress; so projects using it either need to get upstreamed into Julia core or need to fight an unending battle to stay compatible (the same issue as, e.g., the Linux kernel).
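For anyone who wants to look at the stage I mean: the typed, SSA-form IR sits between the surface expression and LLVM IR, but it (and the Core.Compiler APIs around it) is explicitly internal and can change between Julia versions.

```julia
# Inspect the typed SSA-form IR of a small function; note this representation
# carries no stability guarantees across Julia releases.
g(a) = sum(abs2, a)
ci, rettype = @code_typed g(rand(10))   # CodeInfo in SSA form => inferred return type
println(ci)
```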
What do you mean? Recompilation? I think Julia can do without those inherently, just not practically yet.
no, I mean performance regressions in the same code
So why did LoopVectorization work on Julia expressions, and how did it work as well as it did? LoopModels working on LLVM IR sounds more straightforward, and, if I’m understanding the purpose of LLVM correctly, it sounds like it could be applied to more languages than Julia.
Yes, LoopModels will be able to support languages other than Julia AFAIK.
But I think the same would hold for LoopModels, too, except with Julia replaced by LLVM, given that the LLVM IR isn’t stable across LLVM releases?
It worked as well as it did because it got some of the important parts very right (cost modeling, code generation), but it had some fairly significant limitations from sitting so early in the pipeline. For example, it isn’t able to deal with complex numbers or other user-defined structs. It also has a significant impact on compile time, since it tends to blow up the size of your code.
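To make the struct limitation concrete (a sketch on Julia 1.10 or earlier, where the package still runs; as far as I know this is what happens):

```julia
using LoopVectorization

# ComplexF64 is not one of the native scalar types LoopVectorization models, so
# its argument check fails and @turbo falls back to a plain loop (with a warning)
# instead of generating SIMD code for this:
function csum(a::Vector{ComplexF64})
    s = zero(ComplexF64)
    @turbo for i in eachindex(a)
        s += a[i]
    end
    return s
end

csum(rand(ComplexF64, 100))
```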
For me, LoopVectorization.jl is really more a demonstration of what is possible than the long-term approach to autovectorization in Julia. Autovectorization probably belongs deeper in the Julia compiler, or perhaps in a compiler plugin.
The more interesting part of Julia SIMD for me is explicit SIMD, explicit in the sense that the programmer is targeting specific LLVM intrinsics. This is exemplified by SIMD.jl and VectorizationBase.jl.
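A small explicit-SIMD sketch with SIMD.jl, along the lines of the examples in its README (the vector width is hard-coded, and the length is assumed to be a multiple of 4 just to keep it short):

```julia
using SIMD

function simd_sum(xs::Vector{Float64})
    lane = VecRange{4}(0)
    acc = Vec{4,Float64}(0.0)
    @inbounds for i in 1:4:length(xs)
        acc += xs[lane + i]   # explicit vector load of xs[i:i+3]
    end
    return sum(acc)           # horizontal reduction across the 4 lanes
end

simd_sum(ones(1024))   # == 1024.0
```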
I also want to point out the progress on SIMD that the Java ecosystem is making via Project Panama’s Vector API. This may be a source of inspiration for the future.
https://openjdk.org/jeps/438