Why is LoopVectorization deprecated?

That is not a solution for Miles or his package users anyway, since the users would then be implicitly version-locked to 1.10. Speaking for myself, I have never in the past held back a Julia upgrade to stick with one package; the only time I remember actually considering it is now, with the upcoming 1.11, since my work at the time would have been meaningless without LoopVectorization.jl: it would become too slow to be practical.

I think there are a lot of valid points floating around, such as that to get somewhere better one has to forgo the best current solution. But it is a bit scary for the ecosystem as a whole when one package is deprecated and performance tanks across multiple packages from all kinds of projects.

I think the stress would be a lot lower if someone could showcase how to get a similar level of performance to LoopVectorization without using it, but I have not seen anyone do that yet.


I’m not sure how general this is, but here is one example of “manual” loop vectorization:
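A minimal sketch of what such manual vectorization can look like (this is not the original example from that post; the function name and numbers are illustrative). The idea is to break the serial dependency chain of a reduction by keeping several independent accumulators, which lets the CPU fill SIMD lanes:

```julia
# Manually "vectorized" sum: four independent accumulators so the
# compiler can keep partial sums in separate SIMD lanes/registers,
# instead of a single serially-dependent accumulator.
function sum_manual(x::Vector{Float64})
    n = length(x)
    s1 = s2 = s3 = s4 = 0.0
    i = 1
    @inbounds while i + 3 <= n
        s1 += x[i]
        s2 += x[i + 1]
        s3 += x[i + 2]
        s4 += x[i + 3]
        i += 4
    end
    s = s1 + s2 + s3 + s4
    @inbounds while i <= n  # remainder loop for lengths not divisible by 4
        s += x[i]
        i += 1
    end
    return s
end

sum_manual(collect(1.0:100.0))  # 5050.0, same as sum
```

Whether this beats a plain `@simd` loop depends on the problem; the point is that the unrolling and reassociation LV did automatically can sometimes be written out by hand.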

For the time being, people wanting to recover the performance of LV should probably study how to do something of the sort in their own problems.


There has been some discussion of GPUs in this thread. I would say that there are tasks that require low latency, where the latency of transferring the data to the GPU is not worth the extra compute, but that are still branchless and suitable for SIMD. That's where LoopVectorization shines.


^ This. I am not sure how feasible it would actually be, but given how many packages rely on LV for performance gains, and how important performance is to the Julia community, it seems like a replacement for LV would be a splendid candidate for a new (upgradable) Julia standard library!


On the contrary, I suppose a proper solution would be integrated into the Julia compiler and/or LLVM? Which is exactly what LoopModels is supposed to facilitate, I think?


Too sad to learn about this. LoopVectorization.jl is THE package that blew my mind when I re-discovered Julia about 3 years ago. With LV, I could write super-readable code and get the same performance as manually SIMD-optimized code, which is crazy-good.

That’s why I’m generally skeptical of bleeding-edge Julia packages that make use of compiler internals nowadays. They are not really getting supported and maintaining them becomes an uphill battle.


As a solution, perhaps we should all learn more about SIMD and create some simple packages, tailored to specific SIMD use cases, that are also easier to maintain. In my case, LV was doing the magic for my work involving complex functions… for most other cases I encountered, `@fastmath @inbounds @simd` does the trick of getting nearly optimal performance.
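For concreteness, here is a sketch of that macro combination on a dot product (an illustrative example, not from the thread): `@inbounds` removes bounds checks, `@simd` tells the compiler the reduction may be reordered, and `@fastmath` permits the floating-point reassociation that SIMD reductions need.

```julia
# Dot product with the Base-Julia SIMD idiom. Each macro relaxes one
# constraint that would otherwise block vectorization of the loop.
function dot_simd(a::Vector{Float64}, b::Vector{Float64})
    s = 0.0
    @fastmath @inbounds @simd for i in eachindex(a, b)
        s += a[i] * b[i]
    end
    return s
end

dot_simd(fill(2.0, 1000), fill(3.0, 1000))  # 6000.0
```

Note that `@fastmath` changes floating-point semantics (results may differ slightly from the strict left-to-right sum), which is exactly the license the compiler needs to use the SIMD lanes.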