No, every new version introduces new features, so 1.9 code is not guaranteed to work on 1.8, but only on 1.10, 1.11, etc.
I feel like the issue here is the use of compiler internals. They're called internals for a reason, and that's a big reason Zygote is a broken, buggy mess. Either we need to create a clear, public interface that won't break from update to update and is enough for Enzyme/Zygote/Diffractor to rely on (is that possible?); or we take the Python approach of "let the compiler leak everywhere and freeze it in place"; or we just admit these packages were a bad idea and cut our losses. We don't have the big-tech money we'd need to keep making substantial fixes to these packages with every minor update. If nothing else, we can't accept a bus factor that small; I think we've seen exactly that situation with Zygote having nobody to work on it after Mike Innes left for Relational AI.
True! I certainly overstated there.
Yeah, that's probably true for some of the specific cases here. But it still likely caused a compatibility issue for many folks trying to set up environments in Python 3.5, and I doubt anyone called out the Python developers for that.
I do still feel like a lot of these are package issues, not language issues, and the two are being conflated.
How long would it take Julia to include all the necessary mechanics for those libraries?
No, your package would not break, because you would then update the `Requires-Python` entry in your pyproject.toml to `Requires-Python: >=3.6`. Let's say you did this in your package version N, and in version N-1 it was `>=3.0`.
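Concretely, the change might look like this in the package's pyproject.toml (the package name and version numbers here are made up for illustration):

```toml
# pyproject.toml for "version N" of a hypothetical package;
# in version N-1 this read requires-python = ">=3.0".
[project]
name = "mypkg"
version = "2.0.0"            # "version N"
requires-python = ">=3.6"    # resolver skips this release on Python 3.5
```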
People in an environment with Python 3.5 would only be offered versions up to N-1 for installation. People with a newer Python would have up to N available.

If you want to install version N in an env with a Python 3.5 interpreter (e.g. from some description file analogous to Manifest.toml), the package manager would balk and offer you more or less useful debugging info about which compat bounds cause the conflict. It would not install the package and leave you to discover the problem later.
So I don't know what your statement below is grounded in:
But it still likely caused a compatibility issue for many folks trying to set up environments in Python 3.5, and I doubt anyone called out the Python developers for that.
If you are not using the packaging metadata correctly (e.g. by using 3.6+ features but not giving the correct `Requires-Python` entry), then there's not much the package manager can do, and I guess this is true for Python as well as Julia?
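For reference, Julia's analogue of `Requires-Python` lives in a package's Project.toml `[compat]` section; a minimal sketch (the package name is made up):

```toml
# Project.toml of a hypothetical package: [compat] plays the same
# role as Requires-Python, bounding the Julia version (and dependency
# versions) the resolver may pick.
name = "MyPkg"
version = "1.2.0"

[compat]
julia = "1.6"   # any 1.x with x >= 6 (Pkg's default caret semantics)
```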
This is my larger point. The sense I get is that a versioning/metadata issue like this in Python would be blamed on the package, but in Julia often gets blamed on the language. Part of it, I think, is that Python's mega-packages (NumPy, SciPy, etc.) take up one dependency each, whereas in Julia there are many (dozens? more?) dependencies for the same functionality, so more versioning issues exist simply because more packages are necessary?
edit: the idea of better tooling here for updating lower/upper bounds seems to be a good one.
Turing.jl was built off of <$100k… I can't even imagine what I could do with $5M. That's more than Stan got for sure.
I have no idea and I think that's a major part of the problem. Only a very small number of Julia devs (2-4 maybe) know what kind of deep internals Zygote relies on.
@mihlybaci, what you are getting at is spot on: Julia does take non-breaking releases and semver more seriously.
I wrote my first package for Julia something like 5 years ago and have barely changed it since. It's heavy on macro use and touches some internals to return the right line number nodes.
Some of those internals changed around 2020, and the tiny PR to keep it working on all Julia versions was submitted by a core developer when they caught it in testing.
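For context, the "line number nodes" in question are the `LineNumberNode`s Julia threads through macro expansion; here is a minimal sketch of that kind of use (not the package's actual code):

```julia
# A macro that reports its call site via `__source__`, the
# LineNumberNode every macro receives for the location it was
# called from.
macro calledfrom()
    file = string(__source__.file)
    line = __source__.line
    return :((file = $file, line = $line))
end

loc = @calledfrom()   # e.g. (file = "script.jl", line = 10)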
I'm not sure if that happens in other languages, but it seems unlikely. Who else even tests every package in the registry with every release?
Enzyme really doesn't have a big need for Julia compiler features. We still support Julia 1.6 (and beyond).
The exception to that is that I've found bugs in Julia's garbage collector, which I've upstreamed to Julia, and whose fixes are only available on later versions. Where we would otherwise generate the problematic code, we work around the bug with a slower fallback, and use the faster code paths on versions that have the fix (see https://github.com/EnzymeAD/Enzyme.jl/blob/daa30d68bd95fa6bc635decf6205b18225c24d51/src/compiler.jl#L6534 for an example).
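To illustrate the pattern (a schematic sketch, not Enzyme's actual code; the version bound and both implementations are invented):

```julia
# Version-gated workaround: pick the faster path only on Julia
# versions that carry the upstreamed GC fix; otherwise fall back
# to a slower but safe path. All names and the bound are made up.
fast_path(n) = Vector{Float64}(undef, n)   # stand-in "fast" path
safe_path(n) = zeros(n)                    # stand-in conservative fallback

# Hypothetical: assume the fix landed in Julia 1.10.
const make_buffer = VERSION >= v"1.10" ? fast_path : safe_path
```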
Of course, we can still take advantage of new features in later Julia versions for better error backtraces etc., but it's not a requirement for Enzyme to work.
In contrast, I think this was a much bigger issue for Diffractor.jl, which required a near-bleeding-edge Julia for the infrastructure needed to build it.
As someone working in this space: such a version doesn't exist and cannot exist, because:

- There will be new ideas for how to do AD better that will require support in the compiler to make possible.
- The language will add new features that will need support from the AD engine.
Now of course that doesn't mean you can't freeze both your AD and your Julia version and have something that keeps on working as well as it ever did. But that's also true today, around the time of any stable Julia release, give or take a few months.
You don't even need the LTS for that.
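For example, a project that wants to freeze things could pin exact versions in its Project.toml `[compat]` section using Pkg's equality specifier (both version numbers below are invented):

```toml
# Exact pins: the resolver will only accept precisely these versions.
[compat]
julia = "=1.9.3"
Zygote = "=0.6.62"
```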
This is true.
If there are bugs in the compiler of the LTS release, I'm surprised the bugfixes are not backported. Do they require new features?
On the main topic, I agree with @oxinabox: I don't think we can expect things to slow down in the near future. I suspect the next LTS won't have static compilation, won't have everything the AD engines need, won't have finished moving the stdlibs out into upgradable packages, will have lots of error messages and documentation that could be improved, etc. In other words, there will continue to be big, exciting, and/or important changes made for as long as one can foresee. Why would you deny users the benefits of timely improvements?
@aplavin, I have become more convinced of your point, not through my own experience but from the cases where you've given a concrete, reproducible example. I'd strongly encourage you to keep better track of these examples when you encounter them, because documenting these failures and reporting them as issues to the package maintainers is one of the few routes towards fixing this unfortunate situation.
I'm also excited about Add public keyword by LilithHafner · Pull Request #50105 · JuliaLang/julia · GitHub (as noted by @ufechner7); I think that, plus a good Aqua test for packages once that merges, might help a lot.
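A rough sketch of what the keyword enables once merged (the module and function names below are invented):

```julia
# `public` (from the linked PR, Julia 1.11+) marks names as supported
# API without exporting them, giving tools like Aqua a machine-readable
# API boundary to test against.
module Demo
    export greet        # exported: `using Demo` brings it into scope
    public internalish  # public API, accessed as Demo.internalish

    internalish() = "supported but not exported"
    greet() = "hello, " * internalish()
end
```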
Backporting requires manual intervention (sometimes, a lot of it); we basically have one amazing but overstretched backport fairy (Kristoffer Carlsson) for the entire language. At this point, it's pretty selective what gets backported to 1.6.
For the sake of understanding, what are you thinking of by "won't have everything the AD engines need"?
While I can definitely attest to the benefits of new features in later Julia versions making AD easier, I can only think of Diffractor as requiring bleeding-edge features?
Presumably Tim meant "want" rather than "need", i.e. people developing AD engines will continue to run into things in Julia's runtime that are either inconvenient or block certain features for them, and they will want upstream changes to Julia's internals.
Sorry, indeed I was talking about Diffractor. It is my impression that Enzyme is very standalone and works beautifully with current Julia versions.
I personally don't see how making fewer releases will solve anything in isolation. That just means that each release will be bigger and increase the likelihood of issues sneaking through the quality checks.
If anything, more frequent, smaller releases would probably do more to reduce the accidental breakage of packages.
So the focus should probably be on what actually ends up getting merged to master, not the release cadence.
I don't think so. And I would want to go the other way, with more frequent releases.
I don't see much of an ecosystem-quality problem relating to new Julia versions (I think others agree here, except for a minor-update problem, and I proposed how to fix that):
I obviously don't control the frequency of releases, but if each one aims to be a smaller upgrade (maybe not very small), then they can be more frequent, and missing a feature freeze becomes less of a worry. People might argue it's too much work, but we have up to 6 point releases, and with more 1.x releases I think we need fewer, if any, 1.x.y releases, so less backporting work.
[Julia is the first language I'm involved with contributing to. I'm not sure: does Julia have more regressions than others? I have no way to back that up; it could actually be fewer. Do people think there are too many, or more than expected? And I'm thinking of genuine regressions, ones fixed (or needing fixes) in point releases, not stuff that a package upgrade fixes.]
I wonder if there's anything actually actionable here, for example from the PoV of package developers. How can they fix the breakage after the fact?