Please be mindful of version bounds and semantic versioning when tagging your packages

I think you’re probably right. That’s why I said “To the extent anyone is offended…” I expect that extent is close to zero.

Not quite - I don’t have strong intuitions about which is harder to fix, as I haven’t run into either situation very often (I’m boring; I mostly stick to well-maintained packages, other than my own). It’s more that, without upper bounds, adding a package to a fresh environment might not work if one of its dependencies has made a release that broke something.

Requiring upper bounds should reduce the chance of this happening, at the expense of increasing the chance of blocking an upgrade. It’s a trade-off for sure, but I think the former situation (new install of a package that has passing tests but is broken because of a dependency update) is so bad from a QoL standpoint that it should be avoided as much as possible.

Sorry if I was the one to give this impression. I only meant to say that I’ve found your contributions here and on slack to be valuable. If you have a concern, I’m apt to give it more credence.

This is true. I think it’s worth separating different user types here, though - new/inexperienced users who lack the experience to deal with either broken installs or blocked upgrades are not the users who need to be on the bleeding edge of rapidly changing packages. Again, the trade-off is that the folks who do need to be on the bleeding edge may have to shoulder greater hassle. You’re one of those people. But this seems preferable to a new user attempting to install DataFrames and having it be broken.

This may be a case of changing expectations, but it’s also the case that someone who needs to be on the bleeding edge and is willing to deal with that type of potential headache is likely the type of person who knows how to use a different registry. That person can use LivingAtHead or whatever. I think the registry a new user has by default should be one where installed packages are guaranteed to work (for some definition of “guarantee”). If that means changing the expectations of General, that seems better than adding to a new user’s task list “Install Curated, whenever we get around to making it; until then, stuff might be broken.”

4 Likes

Again, in principle, yes absolutely, but there are a few things I have experienced about the reality of the Julia ecosystem that lead me to disagree with this sentiment:

  • Most of the time, the breakage of the external API from an update, particularly one in the 0.x series, is pretty minimal, so even if an update actually breaks a dependent package, chances are the issue is pretty minor.
  • Compatibility aside, updating Julia packages is far, far more likely to fix issues than to cause new ones. Perhaps there was a bug in the old version, perhaps a new feature was needed to get something to work, whatever. The first thing I do if I have a problem with someone else’s package is make sure it’s updated. I’m pretty confident I have never once found myself worse off from doing this with a Julia package.
  • Untangling dependency issues is no fun, especially if it’s several layers deep in the dependency tree. The tooling just isn’t there, correct me if I’m just not aware of it.

Thanks! I appreciate that :slightly_smiling_face:. I regret when I start a thread that seems to cause a lot of strife, but I really believe that we are all better off if everyone is at least worried about this issue.

I sympathize with this, but again, Julia is young. It seems hard to argue that people running on significantly outdated sets of packages are doing themselves any favors. One day it may make sense for somebody to go all CentOS mode and only use things that are 10 years old. In the Julia ecosystem right now, I suspect people trying to do that are only making their lives more difficult, whether they realize it or not.

Lastly, it’s not only users on the bleeding edge who sometimes need new features.

I think I’ve said just about all I can say about this issue. I’ll avoid spamming this already ridiculously long thread further with spurious reinforcement of my own opinion.

5 Likes

This is a quite long thread, and I learned a lot by reading the comments.

I just want to share my appreciation for the new package system, the semver design, and required upper bounds. The design decisions that were made really save package developers from the panic mode described by @fredrikekre. They create an ecosystem far more solid than any I’ve ever used.

Research doesn’t need to be synonymous with unstable software. We all like stable releases with controlled dependencies. I like it when users of my packages signal to me that a dependency version can be bumped. It is actually nice, as I don’t have time to keep checking the latest releases of my dependencies. Users can, and will, help if they find a blocker. Let them submit PRs with updated code for you.

16 Likes

Just to mention: I just used CompatHelper.jl for UnitfulAngles.jl and it helped! It opened a PR increasing the compatibility bounds on Unitful.jl; I could see that all my tests passed, and merged. Boom. I might have missed something, but this is exactly the kind of automatic bounds adjustment that I and others were mentioning before! That is awesome!
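For anyone curious, a minimal sketch of what CompatHelper.jl runs under the hood. It’s normally invoked from scheduled CI rather than by hand, and this assumes a GitHub token with repo access is available in the GITHUB_TOKEN environment variable so it can open PRs:

```julia
# Sketch, not a definitive setup: run CompatHelper against the current repo.
# It checks the registry for newer versions of each [compat] dependency
# and opens a PR bumping the bounds when it finds one.
using CompatHelper
CompatHelper.main()
```

The nice part of the design is that the human stays in the loop: the bot only proposes the bump, and CI on the PR tells you whether it is actually safe to merge.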

19 Likes

I may be naive on this point, but it seems like the “correct” solution would be to allow multiple package versions to work side by side internally? I’m reminded of the “DLL hell” problem on Windows.

That way, if old Package X only works with Package Y 0.25 and your code needs Package Y 0.26+, code in Package X would exclusively refer to the former and all other code to the latter.

I’m assuming this would be a complicated change, if it’s even possible.

Edit: Though, just thinking out loud and ignorantly: could Package Y 0.25 be rewritten in memory to be called e.g. Y_v0_25, with all references to module Y in Package X rewritten to Y_v0_25?

2 Likes

This is discussed a bit higher up in the thread.

In particular, note mauro3’s response:

3 Likes

I think one thing that’s being missed in this whole conversation is that there are only two stable states for a package ecosystem:

  1. No versions have upper bounds
  2. Almost all versions have upper bounds

Specifically, if older versions of packages have looser upper bounds than newer versions (or no upper bounds at all), then things are going to be very painful. That’s precisely what is causing problems right now, because we’re in the process of transitioning from the former stable state to the latter.

Why is being half way so problematic? When older versions of some packages have no upper bounds but the latest versions have upper bounds, it causes the resolver to go nuts and pick old broken versions that only appear viable because someone lied in the past by claiming that that version would work forever. It’s not the upper bounds that cause the problem, it’s the old lies about being compatible with everything forever that cause issues.

Different ecosystems have chosen between these two stable states in different ways. For example, Go chooses the former approach: upper bounds are not allowed except for new major releases, which are treated like entirely new packages. However that comes along with some serious discipline: Go packages are not allowed to make incompatible changes at all, except in major releases. Rust, on the other hand, takes the latter approach: all registered versions must have semantically plausible upper bounds.

So the question is, which model do we want for Julia? Frankly, I don’t think that flat-out disallowing breaking changes is going to fly in our community, although I’d be willing to try. That would mean that if you make a release that breaks your dependents, we won’t let you register it except as a new major release. That doesn’t seem pleasant or feasible to me; I just don’t think the Julia community would accept that kind of strict policy.

Assuming that we allow people to make releases as they wish and react when things break by putting upper bounds even if they aren’t always at major release numbers (rather than forcing people to retract those releases), that implies that we’re going to have some upper bounds. Which in turn implies that we’re in the latter camp, with Rust (excellent company, btw) and we need upper bounds on most versions.

The most straightforward way to make sure that older versions have upper bounds is to require plausible upper bounds at registration time and adjust as appropriate. Why default to capping at the next semantically breaking version? Because that’s the best guess we can make in advance. If all I know is that I’m fine with using version 1.2.3 of a dependency and that 1.2.4 has just been released, I’d be surprised (and annoyed) if I can’t use that too. I would also expect 1.3 to probably work with no or minimal changes (knowing nothing else). If someone releases 2.0 on the other hand, I’d be surprised if something didn’t break. Pleasantly surprised, but surprised nonetheless. And then my next question would be “if nothing broke, was it really necessary to make a major release and cause all this extra work?” And then I’d go check out the release notes and see what changed. And if it was one or two obscure but technically breaking changes, I’d be irritated.
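Concretely, this “cap at the next semantically breaking version” guess is what Julia’s default compat notation encodes. A sketch of a Project.toml [compat] section (the dependency names here are made up for illustration):

```toml
[compat]
# "1.2.3" is caret-style: it allows [1.2.3, 2.0.0), so new patch and
# minor releases are picked up automatically, but 2.0 is capped.
SomeDep = "1.2.3"

# Pre-1.0, a minor bump is treated as breaking, so "0.25"
# allows [0.25.0, 0.26.0) and 0.26 is capped.
OtherDep = "0.25"
```

In other words, a bare version number is already a plausible upper bound; nobody has to hand-maintain explicit ranges for the common case.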

Will using semantic breaking as a guess be perfect? Of course not. Some point releases will actually break things and we’ll have to add caps after the fact (or yank them if it’s bad enough). Some minor releases will introduce new exports which collide with some other export in a package that uses using instead of explicit imports, and those will need a cap even though the change is allowable. Some major versions will get released and not break as much as expected. Lucky us! We can bump the bounds. I don’t really get where the panic is coming from. We’ve been dealing with this kind of thing for years and it’s been fine. As Tamas has said, if there are problems, we’ll do what needs to be done to make things work well. God knows we go to serious lengths to make things work.

I also think it’s interesting and worth noting that the people who have been thinking about and working on package management, registries, versioning, and CI for the past however many years (about seven in my case) are all roughly in agreement about how this needs to work. Meanwhile, those who are arguing against that approach seem to have just become aware of the issue this week and to be extrapolating doom from problems that we’ve known for over a year we would have to deal with when transitioning from the “no bounds yolo” model to the more mature “plausible bounds” model.

32 Likes

My understanding was that the prior suggestion was about package developers doing all this manually to indicate a breaking change, rather than some automated process that resolves dependency conflicts only when necessary?

It’s true that it would make using types tricky if the primary dependency exposes objects from the older version, but 1) it would allow things to work instead of breaking, and 2) if the user did encounter that, writing a conversion function shouldn’t be too hard?

It’s not doom I am extrapolating, for me personally it just has been more efficient to develop packages with no upper bounds unless they are needed.

What I am worried about is that I need to change my personal workflow to satisfy other people’s strict needs.

Overall, it seems like a good decision, but I’m just generally in a terrible mood and very independent and don’t like it when an outside entity imposes strict rules.

I think it is healthy to observe strict rules with a heavy dose of skepticism… especially since the proofs of Gödel’s incompleteness theorems suggest that no formal axiomatic system can ever be complete.

This is apparent.

Please understand that this discussion is mostly about how the community should manage version bounds to achieve a functioning and cooperative ecosystem of packages. If this goal is not important enough for you to make a small change to your workflow, then this whole topic may not be relevant for you.

You have made your position (that you don’t care about others unless they pay you) very clear multiple times in this topic; I am not sure that repeating it adds much at this point. If you don’t want to cooperate with the ecosystem to any extent, that’s fine — but then there is little point in disrupting discussions for people who do.

6 Likes

You are incorrect: I very much care about other people, and it makes me happy to help others for free, but making my service a strict requirement is a bit too far. I only help people for free when I feel they deserve it and when I have the extra capacity for it. Note that nobody in this topic wants or needs my help in the first place.

The reason I make a stand here is not because I actually expect payment, it is because I expect independence and to be sovereign.

This has led me to reconsider whether I actually want to make these releases or not. I think this reaction is quite relevant to the zeitgeist and topic.

Also, note that by writing this post about me, you are inviting me to write another post in response.

The main reason I commented here so much is because people made lots of posts specifically about me, which induces me to respond… I’d rather not.

Have you considered actually trying the new system? I was annoyed as well at first until I realised it actually creates less headaches in the long run.

If you don’t care about users, you should also not care if users are stuck on old dependencies. You can also remove the upper bounds on master.

9 Likes

I may not have made this clear enough, but I have not been arguing against the upper bounds for quite a while now. The reason I commented again is to respond to the wrong assumptions I am seeing about my initial motivations. This doesn’t mean that I am continuing to argue against it; I am only clarifying misconceptions about my earlier stances.

I will try out the new system eventually, when I get bored of math stuff and need a seriously stable release, but I don’t know when that will be.

2 Likes

I was really annoyed with it until I just gave in and tried it. I admit there are problems with packages getting stuck on old versions, but as long as we get the tooling right, this seems like less of an issue considering how much easier this system is to manage than any other package system out there (at least in my personal experience).

6 Likes

Here’s a real-world example of why this is important. One might think something like FFTW.jl is always going to be safe and doesn’t need an upper bound, as the maintainers of DSP.jl apparently did. Alas, they were wrong: DSP.jl is suddenly broken:

WARNING: both FFTW and Util export "Frequencies"; uses of it in module Periodograms must be qualified
ERROR: LoadError: LoadError: UndefVarError: Frequencies not defined
Stacktrace:
 [1] top-level scope at /Users/solver/.julia/packages/DSP/wwKNu/src/periodograms.jl:186
 [2] include at ./boot.jl:328 [inlined]
 [3] include_relative(::Module, ::String) at ./loading.jl:1094
 [4] include at ./Base.jl:31 [inlined]
 [5] include(::String) at /Users/solver/.julia/packages/DSP/wwKNu/src/DSP.jl:1
 [6] top-level scope at /Users/solver/.julia/packages/DSP/wwKNu/src/DSP.jl:13
 [7] include at ./boot.jl:328 [inlined]
 [8] include_relative(::Module, ::String) at ./loading.jl:1094
 [9] include(::Module, ::String) at ./Base.jl:31
 [10] top-level scope at none:2
 [11] eval at ./boot.jl:330 [inlined]
 [12] eval(::Expr) at ./client.jl:432
 [13] top-level scope at ./none:3
in expression starting at /Users/solver/.julia/packages/DSP/wwKNu/src/periodograms.jl:186
in expression starting at /Users/solver/.julia/packages/DSP/wwKNu/src/DSP.jl:13
ERROR: Failed to precompile DSP [717857b8-e6f2-59f4-9121-6e50c889abd2] to /Users/solver/.julia/compiled/v1.2/DSP/OtML7.ji.
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] compilecache(::Base.PkgId, ::String) at ./loading.jl:1253
 [3] _require(::Base.PkgId) at ./loading.jl:1013
 [4] require(::Base.PkgId) at ./loading.jl:911
 [5] require(::Module, ::Symbol) at ./loading.jl:906

I suppose I need to pin something to work around this. I have no idea what needs to be pinned (probably AbstractFFTs.jl)… DSP.jl should have just used upper bounds to begin with…
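For anyone hitting the same error, a possible work-around sketch, assuming the culprit really is AbstractFFTs.jl (the version number below is only a guess for illustration), is to pin it from the Pkg REPL:

```julia
# Press `]` at the julia> prompt to enter the Pkg REPL, then:
pkg> add AbstractFFTs@0.4   # install a known-good version (guessed here)
pkg> pin AbstractFFTs       # hold it at that version during resolution

# Once DSP.jl has proper compat entries, undo the pin:
pkg> free AbstractFFTs
```

Pinning is a per-environment band-aid, though; the real fix is the compat entries discussed below.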

12 Likes

Note that the DSP maintainers actually asked for this breaking version bump of AbstractFFTs.jl to be able to fix it on the DSP.jl side.

Nonetheless, I agree that this could (and should) have been avoided by upper bounds.

5 Likes

So this doesn’t happen again, I created a PR that adds compat entries to DSP.jl. (https://github.com/JuliaDSP/DSP.jl/pull/321)

8 Likes

Hah and I created one to fix the old versions: https://github.com/JuliaRegistries/General/pull/5230

2 Likes

Nix does something similar to manage system packages, using symlinks and hashes. It works quite well. It would be trickier to make the approach work inside a single runtime where the dependencies are more tightly coupled, though.

The biggest issue, that of mutually incompatible transitive dependencies, seems to me to remain unsolved without attempting to isolate them. It occurs regardless of the policy on specifying upper dependency bounds.

1 Like

To make it better for developers, it would be nice if CompatHelper had the option of doing the following when a dependency makes a release:

  1. create a branch forking from the last release of my package,
  2. open a pull request against this new branch,
  3. if tests pass, merge the pull request, and
  4. tag a new release with a bumped minor number.