I can’t rule out something in the code you haven’t shared, but IIRC Julia v1.10 is what parallelized parts of sysimage building, including the custom sysimages that PackageCompiler is currently mainly responsible for. v1.11 changed quite a few internals, so it’s possible that an inadvertent interaction broke custom sysimages in a way that is still undiscovered, going by the relevant GitHub issues. Note that it was already possible in v1.10 for memory to run out and stall the system during multithreaded builds of large sysimages. In general, multithreaded memory allocations multiply the burden on the GC, and the line between a program’s legitimate OutOfMemoryError and a bona fide memory leak isn’t obvious.
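If memory pressure during the parallel parts of the image build is indeed the culprit, one workaround worth trying (a sketch, not a confirmed fix for your case; the package name and output path are made up) is capping the build's parallelism with the `JULIA_IMAGE_THREADS` environment variable, which the v1.10+ image build consults:

```sh
# Cap parallel native-code generation during the sysimage build to
# reduce peak memory use; defaults can spawn many codegen threads.
export JULIA_IMAGE_THREADS=1
julia --project -e 'using PackageCompiler; create_sysimage(["MyPkg"]; sysimage_path="custom.so")'
```

This trades build time for a lower memory ceiling, so it at least helps distinguish "too many concurrent allocations" from a genuine leak.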
In my opinion, there are 2 ideal ways this goes for independent development:
- prioritizing reliability: Dependencies are strictly locked down, via [compat], to versions known or expected to work. If what was expected to be an acceptable update significantly regresses or outright breaks a feature, a patch quickly restricts the dependency further. Careful work and releases take the time to support expected breaking changes and to work around regressions.
- prioritizing availability: Dependencies mostly carry lower bounds for support and automatic upper bounds at major versions. Users risk running into issues that tests didn’t cover, and they are encouraged to report issues or contribute fixes, tests, warnings, etc. In one very conspicuous place, other users, especially new ones, are informed of the discovered caveats of usage and practical workarounds, if any.
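To make the two approaches concrete, here’s a sketch of how a Project.toml `[compat]` section might differ (the dependency names and versions are invented for illustration):

```toml
[compat]
# (1) prioritizing reliability: pin tightly to versions known to work.
SomeDep = "=1.4.2"   # exact version only
OtherDep = "~0.8.3"  # tilde: patch releases only, i.e. [0.8.3, 0.9.0)

# (2) prioritizing availability would instead declare lower bounds and
# rely on Pkg's default caret semantics for the automatic upper bound:
# SomeDep = "1.4"    # caret: [1.4.0, 2.0.0)
# OtherDep = "0.8"   # for 0.x, caret means [0.8.0, 0.9.0)
```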
As strongly as I disagreed with people here on (1) being the only acceptable approach, or on how SemVer is used and interpreted, I do agree that transparency is beneficial. I can only speculate that the much smaller development team and occasional contributors, all volunteers AFAIK, make even that much difficult, let alone the sort of development work in (1). Things can be better for users, but the necessary conditions don’t materialize on demand alone.