I am kind of confused here. To my understanding, the main argument for testing against Julia nightly is to find bugs in the Julia language itself (or in some other core dependency), not to find bugs in your own package. Any nightly failure that is truly caused by my package would also show up in all the other builds, and since I already check every failure in the normal builds, the nightly tests add no value there.
Within this whole discussion I am focused on testing non-breaking nightly releases, i.e., not going from 1.x to 2.x. In this scenario of non-officially-breaking changes, we are hoping that Julia's own tests will catch any unintended breakage. The point of testing on nightly is just to further help ensure that, and to further help the main Julia development team. At least, that's the only reason I have ever found for doing nightly tests.
As I personally do not have the capacity to go through the many false positives of the nightlies, I cannot justify the computing cost of running them. In fact, even if I did have the time to go through the nightly failures, I am not sure that would justify the extra computing cost, as my packages only use the formally exported, and presumably very well tested, API. This means that in the overwhelming majority of cases any Julia bug I could detect would already have been caught by the tests in the main language repo.
And to reiterate, this argument holds for the front-end packages that I am involved with, which means that:
This is not at all tricky in my eyes. A package whose authors believe they are relying on more than the well-tested, exported API should run nightly. The majority of packages don't have to, though.
I agree, and would never argue that there is zero value. However, I am weighing this value against the associated cost, and I find it much too low to justify that cost.
No. Julia 1.10 moved to libcurl 8.0.1. Now all packages that have compat 7.xx (almost all, I would guess) will fail to run on Julia 1.10 but run fine on Julia 1.9.
If breakages due to using internals are known unknowns, we also have to account for unknown unknowns. Aside from @joa-quim's examples, there are many instances of packages making relatively benign assumptions about functionality that break with a minor Julia update. "failed to start primary task" with Julia 1.9 and nthreads(:interactive) > 0 · Issue #21 · JuliaFolds/FoldsThreads.jl · GitHub is a recent example that comes to mind. Packages that are somewhat sensitive to inference quality (e.g. StaticArrays), subtyping, inlining, etc. are another.
Now, does that mean everyone needs to test on nightly? No, I think we're in agreement there. But testing alphas/betas/release candidates when they come out (which, to your point, would have a much smaller impact and fewer false positives to look through)? I think packages that are at risk of breakage for one or more of the aforementioned reasons should do that. My question was whether doing so is even possible at the moment, because my understanding is that GHA will happily keep running jobs with 1.9-rc well after 1.9 stable is released, and the YAML format doesn't provide much flexibility to say "only run this job if there is a newer pre-release than the current stable release".
Switching gears, another concern that was touched on above is that the mechanism for checking for bugs in the language (PkgEval) has its own “boy who cried wolf” problem. If the last line of defense is not extremely reliable or frequently kicks in too late in the release/dev cycle, the pragmatic solution is to have a defense in depth approach. That has generally taken the shape of packages running CI on nightly. It would be great to have a more sustainable alternative to this and I’d love to hear how PkgEval could be improved to avoid some of the issues noted in this thread, but my (perhaps incorrect) impression is that there are no silver bullets there.
(Re: this specific libcurl compat issue, which is unrelated to this thread: maybe someone could trigger a JuliaRegistrator PR fixing those compat entries on all affected repos? Similar to what Tim Holy did when he automatically generated PRs for the SnoopPrecompile => PrecompileTools deprecation.)
I haven't looked at how the Docker images are built and hosted, but maybe we could change the infrastructure to follow Juliaup's conventions, so you could specify that you want the image for release, etc.?
I believe that exists in some form already. The problem is that stable and pre-release need to be run as separate jobs/steps, but if they end up running redundant Julia versions there is no way to communicate that the latter should be skipped on GitHub Actions. I believe we'd need some way to conditionally run the pre-release step only when a newer pre-release is available, but that might go beyond what the declarative pipeline format in GitHub Actions supports. Something like the sketch below could serve as that check.
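To make the idea concrete, here is a minimal sketch, not an existing Action or official API: it assumes the versions manifest at julialang-s3 that juliaup/setup-julia read from, plus JSON.jl being available in the CI environment, and the function name is made up.

```julia
# Hypothetical helper (not an existing Action): check whether any Julia pre-release
# is newer than the latest stable release, using the official versions manifest.
using Downloads, JSON

function newer_prerelease_available()
    path = Downloads.download("https://julialang-s3.julialang.org/bin/versions.json")
    manifest = JSON.parsefile(path)                       # keys are version strings
    versions = [VersionNumber(v) for v in keys(manifest)]
    latest_stable = maximum(v for v in versions if isempty(v.prerelease))
    prereleases = [v for v in versions if !isempty(v.prerelease)]
    return !isempty(prereleases) && maximum(prereleases) > latest_stable
end
```

A first step in the workflow could run this, echo the result into `GITHUB_OUTPUT`, and the pre-release test job could carry an `if:` condition on that output, which is about as much conditionality as the YAML format allows.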
Back when we had TravisCI and AppVeyor rather than GitHub Actions, we would run those builds set to "Warn", so they wouldn't cause failure states in the badges; they would just let you know. There has been a long-standing issue asking GitHub Actions to add this, but last I looked the GitHub Actions team seemed confused as to why anyone would want it.
That looks like an unambiguously breaking change in Julia, both technically and practically. Almost 2k packages depend on LibCURL_jll; are they all going to be broken in their current state?
I think at least part of your problem is that a recent GDAL_jll has compat NetCDF_jll = "400.902.5", and all such versions of NetCDF_jll have compat LibCURL_jll = "7.73.0".
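For anyone unfamiliar with how those entries behave, here is a minimal illustration of the semantics: in Project.toml, "7.73.0" is a caret bound covering [7.73.0, 8.0.0), so the libcurl 8.0.1 shipped with Julia 1.10 falls outside it. The snippet uses `Pkg.Types.semver_spec`, an internal Pkg helper, purely for demonstration.

```julia
# Illustration only: Pkg.Types.semver_spec is internal; it is used here just to
# show what the compat entry "7.73.0" admits under Julia's default caret semantics.
using Pkg

spec = Pkg.Types.semver_spec("7.73.0")   # equivalent to the range [7.73.0, 8.0.0)
@show v"7.88.1" in spec                  # true  -- later 7.x versions are fine
@show v"8.0.1" in spec                   # false -- the libcurl bundled with Julia 1.10 is excluded
```

So until the compat entries pinning LibCURL_jll to 7.x are widened in the registry, the resolver simply refuses those versions on Julia 1.10.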
The only thing we have to do is fix the compat bounds in the registry. Continuing to complain about this is not helpful; we're merely affected by bizarre, to say the least, decisions of upstream developers who don't follow semver. Also, curl v8 completely broke compilation of R; what should they say?
There is only one actual "change" in this release. This is the first curl release to drop support for building on a system that lacks a working 64-bit data type. curl now requires that 'long long' or an equivalent exists.
Which could actually break things on very old hardware, I suppose. But yes, it seems it was mostly a misfire.
I misread: I thought the code would not work on 32-bit machines, but it seems it will only stop compiling? Which is clearly not a semver-breaking change.
It's not even that; 32-bit platforms support 64-bit variables just fine. I can't think, off the top of my head, of a platform that explicitly lacks 64-bit support on which one would use libcurl.
I agree that it's a bit silly to move from v7 to v8 for no reason, but it's also silly to complain about "upstream developers who don't follow semver". At the end of the day, the maintainer is the one who makes and names releases, and there's not even a guarantee that the version will have anything resembling the format promoted by semver. Especially for a C project: since there's no widely used package-management solution for C, semver doesn't mean as much to C developers.
Furthermore, some are actually of the opinion that semver allows releasing non-breaking changes as breaking; the spec is not very clear on this.
Now that I think about it, even for Julia projects, which hopefully all try to follow semver, I’m not sure that blindly trusting the maintainers to follow it correctly is such a good idea. Semver bugs, or ambiguities, are inevitable.