Well, to me such a response is certainly having the opposite effect…
Browsing recent open and closed issues on GitHub, of those that are missing versioninfo in the OP, few would IMO be improved by including it. Requiring version info in a template would introduce unnecessary work for issue reporters and extra noise for maintainers in some cases.
We have a relatively short list of things to check off when opening Makie issues. I’m not sure if it helps that much, often people don’t fill them in correctly. (Or maybe the list could be better)
I think the less critical or complex the surface that a PR or issue touches, the less process you need. We don’t need the same process to apply to docs PRs as to compiler PRs. Somehow our organizational principles should reflect that and keep overhead for the simple stuff low. It should be easy to get docs PRs in and probably a lot of people should be allowed to accept those.
GitHub has lots of features that could possibly be useful (or possibly just noise?)
for example, consider the following policy:
- anyone who has authored a commit can “approve” PRs (just an endorsement, not actually making it ready to merge)
- PRs getting some X number of unanimous approvals from community contributors are labeled with “needs maintainer review” or something like that
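If GitHub doesn’t support this directly, a small Action could approximate it. A rough sketch (the threshold of 2 and the label name are placeholders, not a concrete proposal):

```yaml
# Illustrative workflow: label PRs once enough distinct reviewers approve.
name: label-community-approved
on:
  pull_request_review:
    types: [submitted]
permissions:
  pull-requests: write
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            // Count distinct users whose latest state is APPROVED
            const { data: reviews } = await github.rest.pulls.listReviews({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.payload.pull_request.number,
            });
            const approvers = new Set(
              reviews.filter(r => r.state === 'APPROVED').map(r => r.user.login)
            );
            if (approvers.size >= 2) {  // X = 2, arbitrary threshold
              await github.rest.issues.addLabels({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: context.payload.pull_request.number,
                labels: ['needs maintainer review'],
              });
            }
```

One would still want extra logic to restrict this to “community contributors” (e.g. checking commit authorship), which the sketch leaves out.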
Apologies, this was not meant sarcastically, nor was I taking a dig at your suggestion. This was merely a lighthearted middle ground, which I didn’t think would be misinterpreted. However, tone is hard to convey.
That being said, a point that I was trying to convey is that — currently, the wall of text appears to be ignored by many, and it merely gets in the way. Users would just delete the text to frame their issue. By the time they are done pasting their output, the suggestion to include the version is long gone. Clippy, on the other hand, exists independent of the text box, and is less likely to be ignored if it emerges occasionally. Then again, I wasn’t suggesting this seriously, and an issue template also solves this issue by discouraging users from deleting text.
Perhaps a more realistic middle ground could be a reprobot that checks whether the text of a newly filed issue contains an exception, in which case it tries to detect the version number and platform from the text. If this fails, it leaves a message asking the OP to provide this information. If there is no response for a week, the bot attempts to reproduce the issue on the latest stable version as well as the LTS on tier 1 platforms, and posts whether it succeeded. If an issue is not reproducible and the original poster has been missing for months, this makes it easier for maintainers to tag it as stale and close it.
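The detection step at least seems tractable, since `versioninfo()` output has a predictable shape. A minimal sketch (the regexes and function name are just illustrative, not an actual bot implementation):

```python
import re

def detect_version_and_platform(issue_body: str):
    """Try to extract the Julia version and OS from pasted versioninfo() output.

    Returns (version, platform), with None for anything not found.
    """
    version_match = re.search(r"Julia Version (\d+\.\d+\.\d+)", issue_body)
    platform_match = re.search(r"OS:\s*(.+)", issue_body)
    version = version_match.group(1) if version_match else None
    platform = platform_match.group(1).strip() if platform_match else None
    return version, platform

body = """
julia> versioninfo()
Julia Version 1.10.4
Platform Info:
  OS: Linux (x86_64-linux-gnu)
"""
print(detect_version_and_platform(body))  # ('1.10.4', 'Linux (x86_64-linux-gnu)')
```

The harder part is everything after detection (actually running the reproduction), which this sketch doesn’t touch.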
This!!! I misclicked so many times!
Agree 100%. I would’ve loved to help merge a bunch of those but don’t have the rights, at least not for Julia proper. I guess that’s also because I never dared to ask over the years because I don’t contribute so much “hard code” to base (I work on packages etc.). Maybe we shouldn’t only give “core developers” merge rights but also “core community members”, who could clearly help with doc PRs? (And I want to say that irrespective of my own case.)
Maybe, if we don’t want to give merge rights to many people, we could instead have people without merge rights review docs PRs. Not sure if it saves time for a core contributor to hit merge if some other people have already put their “approved” stamp on the PR.
I’m not sure if one can give out partial merge rights, like only for branches with specific tags. I assume not, as then giving the tags would also need specific rights etc.
I don’t disagree, but:
- The few times I left my “approved” stamp on a PR, it felt like it was worthless (to the person merging the PR later). Maybe it wasn’t, but at least I didn’t get any feedback whatsoever that it was helpful. So I stopped doing it.
- Personally, I noticed that I tend to care much more about repositories where I’m a member/maintainer. But maybe that’s just me.
I agree with your points, I was thinking about what options you have when you’re not a member/maintainer, but yeah just putting approval under a PR might not really have any effect. If the maintainer doesn’t trust the reviewer they will check themselves anyway, saving no time. If they trust, then maybe they should have given merge rights to that person already.
With docs, I think one interesting aspect is that in my opinion core maintainers are not necessarily the best people for writing those. Sure, they understand the subject matter the best, but usually that means documentation turns out quite terse and lacking context information that a lot of casual users might not have. On the other hand, some subjects need a very high degree of accuracy when writing docs. But those are probably not things casual contributors would feel qualified to add to anyway, so it might not be a big issue in practice that people add slightly incorrect information about tricky topics.
All good! Indeed, tone is very hard to convey over text.
Not only discourage, but they can be built in such a way as to keep the suggestive text around! A good example of that, IMO, is the way the Helix editor structures its issue reporting. (Please don’t submit issues to Helix just to try the flow out.) Once you click the New Issue button, you get a choice between Blank Issue (basically the existing default we’d have), Bug Report, and Enhancement. GitHub also shows “Don’t see your issue here? Open a blank issue” text below, so even if we don’t add an explicit Blank Issue template, it’ll still be there.
Once you hit Bug Report, all the key information is asked for in a form, which keeps the prompts visible even after the text fields are filled in. I can see this kind of persistent “Hey, we need this information” serving as the Clippy you’re thinking of, except with much less development overhead for maintainers, because we don’t have to maintain a GitHub bot on top of the repo.
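For reference, GitHub issue forms are defined in YAML files under `.github/ISSUE_TEMPLATE/`. A minimal sketch of what a Bug Report form could look like (the field names and wording here are only illustrative):

```yaml
# .github/ISSUE_TEMPLATE/bug_report.yml (illustrative)
name: Bug Report
description: Report unexpected or incorrect behavior
body:
  - type: textarea
    id: description
    attributes:
      label: What happened?
      description: Describe the bug and how to reproduce it.
    validations:
      required: true
  - type: textarea
    id: versioninfo
    attributes:
      label: Output of versioninfo()
      render: shell
    validations:
      required: true
```

Marking the `versioninfo()` field as required would address the original complaint directly, since the issue can’t be submitted without it.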
Automatically reproducing posted code is usually hard; not sure that’s going to be a viable strategy. I do agree that marking issues as stale would be good though, and it would be even more helpful for PRs that have had failing CI that’s not getting fixed. Closing outdated PRs instead of keeping them around is a good thing.
I agree that the experience is very pleasant! It seems to come from the experimental “issue forms”. Would a PR be welcome to set up one or two for the Julia repo? It seems a bit meta to open an issue to discuss the shape of issues.
One barrier to merging documentation PRs is that we define public API by “it’s in the manual”. It would be so nice if we could decouple documentation from public API.
Yeah, this means that a documentation change can be breaking. Which is kind of messed up.
Unfortunately, describing every bit of behavior of something in formal detail is quite hard! Not to mention checking that the formalized behavior is actually correct and what you want it to be.
Still, documenting it is better than not documenting it!
We don’t (deliberately) merge PRs that cause new failures. Most current causes of failures probably passed CI when they were initially submitted, but intermittent issues, hardware changes, or subtle interactions with other changes can make them fail sometimes; if you have enough flaky tests, and the failure of just one is enough to call the whole CI run a failure, then CI failures become common.
I’m not saying this is good, but chasing down the causes tends to be hard work and not that many people make this a core mission. If you want to find examples of PRs fixing flaky tests, a subset (probably only a minority) can be found with the “ci” label: Pull requests · JuliaLang/julia · GitHub
Sometimes, though, things get through due to inadequate test coverage and are only discovered later.
While I’m not at all confident this is in any way meaningful, I amused myself by comparing Julia and Python:
- Julia, source files in `base/`: ~130K LOC
- Julia, test files in `test/`: ~100K LOC
- Python, source files as discovered by `flspy=$(find . -name "*.py" ! -name "test_*")`: ~400K LOC
- Python, test files as discovered by `flspy=$(find . -name "test_*.py")`: ~470K LOC
So proportionately we seem to be in roughly the same ballpark, with test LOC ÷ source LOC slightly lower for Julia than for Python. I do think there’s an argument to be made that because Julia code may be more composable, we might need a higher tests-to-code ratio than Python, and we don’t have that.
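Spelling the comparison out with the rough counts above:

```python
# Test-to-source LOC ratios from the rough counts in the post
julia_src, julia_test = 130_000, 100_000
python_src, python_test = 400_000, 470_000

julia_ratio = julia_test / julia_src    # ≈ 0.77
python_ratio = python_test / python_src # ≈ 1.18

print(f"Julia: {julia_ratio:.2f}, Python: {python_ratio:.2f}")
```

So by this (admittedly crude) measure, Python carries more test code than source code, while Julia has noticeably less.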
FWIW, I do appreciate it when someone else reviews a PR before I get to it. But I doubt I remember to thank the previous reviewer(s) very often. So I see why it’s unrewarding.
A good analogy is, suppose a trainee has a draft of a manuscript but the language in which the manuscript is written is not the native language of the author. It’s really nice if the university’s writing center can take an initial crack at the manuscript, to smooth over some of the language. But that doesn’t mean that I as a scientist won’t find additional issues after the comments from the writing center have been addressed. Indeed, sometimes smoothing out language-specific bumps makes it easier to discover “deeper” problems and might net a larger number of comments than if the writing center had never been engaged.
I would go so far as to say that attracting more initial reviews from the community would be my favorite change to make to how we merge PRs. It would be really nice to have a system where such reviewers can build trust, and where approvals from higher trust levels send increasingly urgent signals to people with merge privileges that a PR deserves a final decision, etc. I am not sure whether that’s something we can set up, though.
If you want to upvote a new issue template:
I kinda liked the suggestion made by @adienes earlier, maybe GitHub has ways to make our life easier in this regard
Here are a few GitHub actions that could be useful:
- “Label approved pull requests” (GitHub Marketplace action) to prioritize based on the number of approving reviews
- “Pull Request Stats” (GitHub Marketplace action) to find and evaluate reviewers