Should General have a guideline or rule preventing registration of vibe-coded packages?

Sure, the transition would have to look like this: General is renamed to “uncurated”, and the current arXiv / AUR-style level of review is initially maintained.

A new subset with stronger requirements / governance is created. Possibly the same mechanism is used for stdlib, with even stronger requirements.

Then, in due time, we elevate lots of widely used existing packages to “curated”. Importantly, “curated” packages should not be able to depend on uncurated packages.
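The “curated may not depend on uncurated” rule is essentially a closure invariant over the dependency graph. A minimal sketch of checking it, assuming a toy registry (package names, the `deps` map, and the `curated` set are all made up for illustration, not taken from any real registry):

```python
# Hypothetical check of the proposed invariant: every dependency of a
# "curated" package, including transitive ones, must itself be curated.

def curation_violations(deps, curated):
    """Return sorted (package, offending_dep) pairs where a curated
    package reaches an uncurated package through its dependencies."""
    violations = []
    for pkg in sorted(curated):
        stack = list(deps.get(pkg, ()))
        seen = set()
        while stack:
            d = stack.pop()
            if d in seen:
                continue
            seen.add(d)
            if d not in curated:
                violations.append((pkg, d))
            stack.extend(deps.get(d, ()))  # walk transitive deps
    return sorted(violations)

# Illustrative toy registry: FastMath depends on an uncurated package.
deps = {
    "PlotTools": ["ColorCore", "FastMath"],
    "ColorCore": [],
    "FastMath": ["VibeHelpers"],
    "VibeHelpers": [],
}
curated = {"PlotTools", "ColorCore", "FastMath"}

print(curation_violations(deps, curated))
# FastMath violates directly, PlotTools transitively via FastMath.
```

A registry could run such a check on every registration or curation request, so the curated subset stays closed under dependencies by construction.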

Ideally, we soon reach a tipping point: Most packages most people use in their day-to-day lives (including transitive deps!) are curated.

Most people would sometimes need to use a few uncurated packages, just as ordinary Linux or Mac users sometimes need to use AUR-equivalents or build software from source because their distro doesn’t ship it centrally (or download an RPM / binary from a commercial vendor).

This is not an indictment of the uncurated package. For example, on Arch Linux it is advisable to skip the distro-provided builds of Julia, due to differing philosophies on using almost-compatible system libraries vs. vendoring them with Julia-specific patches.

Similar to arXiv / journals: trusting and citing arXiv results is totally fine if you read and checked the paper, or know the authors, or the paper/author/institution has name recognition; but an arXiv-only world without journals or a formalized institution of peer review simply has worse scaling behavior.

And similar to arXiv / journals, one of the big differences between “curated” and “uncurated” would be where the buck stops: in a curated registry, the curator ultimately owns all package names and can fork from upstream as deemed necessary.

This does not fix the other issue that a split registry addresses:

There is value in having convenient access to somewhat crappy packages.

There is also value in having a curated selection of packages that you know are not crappy without having to review the code or having to know the author’s reputation.

These two goals are in conflict. Instead of some uneasy compromise that achieves neither, we can just have both.

This would be a very good requirement.

When reviewing something human-written, one at least knows that giving good advice makes the world better because the author can learn from it, and there is an implicit social contract / economic equilibrium (you spent a lot of effort writing this, so I can spend a little effort reviewing it).

On the other hand, giving a human AI slop to review without disclosure is an extremely hostile act. It breaks that social contract, it breaks the old economic / game-theoretic equilibrium, and it attacks the human dignity of the reviewer.

It is like spam: the correct answer to spam is not to faithfully engage, but to scorched-earth block the spammer and exclude them from the community. (Unless you have too much free time, in which case 419eater-style responding in bad faith is even better at shifting the economic / game-theoretic equilibrium back.)