Makes sense, and also tells me the Discourse software has been designed by a
thoughtful human, rather than vibe-coded!
I agree with adding a guideline to indicate whether a package was vibe coded, but I won't make it mandatory. After all, what's the real point? How can we really tell if Claude is better or worse at programming than developer X? I've even met folks who are much less skilled at programming than Claude! I am pretty sure we can find very bad packages in the registry right now.
In the end, all packages in the Julia General registry come with no warranty. Vibe coded or not, you need to check and test whether the package produces what you want.
Vibe coding something and planning what to code are two different things, IMHO…
No, but human learning and LLM "learning" are different in kind.
I'm not seeing that at all. In fact, the number of repositories that make use of Claude has exploded. Most of the comments you made in this thread seem to be based on feelings, not facts…
That in no way contradicts my point. I have not claimed that the projects banning the "AI" tools form a growing share of all projects.
The amount of spam and slop will, without a doubt, always exceed the quantity of valuable
stuff.
Why? In the end, we take in information, make correlations, and adapt our current knowledge to a new problem. That's not exactly how LLM training/execution works, but it is also not that different.
Yes… In fact, in the Julia repository right now, Claude is one of the top committers. Were the PRs vibe coded? Was it only an English check? Nobody knows… What about other LLMs that do not add the Co-Author information in commits?
I see this in very large, sensitive projects such as operating systems. I am not aware of a community like Julia's that has banned, or issued guidelines on, what people must or must not use while coding their own packages.
What is slop exactly?
Yes, sure, but it is up to the maintainers of the project to decide whether they accept it or not.
We are facing the other, much more difficult side: a lack of contributors!
One very important example: I created PrettyTables.jl, and it has been mostly a one-man show. However, last month, an amazing developer used AI tools and created a Typst back-end that people had asked me about for years. It works perfectly. Now Julia has a good way to write Typst tables that would not have been available if LLMs were not a thing. In this case, do I need to add a badge saying that PrettyTables.jl is "vibe-coded"?
@Mikhail_Kagalenko, I'd encourage you to pay particularly close attention to @Goerz's comments, as he is a core maintainer of the General registry. In particular, note that there is a policy about vibe coding:
The guidance on General links to a prior discussion here:
What responsibility? AFAIK, all open-source licenses state clearly that the software comes without any warranties or liability for damages. The responsibility lies with whoever uses the open-source, free code.
Your moral responsibility to the Julia community. This is not about legal terms; this is about our expectation that packages in the General registry pass a bar of being maintained in a sensible way. We don't want to be buried in packages that look good on the surface but break as soon as you poke them even a little bit (aka "slop"), nor do we want packages that are unmaintainable because no human developer has a clear picture of their design (aka "vibe-coded").
Hmm, not good. There are a lot of packages in the registry that were not vibe coded, that are unmaintained, that do not work, that have breaking changes in minor releases, and that would probably be better if AI were used so the maintainers could do things more easily…
The registry is not curated. Beyond a very few checks, anyone can submit packages. So either we change this or we let each developer decide what they want.
How many of the >12k packages in the General registry currently compile on the latest and LTS versions of Julia, have tests that pass, have over 50% unit-test coverage, are well documented, and have been maintained (perhaps the bar being: have addressed any identified GitHub issue within the past 6 months)?
Have you tried Tachikoma.jl?
Yes, the General registry is not curated. But there's a key exchange that happens:
- By registering a package, you're taking something valuable from the community: a name in the default Pkg registry.
- In exchange, you're providing functionality!
Many folks here take the registered packages seriously: core devs evaluate test errors on new releases, many other package maintainers help ensure version upgrades work smoothly throughout the entire ecosystem, I'm currently working on tooling to help automate security advisories and updates, and so on.
Even in the most permissive of ecosystems, it's still a major community problem when a package has serious issues.
That is not entirely true. There are both automatic and manual checks for packages that are submitted for registration, see, e.g., the discussion in
Hopefully many, as Julia Computing runs automated checks for this whenever new versions are released.
There are minimum requirements for packages at registration time, and stricter requirements for 1.0 versions. I would be very open to making the automated checks for new registrations stricter than they currently are. Running tests and checking for 50% unit-test coverage in particular is something that's long been on my mind. That's a discussion to be had separately, though.
There is an expectation that registered packages continue to be maintained. If a package becomes unmaintained but nobody is submitting PRs and it continues to work, that's no problem. When PRs are submitted but not reviewed or merged, we as registry maintainers actively facilitate adding new co-maintainers to old projects, in whatever form is appropriate. In the most extreme version of this, we have a form of "eminent domain": an unmaintained package is forked to a new organization with new maintainers, and the registry is modified to point to that new fork, even without the involvement of the original package author (if they were hit by the proverbial bus and have been unresponsive despite several months of communication attempts across different channels).
The bottom line is that the General registry is not a free-for-all (as discussed in the intro to the previous discussion about vibe-coded packages) but a shared community resource.
Cool, sounds like you've got it all covered then. Anything I've submitted to the registry is high quality and being actively maintained.
Although I'm still curious what actual percentage of the 12k packages pass the bar I proposed.
Yes, sure! Me too! Those are very important things that probably most devs look for. I just do not understand this "you shall not send AI vibe-coded packages here." You can have both! Actually, IMHO, it is easier today to meet those requirements if you use AI.
By "curate" I mean that someone downloads the package, runs some code, checks whether the documentation is OK, etc. This does not happen for a very high percentage of the registered packages, let alone for new releases. Nobody checks for breaking changes in the API and other things.
However, that's fine! We are developers devoting our free time. But precisely because we are developers devoting our free time, the Julia community should be careful with those kinds of rules.
For passing tests (against julia-dev), it's about 50%:
Did this plot exclude packages with tests like `@test true == true`?
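To make the concern concrete, here is a minimal sketch of the difference (the `sum_columns` function is made up for this illustration and is not from any real package): a tautological test passes no matter what the code does, while a test that pins down actual output can fail when behavior regresses.

```julia
using Test

# Hypothetical function under test; it stands in for real package code.
sum_columns(m) = vec(sum(m; dims=1))

@testset "trivial vs. meaningful tests" begin
    # Tautology: passes no matter what the package does.
    @test true == true

    # Meaningful: exercises real behavior and pins down the expected result.
    @test sum_columns([1 2; 3 4]) == [4, 6]
end
```

A coverage threshold alone would not distinguish these two cases, since even a tautological test suite can execute most lines of a package.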
Just to be extra clear, again: I'm not seeing any issues with the Tachikoma.jl package from which this discussion spun off. You definitely seem to have sufficient experience to use the agent workflow effectively. So: more power to you!
There's absolutely no issue with LLM-assisted coding. It's great when it can make people more productive, and I also agree that, when used correctly, it has great potential to improve the quality of packages. For Tachikoma in particular, that seems to be the case.
What we're trying to protect against at the ecosystem level is huge quantities of low-effort code swamping the registry.

