JuliaLang/julia is AI slop now and I am heartbroken

Example: Add support to zero-width characters by ronisbr · Pull Request #29 · kahliburke/Tachikoma.jl · GitHub

I needed to fix a problem in Tachikoma.jl with zero-width characters. I use them a lot to represent mathematical operations and structures (like time derivatives, unit vectors, etc.). I knew what I needed to do: create a buffer whose cells carry suffixes attached to the current printable character. I literally asked:

Claude, please fix the implementation on the files buffer.jl and terminal.jl so that it can support zero-width characters. You must add a suffix string to the Cell structure to contain those additional characters to the printable one. Also, fix any widget that directly write to the buffer. In terminal.jl you need to implement an additional check in comparison to verify if the cell has changed due to the zero-width string. At the end, add some tests to cover the new code.

And boom! I got working code that needed only minor updates, with no effort on my part. How is this different from hiring a junior programmer on UpWork (I obviously do not have the budget to hire an experienced one…) to do the task for me? I literally did not have time to implement this myself right now, since I am working on porting our satellite simulator to the new framework.
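To make the design in the prompt concrete, here is a rough sketch of the idea. All names here (`Cell`, `render`, `changed`) are illustrative assumptions for this post, not Tachikoma.jl's actual API: each cell stores one printable character plus a suffix of zero-width characters (combining marks) that must be emitted together with it, and the diffing step must compare the suffix too.

```julia
# Illustrative sketch only — not Tachikoma.jl's real types or functions.
struct Cell
    char::Char      # the printable character
    suffix::String  # zero-width characters attached to `char` (e.g. U+0307)
end

# A cell with no zero-width suffix.
Cell(c::Char) = Cell(c, "")

# Render a cell: the suffix is printed immediately after the character so the
# terminal composes them into a single glyph.
render(cell::Cell) = string(cell.char) * cell.suffix

# The extra comparison the terminal diff needs: two cells are equal only if
# both the printable character AND the zero-width suffix match; otherwise a
# change in the suffix alone would never trigger a redraw.
changed(old::Cell, new::Cell) = old.char != new.char || old.suffix != new.suffix

# Example: ẋ (a time derivative) is 'x' plus combining dot above, U+0307.
xdot = Cell('x', "\u0307")
```

The key point is the last function: without comparing `suffix`, a cell that changed only in its combining marks would look unchanged to the terminal and never be repainted.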

7 Likes

I don’t think your post was mean-spirited, as some implied. I agree that a lot of (partially) generated code in PRs may be classified as slop: either it was reviewed but the “slop” was deemed good enough, or it wasn’t reviewed by a human at all, or perhaps for some reason I cannot think of at the moment.

I would like to see some data on whether this slop causes problems down the road. Has the frequency of “silly” bugs increased? Does it have an impact because no one really owns the generated code, so there are no volunteers to fix problems, etc.?

3 Likes

As much as I like progress and agree that this post is a tad hardline, “embrace the future” and “it’s just a tool” are also frequently used to dismiss drawbacks and unsustainability as casually as the poster dismissed providing at least a partially empirical argument. I prefer to discuss specific statistics and moral stances, otherwise people are just talking past each other. The poster could very well have a sound argument, but they’re openly uninterested in giving one and don’t owe one to anybody in a forum in which they’re posting for the first time. If people in this forum or some developers wanted to have a conversation about curbing AI uses or harms, it was unlikely to happen on this topic.

17 Likes

every PR that gets merged has been reviewed by a human!

2 Likes

Not the same thing: a PR being reviewed does not necessarily imply that all of its code was reviewed.

On the other hand, I think in the near future not having code reviewed by an AI will just be a missed opportunity to improve PRs before they get merged.

7 Likes

Seeing Claude commits in the Julia repo is not really an indication that AI is being used in Julia development at a higher rate than in any other software, or without any oversight/review. As others mentioned, such commits only appear when Claude is allowed to make the commits itself, as opposed to the developer committing after Claude makes its edits.

At this point, Claude Code usage is pervasive throughout many open source libraries (and commercial software), even if it is not apparent from commit histories. So while I can understand an aversion to “vibe coded” software, one should be aware that if you are using any software these days, it is highly likely that you are relying on code written with an AI somewhere in your stack (hopefully in a supervised, well-reviewed manner, and not completely vibe-coded). As an example from a much larger community than Julia, see PyTorch, which even has a standard CLAUDE.md and custom skills in the main repo: GitHub - pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration

3 Likes

Most posters overlooked this. The OP’s objection seems to be more moral than technological. It’s a new version of the Torvalds / Stallman divide years ago. I’m more of a practical person willing to adopt expedient technology, but I see where the OP is coming from.

8 Likes

While I agree there is a moral issue, I do not think it is a new one. The fundamental question is whether we should shift work previously done by humans to tools.

Is it moral … ?

  • To use a wheel to move around?
  • To harness the work of animals for transportation?
  • To use a printing press to duplicate and disseminate writing?
  • To record and replay audio?
  • To communicate with each other via electromagnetic signals over great distances?
  • To build factories and assembly lines where workers operate machines rather than craft individual items?
  • To use machines to perform mathematical calculations?
  • To use compilers and interpreters to turn human language into machine code?
  • To use software that we did not write and whose inner workings we do not know?

I do not mean to trivialize the question here, nor equate the morality of these prior questions with the question before us: is it moral to use an artificial neural network to generate code?

However, I do see a general theme: is the development and use of technology moral?

If you are reading this, then you likely have accepted some of these technologies and deemed their use as moral.

While this new technology may appear to be fundamentally different and more autonomous than the last, it is also still based on prior human creations and efforts, much like the other technologies I mentioned. Indeed, these technologies are not fully autonomous.

Claude and other agent models did not choose to work on JuliaLang/julia; a software developer did. These tools need to be directed to work on a problem and guided on the approach. They are using weights from prior patterns they have seen and extrapolating those patterns to new problems, and they are known to be error-prone. The discipline of software engineering, however, has developed systems and methodologies to address the error-prone work of humans: we have systems of review and automated tests to check and verify expected behavior. We now apply those systems of error correction to the output of these agents.

The term “AI slop” implies the absence of that system of error correction. While the technology could be used by those inexperienced with the methodologies of software engineering, bypassing best practices, that is not what I see at all in the commit log.

As with any technology, we must consider how and when it is used. Of the many communities I interact with, the Julia community is among the most conservative on this issue.

From the JuliaCon 2026 CFP page:

The committee asks that all proposals are written with the same care, attention, and effort that the potential speaker would like them to be reviewed with. In particular, we ask that proposals not be generated with GenAI tools such as ChatGPT, Gemini, Claude, etc. We ask that submitters respect the time and effort of the reviewers by writing in their own words.

We ask that all reviewers write their own review comments, i.e. that GenAI tools such as ChatGPT or Gemini are not used for the creation of review feedback.

In contrast, my employer is actively encouraging the use of this new technology in all of our work. They see it as a moral imperative to accelerate the pace of biomedical research.

19 Likes

This topic was automatically closed after 6 hours. New replies are no longer allowed.