While I agree there is a moral issue, I do not think it is a new one. The fundamental question is whether we should shift work previously done by humans to tools.
Is it moral … ?
- To use a wheel to move around?
- To harness the work of animals for transportation?
- To use a printing press to duplicate and disseminate writing?
- To record and replay audio?
- To communicate with each other via electromagnetic signals over great distances?
- To build factories and assembly lines where workers operate machines rather than craft individual items?
- To use machines to perform mathematical calculations?
- To use compilers and interpreters to turn human-readable language into machine code?
- To use software that we did not write and whose inner workings we do not know?
I do not mean to trivialize the question here, nor to equate the morality of these prior questions with the question before us: is it moral to use an artificial neural network to generate code?
However, I do see a general theme: is the development and use of technology moral?
If you are reading this, then you likely have accepted some of these technologies and deemed their use as moral.
While this new technology may appear to be fundamentally different and more autonomous than what came before, it is still based on prior human creations and efforts, much like the other technologies I mentioned. Indeed, these technologies are not fully autonomous.
Claude and other agent models did not choose to work on JuliaLang/julia; a software developer did. These tools needed to be directed to a problem and guided on the approach. They use weights derived from patterns they have seen before and extrapolate those patterns to new problems, yet they are known to be error-prone. The discipline of software engineering, however, has developed systems and methodologies to address the error-prone work of humans: systems of review and automated tests that check and verify expected behavior. We now apply those same systems of error correction to the output of these agents.
The term "AI slop" implies the absence of that system of error correction. While the technology could be used by those inexperienced with the methodologies of software engineering, bypassing best practices, that is not what I see at all in the commit log.
As with any technology, we must consider how and when it is used. Of the many communities I interact with, the Julia community is among the most conservative on this issue.
From the JuliaCon 2026 CFP page:
> The committee asks that all proposals are written with the same care, attention, and effort that the potential speaker would like them to be reviewed with. In particular, we ask that proposals not be generated with GenAI tools such as ChatGPT, Gemini, Claude, etc. We ask that submitters respect the time and effort of the reviewers by writing in their own words.
>
> We ask that all reviewers write their own review comments, i.e. that GenAI tools such as ChatGPT or Gemini are not used for the creation of review feedback.
In contrast, my employer is actively encouraging the use of this new technology in all of our work. They see it as a moral imperative to accelerate the pace of biomedical research.