The post “AI crap” is becoming more relevant every day.
I see good in automating mundane things, e.g. Copilot producing well-known boilerplate and hopefully teaching what it does, yet the post touched on a downside of generative AI that I don’t really see people talk about. People are very preoccupied with comparing human versus AI performance, like identifying particular AI mistakes or arguing whether AI products are good enough to replace humans. I see that as excessive attention paid to the peak of human performance, when mediocrity is a common and crucial aspect of humanity.
Not all writing is worth going down in history; a lot of it is vapid product reviews and celebrity rumors. Not all art is preserved in museums; most of it is forgettable commissions satisfying private inclinations. Mediocre products nevertheless have a vital role: they are opportunities for people to improve their craft and pay their bills while they do. Barring extreme nepotism, even the greatest grew through mediocrity.
Generative AI does not grow on its own; it parrots select products of human growth. The market is using it to take opportunities away from humans, a feedback loop on a grand scale that progresses into cultural stagnation. We may be momentarily satisfied by the products of the 2020s, but would we be after a couple of decades?
Ironically, as ML-generated content floods the internet, training new models may become more and more difficult.
That’s why big tech companies are making licensing deals with big media companies whose content is confirmed human-made. OpenAI has more deals than lawsuits at this point: All the media companies that have licensing deals with OpenAI (so far) | Mashable. Personally I find it unsavory, because the content creators had no say in it and could not have anticipated this use of their work, much like GitHub contributors and Copilot.
At least it’s sold as such.
Yes, my university launched a “private” GPT tool, trained only on “school content”. So after a couple of years, it will be training itself on its own content. :-/
The troves predating 2023 would at least be human content, and AFAIK these media companies have policies against AI generation and strong editorial infrastructure to enforce them. Still, if that changes, the stagnation feedback becomes an issue again. Some well-known outlets have been caught using AI generation before; here’s Microsoft-owned MSN: ‘Useless at 42’: Did Microsoft use AI to generate an athlete’s obituary? (ktla.com)
To answer the question in the title of the post, AI can mean “advancing indolence.”