I found this thought-provoking. There is also a link to a paper about potential elimination of
jobs (fashion models among them, who would have thought?).
I find it a bit silly that the two dominant narratives are:
ChatGPT dooming us all, irrevocably;
ChatGPT solving all of humanity’s current problems, including global warming, world hunger, cancer, etc.
In all likelihood, neither is true. We are talking about a computer program that generates text based on a huge amount of other text (without respecting the rights of the original authors). Because, for humans, the ability to write coherent text is correlated with education and intelligence, most people are not in a position to assess this technology purely on its output; our judgement will be heavily biased.
This, of course, does not prevent anyone from being a pundit on the phenomenon. Quite the opposite.
Amazingly, calling it “AI” works flawlessly, for roughly the fifth time in history, depending on how you count. People, especially experts, who were initially restrained enough to call it ML were quickly selected out of the discussion, because that framing is not sensational enough. It only took about a year for this to happen.
I admit that I do not know enough about ChatGPT to judge it in depth. That would probably take 10 years of work in a field different from mine. But as an economist, I vividly remember the rise of cryptocurrency and the associated narrative: in the beginning it was well understood that it was a pyramid scheme, but as time went on, those simply pointing this out repeatedly were marginalized from the discussion, because hearing it for the nth time wasn’t interesting enough. Stakeholders eventually pressured entities that had (sensibly) been holding out on cryptocurrencies to take them seriously… and then came the bad news about various players engaging in (surprise!) a pyramid scheme.
The potential of LLMs to cause significant harm is real not because they are smart, but because they are “stupid”. They can easily generate massive amounts of disinformation (text, video, images) to influence people, to the degree that global wars could be started. Humanity’s end may be closer than we think, and it will not be Terminator-style.
There is already a firehose of such content on social media sites. I am not saying it is harmless (quite the opposite), but even without LLMs, supply is not the constraint.