ChatGPT policy needed?

AI-generated content tends to be stated very confidently, even when it's wrong. Some humans have this problem too, but with AI it's constant: the output is always confident, and at this point it's wrong more often than not. Moreover, it's not wrong in the same ways humans are. AI tends to produce answers that look sophisticated but are really bad copy-pasta: something found on the internet that was a correct answer to some other question, but which is wrong, either wildly or subtly, with respect to the question at hand. Worse, AI can generate confident, incorrect answers in enormous volume. If we don't take a hard stance against AI-generated content, it could quickly become impossible to distinguish bullshit from real, useful answers. (@DNF already said the same better than I did, but I wrote this before I saw that post.)

It may come to pass that some future AI can answer programming questions confidently and correctly enough that we would actually, intentionally deploy it to help users, but we are not there yet. Even when that does happen, I think we'd want to be transparent about it, so that people know a bot is answering their question. In that sense the policy wouldn't really change: AI output that is clearly marked as such is already OK.
