This highlights another thing: even if AI can answer run-of-the-mill programming questions at some point, it will still remain very far from the expertise of someone like Chris answering a question about his own code in an area where he is one of the world’s foremost experts. At that point we could consider an AI bot to answer beginner questions, but you really wouldn’t want to let the bot answer questions like this one.
Maybe I should use ChatGPT to extend my responses, haha. I think sometimes the terseness comes off as not wanting to be bothered, when really it’s “I’ve got two minutes left on the train, so you either get this quick note or nothing”. Slap a bit of ChatGPT around the real answer and you have yourself a paper-worthy response.
ChatGPT’s lack of terseness bothers me.
Walls of fluff devoid of content are grating to read.
I know a few real people who are just as terse. It makes me wonder which would be easier: for ChatGPT to pass the Turing test, or for a human to pass the “reverse Turing test”, i.e. to successfully impersonate ChatGPT?
We are now living Aldous Huxley’s worst nightmare, when the truth is drowned in a sea of irrelevance.
Agreed.
Tongue in cheek, but have you told ChatGPT to make its responses more terse or concise? Had the same issue and would tell it to either outline its response in concise bullet points or constrain itself to only one or two sentences for a response. Has helped a lot.
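The suggestion above can also be baked in up front rather than repeated in every chat. As a minimal sketch, here is how one might construct a request payload in the OpenAI chat-message format with a system instruction enforcing terseness (the model name is an assumption, and no actual API request is made here):

```python
# Sketch: steering ChatGPT toward terse replies via a system message.
# Uses the OpenAI chat-message schema (role/content dicts); the payload
# is only constructed, not sent, so no API key or network is needed.

def build_terse_request(question: str, max_sentences: int = 2) -> dict:
    """Build a chat-completion payload that asks for a concise reply."""
    system_prompt = (
        f"Answer in at most {max_sentences} sentences. "
        "No preamble, no recap, no restating the question."
    )
    return {
        "model": "gpt-3.5-turbo",  # assumed model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    }

# Usage: pass the resulting dict to your chat-completion client of choice.
payload = build_terse_request("What's the best way to profile a Julia loop?")
print(payload["messages"][0]["content"])
```

Putting the constraint in the system message (rather than the user message) tends to make it stick across a conversation, though models can still drift back to verbosity over long sessions.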
I fully agree. It is great to have you here, Chris. I’m keeping my fingers crossed. To articulate it explicitly: this comment is of course entirely positive! Again, I’m really keeping my fingers crossed. :- )
Error: “Chris” is ambiguous. Response type unknown.
Still keeping my fingers crossed. :- )
Stefan’s copy-pasta remark shamed me into introspecting about how much of my own thought is actually some form of copy-pasta.
There was a Twitter thread (which, now that I want to reference it, I cannot seem to find) where an expert in some variant of ferromagnetics asked ChatGPT some questions. The initial answers cited legitimate experts in the field, but upon deeper questioning the AI started citing sources that it was making up, i.e., that didn’t actually exist. Oh, they sounded plausible enough that the news media would be easily fooled, and the expert was at first too, but she was very thorough.
We need the academic equivalent of Asimov’s Laws for AI chatbots, I guess.
Off-topic, but amusing to read regarding jailbreaking the guardrails of ChatGPT:
Yes, a policy is needed, but I don’t think the accepted solution goes far enough. It suggests banning only if the user has contributed nothing but AI-generated content. A simple way to game this is to post a single comment that is “real” and then to proceed to post “fake” comments thereafter.
I think the policy should be analogous to the policy on cheating on university examinations: one strike and you’re out. A strike means posting AI-generated content without attribution (again, in the university, that is plagiarism, and its consequence is dismissal).
How to recognize the content is a much thornier matter. I have no idea on that. But, as an occasional reader of this forum, I can say one thing for sure: I won’t be coming back here if I get the idea that AI-generated content is taking over. It’s not that I worry it’s wrong. It’s that the whole place will stink when there are even occasional skunks here. And I’m not here out of necessity, but rather interest. (Ditto my use of Julia.)
I didn’t understand the accepted solution to be the decided policy, but only a temporary resolution for this discussion.
The real policy will be whatever we see in the updated terms of use. Is it already there? I didn’t check.
I don’t think the responsible people will have any problem banning somebody if they think it is needed or better for all of us. On the other hand, I am happy that I would receive some warnings before being banned, so I could check my behavior and do better; otherwise, perhaps I wouldn’t be here anymore. I don’t know if that would actually be a problem (no need to answer this! I don’t want to know!).
Being restrictive is good, but not always being that way is better.
Eh, policies like these aren’t like computer code. The mod team is human and members here are human (or, well, they should be and we treat them as such unless it’s abundantly obvious) and we can see through things like this. Policies just must be clear in their intent so that folks can understand and respect the out-of-bounds areas, even if the edges are a little fuzzy at times.
We’ve not changed anything yet, still working on what that looks like and how it fits with the existing guidance.
Please add some “Language diversity” exceptions:
my proposal:
Ethically accepted AI tool uses (including any GPT-based tools):
* Translation (and improving translations), spell checking
* Improving politeness / non-violent-communication style
Reasons:
- I am not a native English speaker. My English is very basic (~raw), and I find it very difficult to express myself politely.
- One of my favorite books is “Polite Technology”. And I know that politeness is very important in all types of communication, even technological communication.
- I can use ChatGPT as a translator, or to improve/enhance the politeness of my text.
- IMHO: AI can be used to help people perform tasks that would be difficult or impossible for them to do otherwise.
Disclaimer: I have used ChatGPT/DeepL/Google tools to improve this comment.
A real-life ethical example:
An extract from the “BUILDING CONTRIBUTOR ENTHUSIASM” chapter of “Polite Technology”:
People who start contributing to open source usually have a grain of enthusiasm already. Your job as the existing contributor is to nurture that bit of enthusiasm into something bigger.
- Be generous in your praise
- If you have to go through code reviews, help when necessary and help them understand the goal, so that they don’t feel like they’ll never be done.
Here’s an example of an especially friendly thank-you:
Thanks so much for this PR! It’s such a big lift for this project and I
really appreciate you taking the time to make it. Hope to see you around!
My ChatGPT question:
please write a polite alternative for this Github technical communication text
“Thanks so much for this PR! It’s such a big lift for this project and I
really appreciate you taking the time to make it. Hope to see you around!”
And this is ChatGPT’s suggestion:
“Thank you very much for submitting this PR. Your contribution is greatly appreciated and will be a significant benefit to the project. I appreciate your time and effort in creating it. It was a pleasure collaborating with you and I hope to work with you again in the future.”
So my ethical dilemma:
- If I indicate that this text was generated by ChatGPT, it loses its original purpose of politeness.
- Maintaining open source software is hard… and we have to use tools that automate emotional labor. (See “Polite Technology”.)
I don’t think the issue is with people using various tools (including ChatGPT) to assist them with a post. The bigger issue I have is with what I’ve been seeing circulating around other forums, where people are testing out chatbots to auto-reply to questions - or manually pulling a “ChatGPT says ‘…’”
The point of a user forum is to get thoughts, commentary, and ideas from other people. If people want an AI response to a question like “what’s the best way for me to do X”, then they should just use that engine directly and skip the forum. If you use an AI to help you in your response, I feel that’s fine, but even then, of the two responses you provided:
“Thanks so much for this PR! It’s such a big lift for this project and I really appreciate you taking the time to make it. Hope to see you around!”
and
“Thank you very much for submitting this PR. Your contribution is greatly appreciated and will be a significant benefit to the project. I appreciate your time and effort in creating it. It was a pleasure collaborating with you and I hope to work with you again in the future.”
I prefer the former. The second just sounds artificial on a forum. I would have instead gone with a prompt like:
Write me an alternative version of the following message, that is appropriate for an informal internet forum: “Thanks so much for this PR! It’s such a big lift for this project and I really appreciate you taking the time to make it. Hope to see you around!”
Which results in:
Hey, thanks a ton for this PR! It’s a huge help to the project and I really appreciate you putting in the time and effort to make it. Look forward to seeing you around on the forum!
Which still feels a bit awkward, but much better than the overly formal response. But now you’re just shifting the problem to knowing what to ask the AI… doesn’t seem worth it TBH.
Here’s the only policy right now; it’s in the TOS and has been… forever, I think.
- the Content is not spam, is not machine- or randomly-generated, and does not contain unethical or unwanted commercial content designed to drive traffic to third party sites or boost the search engine rankings of third party sites, or to further unlawful acts (such as phishing) or mislead recipients as to the source of the material (such as spoofing);
I think this pretty well covers what we don’t want.
Using a machine to assist in generating your content is fine. This is very much a know-it-when-you-see-it sort of thing and @ImreSamu you certainly do not need to put a disclaimer on comments like the above — that has no hint of the sort of thing that’s problematic about ChatGPT in forums like these (like its confidently incorrect blithering and the potential for an overwhelming amount of it, for example).