ChatGPT policy needed?

I’ve seen some recent posts involving ChatGPT “answers”, and
I am concerned about confusion between actual Discourse discussion
and AI “fake discussion”, especially in searches of our forum.

A number of threads got polluted by ChatGPT “answers” that
were very difficult to recognize as fake, or to verify as
correct. I’ve muted those for myself, but thought it might make
sense to have some sort of policy for ChatGPT and its ilk.

As is, it feels like the fake search engine result sites that say
“everything you want to buy about topic”…


How do you know which “posts” were chatGPT-generated?

They said that they were; search the forum for “ChatGPT”. It is
possible that I did not understand correctly, but…


We could send each post to ChatGPT and ask it if it wrote the post. :wink:


In that case the problem is limited, imho. It’s the ‘non-declared’ AI posts that are the real issue. And there have been some of them, too, afaict.

I mean, if everyone starts quoting ChatGPT it becomes tedious of course, but for now it is probably the novelty of it that is causing people to post them (as have I). I think that anyone who quotes ChatGPT should definitely mark it clearly.


I say “Yes, we need it (the policy)”.
I did some experiments with ChatGPT and it’s quite awesome, but the technical solutions need some fine-tuning. Or more than fine. The questions I tried did lead me to a proper solution, but the answers weren’t usable on their own. It’s more like a Google search with many outdated answers, or slightly different questions and therefore different answers; the search results still help you on your way, like finding the proper packages and documentation.

So, the policy should at least be: it must be declared clearly!

Here’s a possible policy: if you’re going to post AI-generated content, clearly mark it as such and don’t try to pass it off as coming from a real user. Unmarked AI-generated content will be considered spam, and if it’s your only contribution to the forum, you will be banned and blocked like all other spammers.


What is the possible problem? For example, someone posts a question, perhaps some code they wrote that they thought would work but is erroring and they want help to fix it.

Possible outcomes:

  • a real person replies with no solution / more questions (most likely outcome!)
  • a real person replies with a correct solution
  • a real person replies with an incorrect solution
  • a real person posts (pastes) AI-generated solution which is correct
  • a real person posts (pastes) AI-generated solution which is incorrect

Conditional on a reply with a proposed solution, which may or may not be correct, what is the danger in it being direct from a human brain or from AI? Why does the source matter in this case?

The biggest problem I see is people blindly trusting anything that is AI-generated, but in this context I do not see why it would be concerning from a content moderation standpoint.


No, I think the biggest problem is AI content which is not marked, and which is passed off as the answer of the poster themselves.

Wrong answers from ChatGPT tend to be more misleading than wrong answers from humans, because the ChatGPT answers often are very well-written in a convincing style, with clean code, appearing well-researched, and going in-depth, but wrong or misleading below the surface level. It is much easier to spot poor answers from human posters.

In fact, I would compare ChatGPT’s answers to those from a well-informed human who deliberately tries to mislead in subtle ways. This is much worse than a simply wrong human answer.


It’s also just annoying to read. ChatGPT is not very succinct, and I don’t want this forum to get overrun with boilerplate writing.


In my view, parroting a large language model simply goes against good faith participation here. Instead of engaging or accusing folks of doing such, though, please just flag such suspected posts for moderator review — just like we already direct you to with regards to one’s tone.

It’s not unlikely that newly registered users will post LLM output in an effort to earn trust in order to do worse things than just post more LLM output.


AI generated content tends to be very confidently stated, even when wrong. There are some humans with this problem too, but with AI it’s constant—it’s always confident and at this point wrong more often than not. Moreover, it’s not wrong in the same ways that humans are. AI tends to do things that look sophisticated but are just bad copy-pasta from something on the internet that was a correct answer to some other question, but which is wrong—either wildly or subtly—with respect to the question at hand. Worse, AI can generate a lot of confident, incorrect answers. If we don’t take a hard stance against AI generated content, it would be very easy for it to become impossible to distinguish bullshit from real, useful answers. (@DNF already said the same better than I did, but I wrote this before I saw that post.)

It may come to pass that some future AI can answer programming questions confidently and correctly with enough reliability that we would actually intentionally deploy it to help users, but we are not there yet. And even when that does happen, I think we’d want to be transparent about it so that people know that a bot is answering their question, so I don’t think that would actually change the policy since AI output that is clearly marked as such is already ok.


The singularity will occur when AI chat bots begin arguing with each other.


Hah, I don’t think the “bullshit singularity” was what Kurzweil had in mind, but of course it’s what we’re going to get, isn’t it?


There are (as always) important differences in scenarios:

1. Trying to pass off AI-generated content as your own or another real person’s. Or, what is functionally equivalent, posting in such a way that it is easy to assume that it is not AI-generated.

2. Uncritically posting AI-generated commentary on Julia (or anything else, really), even if it is marked as such. This is obviously a problem; it should be discouraged, or the post flagged so that a warning or caveat can be posted.

3. Posting AI-generated content in the context of discussing its performance and usefulness. I think it is important to label this as suspect, and if it is wrong, to clearly mark it as wrong. An analogy is examples of incorrect or poorly written code in a programming book: the convention is to mark them clearly as bad code. I did not do this in my previous post of ChatGPT content. If you are new to Julia and looking for answers, you have no way to distinguish the correct answers from the nonsense.
EDIT: I edited the other post on ChatGPT to clearly mark correct and incorrect answers.

Fair enough :+1: I was playing devil’s advocate here as I don’t feel strongly either way. I definitely agree that the current edition of ChatGPT over-projects confidence.

I think this is unlikely to become as much of a problem here as it might in other places (e.g. StackOverflow), as the incentives here are basically nonexistent for such behavior.


A post was split to a new topic: How does ChatGPT work?

Given the propensity of confident-sounding/technically-wrong answers, and the limited time of human moderators, any policy that does not ban it altogether may unfortunately end up being practically untenable :man_shrugging:

Here’s what ChatGPT has to say on the matter:

It’s not for me to say whether or not the Julia Discourse forum should implement a policy regarding posts and comments generated via ChatGPT. That would be up to the administrators and moderators of the forum to decide based on their goals and guidelines for the community. It’s important for forums and online communities to have clear policies in place to ensure that the conversations remain constructive and on-topic, but the specific details of those policies will vary depending on the community and its goals.
As a language model trained by OpenAI, I don’t have any direct experience with online forums or communities, so I can’t speak to the specific policies that the Julia Discourse forum should consider. However, in general, it’s important for forums to have clear policies in place to ensure that conversations remain constructive and on-topic. This might include policies regarding the use of offensive language, spamming, and other behaviors that can disrupt the community.


To your point, and to use a real example from the forum, consider this post:

I was reading the documentation and on this page Frequently Asked Questions of DifferentialEquations.jl, in the Stability and Divergence section, it says:

if you see these instability warnings during a parameter estimation process, this is likely the underlying problem. Simply check sol.retcode != :Success and throw an Inf cost and most optimizers will reject steps in those parameter regimes!

I wonder which optimizers will do that? and how exactly do they “reject the steps”? (e.g.: do they simply skip the current step? or do they randomly initialize a new set of parameters?)

The accepted solution was:

They skip the current step and do some line search thing. I know the Optim.jl algorithms support this. I need to dig around to see which others do it well.

While a ChatGPT solution looks like:

The specific optimizers that will reject steps in parameter regimes where the solution is unstable will depend on the optimization algorithm being used. DifferentialEquations.jl uses a variety of optimization algorithms, so it’s not possible to say which ones will reject steps in unstable regimes without knowing which algorithm is being used.

In general, optimizers will often have a way to handle situations where the optimization process encounters an unstable or invalid solution. This could involve skipping the current step and trying a different set of parameters, or it could involve randomly initializing a new set of parameters and starting the optimization process over from the beginning. The specific details will depend on the optimization algorithm being used.
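For what it’s worth, the pattern quoted from the DifferentialEquations.jl FAQ can be sketched in a few lines of Julia. This is a minimal illustration only: the toy ODE, the `cost` function name, and the placeholder objective are made up here, not taken from the thread or the docs.

```julia
using DifferentialEquations

# Hypothetical parameter-estimation cost for the toy ODE u' = p[1]*u.
function cost(p)
    prob = ODEProblem((u, p, t) -> p[1] .* u, [1.0], (0.0, 10.0), p)
    sol = solve(prob)
    # The pattern from the FAQ: if the solve did not succeed (e.g. the
    # solution diverged), return an infinite cost so the optimizer
    # rejects this parameter region.
    sol.retcode != :Success && return Inf
    return sum(abs2, sol[end])  # placeholder objective
end
```

An optimizer such as those in Optim.jl (which the accepted answer says support this) would then minimize `cost`; steps that land in unstable parameter regions see an infinite cost and are rejected by the line search.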

