Do LLMs (ChatGPT) help reduce basic questions on Discourse (and Slack)?

Not sure if this has been asked before; if it was, please direct me to that thread.

I just wonder: today, in 2025, do you think there are fewer questions from students or other newcomers to Julia appearing on Discourse or Slack?

I also wonder whether you think LLMs decrease the number of low-effort posts or increase them instead.

I'm not sure I've seen any data about this for any sort of forum, technical or not. Stack Overflow certainly must be a better place now due to the reduced number of posts (but maybe there is also more LLM junk now).

2023-07-14:
“Stack Overflow from January 2016 to June 2023.” : 16% decrease

"In this work, we investigate how the release of ChatGPT changed human-generated open data on the web by analyzing the activity on Stack Overflow, the leading online Q&A platform for computer programming. We find that relative to its Russian and Chinese counterparts, where access to ChatGPT is limited, and to similar forums for mathematics, where ChatGPT is less capable, activity on Stack Overflow significantly decreased. A difference-in-differences model estimates a 16% decrease in weekly posts on Stack Overflow. "

1 Like

Here is a snapshot of our metrics since November 2022 (weekly posts) as compared to StackOverflow. (Note the nonzero axes; these are just the default plots out of the respective sites.)

8 Likes

I think one major difference between our forums and StackOverflow is that our forums are pretty welcoming places that encourage discussion, help, and clarification.

StackOverflow is a useful library of answered questions, but because of the design choices they made in order to turn it into a library, the experience of actually going there to ask a question is dreadful. It’s intimidating, unfriendly and often demeaning to ask a question there, so I’m not surprised that people end up asking LLMs which are directly trained on StackOverflow data rather than trying to interact with the actual StackOverflow community.

I think everyone who has tried to get help there has at least once been hit with the classic “Closed as duplicate” and then given a link to an irrelevant question that doesn’t actually help.

16 Likes

I personally feel we do not really have many low-effort posts. Usually we can refer to the one meta-post, the question improves, and overall we have a really nice human interaction in the end.
Usually it is just people posting for the first time who might not yet be familiar with how we can help them best. For them I think human interaction is important, to help them solve their problem and get a bit better at actually presenting it.
I second Mason's statement that I hope our forum is a pretty welcoming place, but I feel that the human interaction and current workflows contribute to that. An LLM might have the opposite effect?

I prefer such a mode of handling questions over any AI help, since for me this forum, Slack, Zulip, etc. are about human interaction and helping others.

5 Likes

I think it is quite evident that technical forums have lately been losing their significance as places where people ask for help solving various problems, and are instead remaining places for more general communication, learning about news, and sharing ideas.

There’ve been some smaller second-order effects in behaviors here that I’ve been casually tracking. These are all very much “shooting-from-the-hip” sorts of attempts at summarizing the anecdata I’ve seen:

  • Spammers are using LLMs to bury their spam in ham. So far the ham is very dry and overcooked and obvious to spot, but I expect this to become harder.
  • I suspect some new-to-the-forum real humans are training themselves in online interactions on chatbots. And they forget they’re talking to real humans here. This leads to very low-effort posts that are effectively like “chatbot prompts” to the community. This is quite rare, but interesting to me.
  • I think LLMs have been a boon to our non-native English language population, but this is harder for me to evaluate, looking from the outside.
7 Likes

Does anyone have any opinions on Slack? It feels to me, anecdotally, that the help desk has leveled up in question difficulty. I kind of remember that in the past I could classify questions into new-user/experienced, but now none of the questions are easy to answer. It feels like LLMs have swallowed all these entry-level questions.

+1 data point. Chinese native speaker here.
Nowadays I'll talk to ChatGPT first when I don't know how to fix an error or how to do something in Julia, even if GPT doesn't solve my problem.
I see talking to GPT as a kind of rubber duck debugging.

To give a real-world example:
I'm trying to translate the CORE-MATH math library into pure Julia.
I'm having problems with the output of the C implementation not matching the Julia implementation.
(1) Perhaps this is a good time to clean up my code and ask for help.

But I think I should try to narrow it down as much as possible before asking.
So I continue debugging, narrowing the problem down to some bit-shifting operations in C.
(2) If GPT didn't exist, I'd post 3-5 lines of C code and the corresponding Julia code and ask why their outputs don't match.

But I chose to ask GPT first, and GPT indicated that assigning a UInt64 to an Int32 in C might be UB.
After more conversation, I ended up implementing a similar operation in Julia.
(My implementation passes all the tests, so maybe GPT has helped me with this.
(3) But it still seems like a good question to ask, and maybe I should post and discuss it with people to see whether these operations in C are really UB, and whether there's room for improvement in Julia's implementation.)
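For what it's worth, here is a minimal Julia sketch of the kind of conversion being discussed (hypothetical values, not the actual CORE-MATH code). In C, casting an out-of-range `uint64_t` to `int32_t` is implementation-defined (C17 6.3.1.3) rather than strictly UB, whereas Julia offers two explicit, well-defined choices:

```julia
u = 0xFFFFFFFFFFFFFFFB  # hypothetical out-of-range value; low 32 bits encode -5

# Checked conversion: Int32(u) would throw an InexactError,
# because u does not fit in an Int32.

# Truncating conversion: keep the low 32 bits, reinterpreted as
# two's complement. This matches what C compilers typically do for
# (int32_t)u, but in Julia the behavior is guaranteed.
s = u % Int32                                    # Int32(-5)

# Explicit bit reinterpretation of the low word gives the same result.
s2 = reinterpret(Int32, UInt32(u & 0xFFFFFFFF))  # Int32(-5)
```

Making the truncation explicit on both sides of a C-to-Julia port removes the implementation-defined step from the comparison.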

GPT doesn’t always give useful answers, but it can usually give some direction that I hadn’t considered and show me things I didn’t know I didn’t know.


Interestingly, because of the Slack black hole, I prefer to ask simple questions on Slack, such as “How do I fix this error?”

4 Likes

Before getting to LLMs: questions about Julia performance could be reduced by at least 10% if we had a pinned thread asking posters to always check whether they're benchmarking with non-const global variables. :grinning:
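A minimal sketch of that pitfall (hypothetical names; the fix is the usual one from the manual's Performance Tips):

```julia
data = rand(1000)         # non-const global: its type can change at any
f() = sum(data)           # time, so f() is compiled with slow, boxed access

const cdata = rand(1000)  # const global: the type is fixed,
g() = sum(cdata)          # so g() compiles to fast, type-stable code

h(v) = sum(v)             # best: pass the data as an argument
```

Benchmarking `f` against `g` or `h` (e.g. with BenchmarkTools' `@btime`, interpolating globals as `$data`) typically makes the gap obvious immediately.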

3 Likes