No, I think the biggest problem is AI-generated content that is not marked as such and is passed off as the poster’s own answer.
Wrong answers from ChatGPT tend to be more misleading than wrong answers from humans: ChatGPT’s answers are often well written in a convincing style, with clean code, and appear well-researched and in-depth, yet are wrong or misleading below the surface. Poor answers from human posters are much easier to spot.
In fact, I would compare ChatGPT’s answers to those of a well-informed human who deliberately tries to mislead in subtle ways, which is much worse than an answer that is simply wrong.