On StackOverflow, it’s common for newbie questions to get downvoted, closed, and mocked in the comments. To be fair, many of these questions are not good and show that the asker didn’t follow the guidelines. Even so, StackOverflow is often hostile to new programmers. To be honest, I’m surprised that ChatGPT didn’t somehow learn this bad behavior.
ChatGPT answers are sometimes totally wrong, and they’ll be wrong even more often given the way newbies phrase their questions. If they weren’t, StackOverflow wouldn’t have had to ban answers generated from chatbots. Still, I think ChatGPT is a better experience because it’s fast and synchronous, which makes it iterative. Of course, none of that helps if the suggested code can’t be used.
If I were StackOverflow, I might consider how LLMs could help newbies write a better question, one that humans will answer when the LLM can’t answer it itself. Let the user iterate privately, and then have the LLM propose a question based on a system prompt that understands StackOverflow guidelines. Normally, I’d expect the LLM to be able to answer at that point, but just yesterday I ran into a problem where it kept hallucinating an API that was slightly wrong. This kind of thing happens to me often with ChatGPT. In a lot of cases, I can guess the real API or search for it in the docs, but a newer programmer might not be able to do that.
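To make that concrete, here’s a rough sketch of what the “propose a question” step could look like. Everything here is made up for illustration: the guidelines prompt, the model name, and the `draft_question` helper are assumptions, and I’m just assuming a standard chat-completion style API rather than describing anything StackOverflow actually does.

```python
# Sketch: an LLM drafts a StackOverflow-ready question after the user iterates privately.
# The system prompt, model name, and helper are hypothetical; only the workflow matters.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GUIDELINES_PROMPT = """You help programmers draft questions for StackOverflow.
Follow the site's guidelines: include a minimal reproducible example,
state what was tried and what was expected, and write a specific title.
If you can answer the problem directly, do so; otherwise, produce a
well-formed question the user can post."""

def draft_question(conversation: list[dict]) -> str:
    """Take the user's private back-and-forth and return either an answer
    or a StackOverflow-ready question draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": GUIDELINES_PROMPT}, *conversation],
    )
    return response.choices[0].message.content

# The user iterates privately, then asks for a draft to post.
chat = [
    {"role": "user", "content": "My Python list comprehension raises a NameError..."},
]
print(draft_question(chat))
```

The point isn’t the specific API; it’s that the private iteration happens with the model, and only the cleaned-up question (if one is still needed) ever reaches human answerers.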