The biggest tell that you're chatting with an AI and not a human is that it talks too much.
Yesterday, to start off a session with ChatGPT, I asked a yes/no question and it responded with a “Yes” followed by another 325 words. No person would do this.
Another version of this same behavior is when you ask an LLM a vague question—it answers it! No questions. Just an answer. Again, people don’t usually behave this way.
It’s odd, because vague questions often get called out in the online forums that were used to train these models. It’s nice that the LLM isn’t a jerk, but asking a clarifying question is basic “intelligent” behavior, and these models haven’t mastered it yet.