LLM chatbots are bad at some things. Some of this is intentional. For example, we don’t want chatbots to generate hate speech. But, some things are definitely not intentional, like when they make stuff up. Chatbots also fail at writing non-generic text. It’s amazing that they can write coherent text at all, but they can’t compete with good writers.
To get around some of these limitations, we have invented a field called “prompt engineering”, which use convoluted requests to get the chatbot to do something it doesn’t do well (by design or not). For example, LLM hackers have created DAN prompts that jailbreak the AI out of its own safety net. We have also seen the leaked prompts that the AI companies use to set up the safety net in the first place. Outside of safety features, prompt engineers have also found clever ways of trying to get the LLM to question its own fact assertions to make it less likely that it will hallucinate.
Based on the success of these prompts, it looks like a new field is emerging. We’re starting to see job openings for prompt engineers. YouTube keeps recommending that I watch prompt hacking videos. Despite that, I don’t think that this will actually be a thing.
All of the incentives are there for chatbot makers to just make chatbots better with simple prompts. If we think chatbots are going to approach human-level intelligence, then we’ll need prompt engineers as much as we need them now for humans, which is “not at all.”
Prompt engineering is not only a dead end, it’s a security hole.