I just had a conversation with ChatGPT that ended with it making product recommendations. I was asking about documentation, so it suggested Confluence and Notion.
Those are good options, and they do match what I was asking about. But if ChatGPT is going to make product recommendations, companies will try to game it, just as they do with search engines.
Before Google, search engines ranked pages by the count of keywords in the document. That was easy to game by simply stuffing the page with keywords. Google solved this by ranking results by backlinks, reasoning that the pages with the most incoming links must be better because a backlink was like a vote of confidence. Unfortunately, that was also easy to game. The last 20+ years have been a struggle between Google trying to rank results correctly and SEO experts trying to rank higher than their pages deserve. There is an enormous amount of money fueling both sides.
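To make the backlink idea concrete, here is a toy sketch of that kind of ranking, a simplified PageRank-style power iteration. The link graph and the damping value are invented for the example; this illustrates the idea, not Google's actual algorithm.

```python
# Toy illustration of ranking by backlinks instead of keyword counts.
# Simplified PageRank-style power iteration; the link graph below is
# invented for the example.

links = {            # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}   # start every page equal
damping = 0.85                                # commonly used damping factor

for _ in range(50):                           # iterate until ranks settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share         # each backlink acts as a vote
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))
# "c" comes out on top because the most pages link to it.
```

The point of the illustration: a page's rank depends on who links to it, not on what it says about itself, which is exactly why link farms became the next thing to game.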
We are just getting started with chatbots. I don't know why it specifically chose Confluence and Notion, but since it's a probability engine, the amount of text about a product in the training set is likely a factor. It understands sentiment, so it can tell good reviews from bad ones, and it knows that a recommendation should offer good options.
But does it understand authority, bias, expertise, and so on? If I stuff the internet with thousands of glowing reviews of my product (even if no human would ever read them), would ChatGPT start recommending it? The chatbot equivalents of SEO experts will be trying to figure this out.
Since ChatGPT's training data is currently stuck in January 2022 (about a year ago), you might not be able to influence its suggestions for a year or so. The learning cycle is very long right now. But that also means it probably can't recover quickly from being gamed, either. Hopefully, models can be developed that resist gaming.