#13 – How to Avoid Losing Your Credibility to Chatbots and Artificial Intelligence-Generated Content

How to avoid losing your credibility to an AI content generation chatbot
26 Jun '23


2023 is the year Artificial Intelligence-enabled chatbots reached new levels in their ability to automatically generate sophisticated content, from computer code to academic articles, and even to pass the SAT college entrance examination. The most widely celebrated chatbot, ChatGPT, seemed to be the best of this next generation of AI thanks to its ability to mimic natural speech.

With the release of ChatGPT, a wave of anxiety swept through the ranks of knowledge workers who suddenly realized they could be replaced by AI, just as their blue-collar colleagues in manufacturing were displaced by automation.  

Will ChatGPT and other AI chatbots live up to the hype?

Here Comes The Judge 

AI-enabled chatbots can indeed improve productivity and efficiency, but even the most advanced ones, including ChatGPT, have significant limitations.

In one of the best examples of how ChatGPT can seem helpful while being wildly inaccurate, an attorney recently used it to create a court filing full of citations to nonexistent cases.

“It seemed clear when we didn’t recognize any of the cases in their opposition brief that something was amiss,” Avianca’s lawyer Bart Banino, of Condon & Forsyth, told CBS MoneyWatch. “We figured it was some sort of chatbot of some kind.” 

https://www.cbsnews.com/news/lawyer-chatgpt-court-filing-avianca/ 

The presiding judge has yet to rule on sanctions against the attorney and his firm, but the damage to the law firm’s reputation may be worse than any penalty the court imposes.

A Word About ChatGPT 

In our experience, ChatGPT is helpful for structuring content in a grammatically proper way; however, it has some glaring drawbacks. It is heavily dependent on content that already exists on the Internet. If you’re marketing an innovative solution, ChatGPT won’t be able to associate the benefits of your solution with larger trends, which is critical when selling something that fixes a problem the market doesn’t yet know it has (e.g., reducing oxygen in tank vapors to eliminate tank flaring). Context and relevance are everything, and if something does not already exist in the virtual world, chances are ChatGPT doesn’t know about the real-world problem to be solved.

Worse, ChatGPT seems to have a very high opinion of its own work and exhibits something like confirmation bias. As in the legal example above, we have found it will simply make up facts, events, and statistics to support its original thesis, and it takes several challenges before it admits it got something wrong. I discovered this is especially true when researching and writing on a controversial topic. Maybe AI really is more human than we think.

Additionally, the quality of the content ChatGPT produces depends on the quality of the questions it is asked. Previous AI content generators we evaluated put out content heavily dosed with adjectives, flowery language, and stilted prose. ChatGPT writes better prose than other AI chatbots, but that does not necessarily make it good content. To get something truly useful out of ChatGPT, it still takes a knowledgeable person to ask the right questions. Doesn’t that defeat the purpose of AI?

What else is wrong with content generated by ChatGPT? I decided to ask the chatbot itself and posed this question to it: “Describe the limitations of ChatGPT and how to mitigate them.”

Here is the reply from ChatGPT: 

ChatGPT, like any other AI language model, has several limitations that affect its performance and capabilities. Here are some of the key limitations and potential ways to mitigate them: 

  1. Lack of real-world knowledge: ChatGPT’s training data only goes up until September 2021, which means it lacks awareness of events, advancements, or discoveries that have occurred after that time. To mitigate this limitation, incorporating updated training data would be necessary to improve the model’s knowledge of current events and developments. 
  2. Generating inaccurate or misleading information: ChatGPT generates responses based on patterns it has learned from the training data, but it does not have the ability to verify the accuracy of the information it provides. It can sometimes produce responses that are factually incorrect or misleading. One way to mitigate this is by incorporating fact-checking mechanisms that can verify and validate the information generated by the model. 
  3. Sensitivity to input phrasing: ChatGPT can be sensitive to the specific phrasing or wording of the input. Small changes in the way a question is asked may result in different or inconsistent responses. This limitation can be mitigated by providing more context and clarifications in the input, making the model’s understanding more robust. 
  4. Tendency to be verbose or overuse certain phrases: ChatGPT can sometimes generate excessively long or redundant responses. It may also overuse certain phrases or provide explanations that are unnecessary. To mitigate this limitation, the model can be fine-tuned or guided to prioritize concise and coherent responses. Techniques like beam search or length normalization can also be employed to control response length. 
  5. Lack of commonsense reasoning and context understanding: While ChatGPT has the ability to generate coherent responses, it may struggle with understanding context and applying commonsense reasoning. It can sometimes provide answers that technically make sense but are not logically or contextually appropriate. Improving the model’s understanding of context and incorporating external knowledge bases could help mitigate this limitation. 
  6. Bias and ethical concerns: Language models like ChatGPT can unintentionally reflect biases present in the training data, which can lead to biased or unfair responses. Mitigating bias requires careful curation of training data and ongoing monitoring of the model’s output for potential biases. Regularly updating and diversifying the training data can help reduce bias and promote fairness. 
  7. Inability to handle ambiguous queries: ChatGPT can struggle with ambiguous or vague queries, often requiring specific and well-defined input to provide accurate responses. Mitigating this limitation may involve improving the model’s ability to ask clarifying questions when faced with ambiguity or developing techniques to handle uncertainty and ambiguity more effectively. 

It’s important to note that while there are potential mitigation strategies, fully overcoming these limitations is an ongoing research challenge. Continued research, development, and feedback from users are essential to address these limitations and improve the performance and capabilities of language models like ChatGPT. 

I could not agree more with ChatGPT’s self-assessment!  

AI Content Generation IRL (In Real Life) 

If you made it this far in the article, then you might think we are anti-technology, but nothing could be further from the truth. We are not Luddites, resistant to adopting new technologies and processes. On the contrary, we are quick to adopt tools that help us boost our own productivity through better collaboration, automating routine tasks, and gaining new insights into buyer behavior. Our opinions are based on our real-world experiences with the virtual world of AI content generators.  

In one situation, a client engaged us to help them generate blog posts and other industry-specific content to boost their Thought Leadership credibility and organic SEO rankings. They gave us access to a library of approximately 100 blog articles previously written by the outgoing agency. We had high hopes that we could leverage the existing work by supplementing it with our industry knowledge and some light editing.

We were dead wrong. Every blog was awkwardly written, overwrought, plagued with factual inaccuracies, and in some cases the headings had absolutely nothing to do with the content they were supposed to introduce. When we asked where the content came from, the client replied, “I think they used an AI content generator.” Case closed.

Sleeping With One Eye Open 

White-collar workers can breathe a (temporary) sigh of relief. AI is not ready to replace them just yet, but humans are working diligently on the next generation of AI chatbots. OpenAI, the company that makes ChatGPT, is aware of the chatbot’s limitations and is already working on fixing its penchant for “hallucinations.”

Until then, will the next generation of knowledge workers be subject matter experts, skilled in asking AI chatbots the right questions and capable of checking their work?

All of this raises the question: why not just use a knowledgeable human in the first place?

Let’s Talk! Contact us today to learn more about how Content Marketing can help you tell a story that matters and achieve your marketing objectives.

About Prism Group

Prism Group is a full-spectrum, multi-talented B2B agency focused on Energy Tech, Climate Tech, Oil & Gas, Oilfield Services and Media markets. Our work is telling stories that matter, because our clients are working to change the world for the better. What makes us different is that we put into action the strategies we recommend – we get it done, so you can get results and increase the value of your business.

CONTACT
James Constas
President
Prism Group
JConstas@1PrismGroup.com
