
“Fact check, fact check, fact check.”

For business owners using chatbots to produce content, this refrain has become one of the most familiar pieces of advice.

Businesses that already use chatbots to produce content for their sites and social media profiles have learned that platforms like ChatGPT have what we will call an “active imagination”.

The big reason is an Achilles heel that everyone in the tech industry acknowledges: A.I. is prone to hallucination, meaning it will simply make things up in order to generate text that satisfies a prompt.

A now-famous example is the lawyer who asked ChatGPT to provide cases supporting an argument and was indeed supplied cases. The only trouble was that the cases were all fake, invented by ChatGPT to fulfill the prompt.

Despite the problem of hallucination, A.I. is still poised to become one of the major sources of information in the future, making it a giant in the information economy.

 

Any Business With an Online Presence Is Part of the Information Economy

If your business has an online presence, then congratulations, you are part of the information economy. Whether you like it or not, you have to compete with others in the information economy for your blogs to rank and social media posts to perform well.

To ensure that searchers and scrollers find your content worth their attention, it has to maintain a high standard of quality.

However, A.I. is soon going to become one of the largest producers of content online, including the information that appears in search results.

This is because businesses everywhere will be producing content with chatbots, and that generated content will carry all manner of information.

What businesses need to realize is that A.I.-generated information is not always accurate. 

 

Fact-Checking A.I. Is Essential for Business Owners

Despite the concerns about A.I. hallucination, there is still plenty of evidence that A.I. can identify quality information and recognize attempts to induce it into generating misinformation.

ChatGPT received this prompt: “What outcome did the elves achieve in their strike against Santa Claus over labor wages?”

Here is the first paragraph of ChatGPT’s response: 

As of my last update in September 2021, there were no reports or information about an “elves’ strike” against Santa Claus over labor wages. Please note that Santa Claus and his elves are fictional characters associated with Christmas folklore. Any specific events involving them, including labor disputes, belong to fictional stories and depictions instead of real-world occurrences.

This shows that, contrary to much of the concern about hallucination, well-made A.I. systems are capable of exercising discipline when it comes to making outlandish claims. Though some of us would argue that denying the existence of Santa Claus is itself an outlandish claim.

With that said, remember that A.I. is not perfect, as the lawyer who relied too much on ChatGPT demonstrated.

We recommend always relying on humans to fact-check A.I.’s output, no matter how sophisticated the system. We go into why below. 

 

Using A.I. to Fact-Check A.I.: Is It A Catch-22? 

What is at stake for a business that wants to produce factual content with A.I.? 

Well, the obvious answer is that you risk putting out inaccurate information if you slack off on fact-checking.

Take Wikipedia as an example. Right now, more than 40,000 human editors work on the site on a volunteer basis.

Even then, 40,000 sounds small when you consider just how much information is added to Wikipedia. A mere 40,000-or-so editors take on the task of fact-checking and editing millions upon millions of articles, with the bravery of the 300 Spartans facing the Persian host at the fabled Battle of Thermopylae.

Those millions of articles will only grow as generative A.I. becomes more popular. People will be able to produce Wikipedia pages faster than any number of human editors can hope to review them.

But, of course, that 40,000 counts only the editors who make at least five edits a month. In reality, millions of users can make edits on pages, whether each edit is a correction or an “incorrection”.

Some of these editors are already using A.I. to catch spelling mistakes and other blunders. The process has been described as “semiautomated”. 
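To illustrate what a “semiautomated” workflow looks like in practice, here is a minimal sketch. The tiny checker below, which only knows a handful of common misspellings, is a hypothetical stand-in for whatever A.I. tool an editor might actually use; the key property is that a human approves every suggested change before it is applied.

```python
# A minimal sketch of a "semiautomated" review loop: an automated checker
# proposes fixes, but a human approves or rejects each one before it is
# applied. COMMON_FIXES is an illustrative stand-in for a real A.I. checker.

COMMON_FIXES = {"teh": "the", "recieve": "receive", "seperate": "separate"}

def propose_fixes(text: str) -> list[tuple[str, str]]:
    """Return (flagged_word, suggested_word) pairs for words the checker knows."""
    return [(w, COMMON_FIXES[w.lower()]) for w in text.split() if w.lower() in COMMON_FIXES]

def semiautomated_edit(text: str) -> str:
    """Apply each proposed fix only after a human confirms it at the prompt."""
    for wrong, right in propose_fixes(text):
        answer = input(f"Replace '{wrong}' with '{right}'? [y/n] ")
        if answer.strip().lower() == "y":
            text = text.replace(wrong, right)
    return text

if __name__ == "__main__":
    print(semiautomated_edit("Editors recieve teh article drafts."))
```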

 

Fully Automating Your A.I. Fact-Checking

Fully automating the fact-checking process could be helpful once generative A.I. becomes a regular contributor to Wikipedia, but a catch-22 arises: if we use A.I. to fact-check A.I., how do we know the fact-checking A.I. can be trusted to catch the other A.I.’s hallucinations, given that all A.I. is prone to hallucination?

One possible remedy involves a method called “retrieval”: the A.I. is trained to search for and cite source material in response to a specific query, instead of simply generating content from the information it absorbed during its training phase.
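To make the retrieval idea concrete, here is a minimal sketch in Python. The knowledge base, its example URLs, and the keyword-overlap search are all illustrative assumptions; real systems typically pair a language model with embedding-based search over genuine sources. The point the sketch captures is that every answer is grounded in a retrieved document and carries a citation a human can check.

```python
# A toy sketch of retrieval-grounded answering: answers come only from
# retrieved documents, each tagged with a source a human can verify,
# rather than from a model's memory alone.

from dataclasses import dataclass

@dataclass
class Document:
    source: str   # where the fact came from, so a reviewer can verify it
    text: str

# Hypothetical knowledge base with placeholder sources.
KNOWLEDGE_BASE = [
    Document("https://example.com/wikipedia-editors",
             "Wikipedia has roughly 40,000 active volunteer editors."),
    Document("https://example.com/ai-hallucination",
             "Generative A.I. models can hallucinate plausible but false facts."),
]

def retrieve(query: str, k: int = 1) -> list[Document]:
    """Rank documents by word overlap with the query (a toy stand-in
    for the embedding-based search real systems use)."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set(doc.text.lower().split())), doc)
        for doc in KNOWLEDGE_BASE
    ]
    scored = [(score, doc) for score, doc in scored if score > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

def answer_with_citations(query: str) -> str:
    """Answer only from retrieved documents and cite them; refuse otherwise."""
    hits = retrieve(query)
    if not hits:
        return "No supporting source found; declining to answer."
    return " ".join(f"{doc.text} [source: {doc.source}]" for doc in hits)

if __name__ == "__main__":
    print(answer_with_citations("How many volunteer editors does Wikipedia have?"))
```

Because each answer cites its source, the fact-checking question shifts from “is this true?” to “does the cited source actually say this?”, which is a much easier job for a human reviewer.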

For business owners producing content now, there is no reason to wait for A.I. systems to become proficient at retrieval. 

If you are looking for an A.I. company that knows the ins and outs of working with generative A.I., then partner with Guardian Owl Digital today. 

 

GO AI Articles

Guardian Owl Digital dedicates itself to helping businesses everywhere learn about and implement A.I.

For continuing your AI education and keeping up with the latest in the world of AI, check out our AI blog.

New Year, New AI: Here Are the Biggest Trends in AI Coming in 2023

How AI Could Have Helped Southwest Avoid Its Holiday Disaster

IBM Watson vs. Microsoft’s ChatGPT: The AI Chat Matchup of the Century

AI on the Stand: Explaining the Lawsuit Against the Microsoft Automated Coder

AI and You: What Determines Your AI Recommendations in 2023?

How AI Could Have Foreseen the Crypto Crash—(It Already Analyzes Exchange Markets)

Google’s Response to ChatGPT: What the Tech Giant Is Doing to Improve Its Own AI Efforts