AI is swiftly growing in use throughout pretty much every industry. Sometimes the tools are specialized, like SEO-focused analysis tools that offer suggestions on how to maximize your Google search ranking. Other times, they're more generalized, like ChatGPT, used more as toys than as serious tools. Either way, there are a lot of questions you may have about the validity of the information they give you.
Can AI be used for fact-checking? Some people think so, while others think not. Let's look at the issue from a few different angles and see what you need to do as a business or as a freelance writer looking to use AI as part of your toolset.
No. At least, not with the common tools currently in use, like Jasper or ChatGPT.
Current AI content generators were trained on large pools of writing, and as a consequence, they can generate very good-sounding content. However, they have a tenuous grasp of fact at best. There is a huge array of examples of people asking an AI why some fact or another is true and getting back nonsensical answers.
It's one thing to know a fact ahead of time, ask the AI about the fact, and get a verifiably incorrect answer. It's another to ask it to generate content out of whole cloth and hope it generates something accurate. You generally need a fact-checker and an AI editor or AI editing service to go over the content an AI writer generates to make sure it's not making claims you don't want to support.
Here's what OpenAI says right on the ChatGPT homepage:
"ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there's currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows."
AI doesn't have fact-checking built into it. It can say factually true things, of course. It can also say things that aren't true but assume they are.
The key thing to remember is that there's no intelligence behind AI, despite the name. It's all math. All language-model AIs do is perform a vast set of calculations to predict the most likely words that will follow the previous words, with some amount of look-back at earlier words to keep the whole thing coherent. The AI doesn't think anything or know anything; it just puts words together in an order based on the writing of real people that has been fed into the machine.
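To see why "it's all math" rules out built-in fact-checking, here's a deliberately tiny sketch of the underlying idea: a toy bigram model that counts which word follows which in a corpus and then predicts the most frequent follower. Real systems like GPT use vastly larger models and context windows, but the principle, predicting the next word from statistics over prior text with no concept of truth, is the same. The corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the corpus,
# then predict the statistically most likely next word.
corpus = "the sky is blue and the grass is green and the sky is blue".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Returns whichever word most often followed `word` in the corpus.
    # Note there is no check anywhere for whether the result is *true*;
    # only for whether it is statistically likely.
    return followers[word].most_common(1)[0][0]

print(predict_next("sky"))  # "is"
print(predict_next("is"))   # "blue", because it occurred more often than "green"
```

If the training text had said "the sky is green" more often, the model would happily predict that instead; frequency, not accuracy, drives the output.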
Probably, eventually, maybe.
There are currently a variety of companies pushing to make fact-checking AI systems for use in trust analysis, journalism, verifiability, and other areas. Here's an example of one.
While models like ChatGPT are designed and trained with natural language as the goal and reinforcement towards things that sound good with no inherent reliance on fact, other AI models can be trained with other forms of reinforcement. If one were to create an AI system that was reinforced with factual accuracy as a goal, it could potentially be used for fact-checking.
There are a few significant limitations to this, however.
First of all, it relies on the training data and reinforcement to be correct. If you trained the AI to believe that the sky is pink polka-dotted, it would "know" that as fact and fact-check anything saying the sky is blue as wrong. The AI doesn't have eyes and can't observe the world, identify facts about the world, or use that information on its own. It can only "know" what it is told is correct.
Secondly, the AI can only know things that the people training it know. If no one on the development team is a medical expert, the AI is unlikely to be able to verify medical facts at all. Sure, it could be trained on PubMed, but there's a lot of junk on PubMed, and knowing how to filter the good papers from the bad is a key skill an AI won't have.
Third, there's a lot about the world that people may believe is true when it isn't. Do you "know" that a bull will get angry and charge at a red flag, or that people only use 10% of their brains, or that touching a lost baby bird means the mother will reject it? Common misconceptions can be programmed into an AI because people don't know what they don't know.
Finally, there's a lot about reality that isn't objective. A simple example is the classic illustration of point of view: with a symbol painted on the ground, someone standing on one side of it will see a 6, while someone on the other side sees a 9. Neither one is necessarily right or wrong, and an AI can't tell you which is true.
These are the kinds of hurdles that any fact-checking AI will have to overcome if it wants to be trusted itself.
None of the problems above have actually stopped anyone from making AI-based fact-checking systems. So, how do those systems work, and how do they solve those problems?
Essentially, there are three kinds of AI fact-checking systems.
The first is trained on a corpus of claims and can search its training material to identify inconsistencies and judge whether something is likely to be correct. Claims that are verifiably incorrect can be spotted, claims that are questionable may be flagged, and claims that are verifiably correct can be given a pass. Anything that falls outside that paradigm, whether a statement that can't be recognized as a claim or a claim outside the AI's area of expertise, will be flagged as unavailable for checking.
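The triage logic just described can be sketched in a few lines. This is a hypothetical illustration only: the hard-coded dictionary stands in for the system's training material, and the confidence threshold is an invented parameter; a real system would use retrieval over a large corpus and a learned classifier.

```python
# Stand-in "knowledge base" mapping known claims to their truth value.
# In a real system this would be retrieval over training material,
# not a hard-coded dict.
KNOWN_FACTS = {
    "the sky is blue": True,
    "humans use 10% of their brains": False,
}

def triage(claim, confidence=1.0):
    """Sort a claim into one of the four buckets described above."""
    if claim not in KNOWN_FACTS:
        return "unavailable"   # outside the system's area of expertise
    if confidence < 0.8:
        return "questionable"  # flag for human review
    return "pass" if KNOWN_FACTS[claim] else "incorrect"

print(triage("the sky is blue"))                 # pass
print(triage("humans use 10% of their brains"))  # incorrect
print(triage("bulls hate the color red"))        # unavailable
```

Note that the "unavailable" bucket does the honest work here: a system that refuses to rule on claims outside its knowledge is far more useful than one that guesses.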
There are already a number of systems like this in development and in use at small scale, but nothing large enough to be a counterpart to something like GPT.
The second is more of an analysis of the world surrounding the claim. The AI doesn't attempt to judge whether the claim itself is true; rather, it tries to analyze the source of the claim and identify whether that source is likely to be trustworthy. This is kind of like what Google already does with their E-A-T analysis, but more AI-driven.
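A minimal sketch of that source-scoring idea might look like the following. The signals and point values here are entirely invented for illustration; a real system would learn its signals and weights from labeled data rather than hard-coding them.

```python
# Hypothetical source-trust heuristic: score the *source* of a claim
# rather than the claim itself, using a few trust signals.
def trust_score(source):
    points = 0
    points += 4 if source.get("cites_references") else 0
    points += 3 if source.get("named_expert_author") else 0
    points += 2 if source.get("established_domain") else 0
    points += 1 if source.get("corrections_policy") else 0
    return points  # 0 (no trust signals) to 10 (all signals present)

anonymous_blog = {"established_domain": True}
medical_journal = {"cites_references": True, "named_expert_author": True,
                   "established_domain": True, "corrections_policy": True}

print(trust_score(anonymous_blog))   # 2
print(trust_score(medical_journal))  # 10
```

The appeal of this approach is that it sidesteps the hardest problem, deciding truth, and instead answers a more tractable question: does this source behave like sources that tend to be right?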
The third is just to be wrong all the time. Plenty of startups are trying to layer various levels of fluff on top of a core system like GPT and claim it can do fact-checking, looking to sell a bunch of licenses before they fold under the pressure of being frauds.
This is all for written content, by the way. There are different AI systems entirely that are trained on recognizing other AI-generated content and AI systems created to identify when an image or video has been edited or manipulated. These have their own benefits and their own challenges, but while they're adjacent to fact-checking, they aren't quite the same, so I've mostly left them out of the discussion.
Very likely, yes.
AI-generated content is going to be very prevalent in the coming years, and that presents two major issues.
The first is the above-mentioned lack of factual verification in common AI systems like GPT. While it's possible that these systems will be trained to be more factual once they've gotten the realistic presentation down, we aren't there yet.
The second and more insidious problem is groups deliberately pushing misinformation. We're seeing this already, from election falsehoods to pandemic misinformation to whatever other agenda anyone with access to a content-generating AI wants to push. An AI can be trained to say whatever its handlers want it to say, and that can include completely incorrect information pushed for an agenda.
There is very likely going to be a crisis of trust regarding content in the near future, more so than there already is with the way viral misinformation can spread on platforms like Facebook and Twitter. It's a problem that will need to be addressed, and there's a significant risk that the people in charge of addressing it aren't moving fast enough.
Of course, it's much easier to lie than it is to call out a lie, so this is a difficult arms race to be involved in. I don't envy the AI designers on the front lines.
Not anytime soon.
The truth of the matter is that AI needs validation and verification to be trained properly, and only humans can do that. AI isn't self-learning or self-sustaining. It is, again, all math at its core.
Training an AI to even recognize the claims being made is an extremely difficult task. Training it to identify the claims being made, validate whether or not they're true, and present sources to prove it is another thing entirely.
We aren't there yet. We probably won't be there for years, though a lot of companies are going to make bold claims in the near future.
There will be a lot of content created by AI and published everywhere on the web in the very near future. There already is a lot more of it than many of us probably want to admit. Understanding its limitations will be a big part of using it effectively if that's what you want to do with the tool.
If you're worried about fact-checking, the best thing you can do is hire subject matter experts you can trust.
There's a reason sites like WebMD hire doctors to fact-check, why Twitter embeds fact-checks from verifiable sources, and why sites like Snopes and PolitiFact exist.
People are better at recognizing the claims being made, identifying the holes in those claims, and then either presenting reasons why they aren't valid, citing sources to prove they are, or simply fixing them before publication.
If you're a business producing content, whether you're hiring freelancers to write or using an AI to generate that content, a fact-checker should be part of your editorial process. For the moment, only a real human, and one who is an expert in the subject in the first place, can appropriately handle fact-checking.
More than that, you need fact-checking that holds weight. Either you're staking your business reputation on the facts you present, or you have a third-party individual who is staking their reputation on it. Of course, then you need to validate that those people know what they're talking about and that their reputation is worth anything to them in the first place, or even that they're real. Still, it's a starting point for human fact-checking.
If you're a content creator and you need fact-checking, hiring an editor to handle it is generally your best bet. Even if you use an AI tool to validate the facts in your piece, you still need someone to go over the AI results and validate those because AI can't necessarily be trusted yet.
It's a very complicated problem. I recommend avoiding it entirely until the AI ecosystem gets a lot better than it is now and a few different major problems are shaken out and fixed, if they even can be addressed with current AI models.
Until then, why not check out my job board? I have a ton of resources available for hiring writers of all different kinds, as well as tips for writers to find clients willing to pay them what they're worth. On top of that, I'm working to make my job board the best place for writers, with clients and writers both frequenting the lists. I can't do it without you, though, so check it out!
Additionally, if you ever have any comments, questions, or concerns, you're always more than free to leave those in the comments section down below. I'd love to hear what you think, and would be more than happy to answer any of your potential questions.