How AI-Powered Fact-Checking Can Help Combat Misinformation


In recent years, misinformation and “fake news” have moved into the harsh spotlight as social media has made it easy to spread false or misleading claims instantly. 

From health misinformation that discourages vaccination to conspiracy theories with serious real-world consequences, false claims have eroded trust in many institutions. New methods are needed to restore trust and truth in the contemporary information ecosystem.

AI can automate parts of the fact-checking workflow and help everyday internet users judge whether claims circulating online are credible.

While AI fact-checkers aren’t perfect, particularly when a claim demands a trained subject matter expert to adjudicate, they can help combat basic misinformation at scale.

In this article, we will discuss the problem of misinformation and why human-driven fact-checking hasn’t kept up. We will then delve into evolving AI solutions for misinformation detection and automated fact-checking, with a short list of startups and initiatives working in this space.

We will close with the remaining challenges and limitations of AI fact-checkers and assess their potential impact as these technologies mature.

 

The Rising Tide of Misinformation

As social media use and online news consumption have grown, misinformation has taken off. To combat this, companies are turning to AI/ML development services to help detect and filter out fake news more effectively.

In the past, inaccurate claims might have spread only by word of mouth or reached a small community through limited-circulation publications. Today’s interconnected digital media lets practically any conspiracy theory, or intentionally fabricated ‘fake news’ story, spread like wildfire.

A major study by MIT researchers published in March 2018 found that false news stories were 70% more likely to be retweeted on Twitter than accurate news, reaching more people faster and deeper than corrections. 

Software bots and human trolls exploit this virality, weaponizing misinformation to sow discord and erode trust for political or financial ends.

Moreover, a 2024 report from the Center for Countering Digital Hate (CCDH) found that X (the platform formerly known as Twitter) failed to crack down on misinformation during the U.S. election cycle.

In the posts the CCDH sampled, 74% of misleading election claims carried no visible corrections, and those posts amassed 2.2 billion views. That illustrates the challenge of stopping the spread of false information on social media platforms.

The results can be chilling. Health misinformation likely discouraged many from getting COVID-19 vaccines, worsening the pandemic. Politically motivated conspiracy theories have undermined faith in long-standing democratic institutions. 

Most Americans report feeling overwhelmed by the amount of mis- and disinformation they encounter online.

Fact-checking organizations provide rigorous verification of claims made by politicians and viral rumors. However, they lack the scale and speed to catch every falsity spreading online. There are simply too many claims needing verification to keep up.

AI offers an opportunity to automate pieces of the fact-checking process and provide real-time signals to users about content credibility when and where misinformation spreads. Before exploring the promise of AI fact-checkers, it is important to understand why human-driven approaches have fallen behind.

 

The Limits of Manual Fact-Checking

Many excellent fact-checking organizations already review claims made by politicians, viral rumors, news headlines, and more. Groups like PolitiFact, FactCheck.org, and Snopes have been reporting on truth and fiction in news and politics for over a decade. The Associated Press, Reuters, and other traditional news organizations also run dedicated fact-checking teams.

These groups follow rigorous processes to assess the accuracy of statements and rate or clearly label those found to be misleading or false. This protects audiences from inadvertently consuming and spreading misinformation from otherwise trusted figures in news, politics, or their social circles.

However, manual fact-checking has inherent limitations in the era of rapid online communications:

  • It lacks scale. No team of human fact-checkers, no matter how skilled, can feasibly keep up with the volume of claims requiring verification, even in a narrow domain like U.S. politics. Emerging tools use AI to surface check-worthy factual claims from massive datasets like social media posts for human review, but there are still far more claims than eyes to review them.
  • It lacks speed. In the time it takes a fact-checker to research a claim and publish their findings, that misinformation may have already spread far and wide. Studies find it takes about six times as long to debunk viral falsehoods as it took them to spread in the first place. AI fact-checking aims for real-time assessments.
  • It often lacks reach. Corrections rarely go viral the way the falsehoods themselves do, so many who are fed misinformation never see them. AI tools aim to attach credibility indicators to content or accounts at the moment misinformation is spreading.
  • It has blind spots. No team has the specialized expertise to fact-check every subjective statement or claim that requires deep subject matter knowledge in niche fields such as science, medicine, or history. AI tools, likewise, need to recognize the limits of their fact-checking capabilities for different kinds of information.
  • It has no memory. Readers must seek out fact-checking organizations on their own. AI software could remember what content a reader has consumed and surface review details at the moment of consumption for articles that, based on past history, are known to be dubious or unreliable.

In summary, human-led fact-checking, while accurate, operates on too small a scale, at too slow a speed, and with too little reach and consistency to solve misinformation alone. 

AI solutions aim to complement manual verification efforts with automation, memory, and widespread visibility. Next, we will explore emerging implementations.

 

Emerging AI Approaches to Fact-Checking and Misinformation Detection

AI software will never replace human discernment and subject matter expertise when evaluating certain complex factual claims. 

However, artificial intelligence does enable certain pieces of the fact verification process to scale far beyond human capabilities.

Jeff Dean, Google AI lead, commented in 2021, “AI isn’t as smart as you think – but it could be.”

Since that time, global concern over misinformation has only accelerated research and investment into AI fact-checking solutions. Areas of focus include:

👉 Automated Claim Detection

The first step in verifying claims at scale is utilizing natural language processing, or NLP, to detect meaningful factual statements within large datasets like news articles or social media posts. 

For example, Duke Reporters’ Lab built ClaimBuster, which analyzes political speeches, debates, and news releases to extract meaningful, check-worthy claims.
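
To make this step concrete, here is a minimal sketch of check-worthy claim detection using off-the-shelf zero-shot classification. This is an illustration only, not ClaimBuster’s actual model; the example sentences and candidate labels are invented.

```python
# Minimal sketch: triaging sentences by check-worthiness with a generic
# zero-shot NLI model. Not ClaimBuster's pipeline; labels are illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

sentences = [
    "Unemployment fell to 3.4% last quarter.",         # verifiable statistic
    "I believe our best days are ahead of us.",        # rhetoric, not checkable
    "The bill cuts education funding by $2 billion.",  # verifiable claim
]

labels = ["verifiable factual claim", "opinion or rhetoric"]

for sentence in sentences:
    result = classifier(sentence, candidate_labels=labels)
    # Labels come back sorted by score; the top one signals check-worthiness.
    print(f"{result['labels'][0]:<25} {result['scores'][0]:.2f}  {sentence}")
```

Sentences ranked as verifiable factual claims would then be queued for human fact-checkers, which is the division of labor tools like ClaimBuster are built around.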

 

👉 Contextual Credibility Indicators

NewsGuard and Logically attach trust and credibility indicators directly to links and search results, steering users toward reliable news sites and articles. 

These indicators rely on trained ML models that evaluate signals such as website transparency, ownership, history of corrections, use of clickbait, and adherence to basic journalistic standards.
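
The sketch below shows the general shape of such a model: a classifier over site-level features that outputs a credibility probability. The features, training rows, and example site are invented for illustration and are far simpler than what NewsGuard or Logically actually use.

```python
# Toy sketch of site-level credibility scoring; features and data invented.
from sklearn.linear_model import LogisticRegression

# Each row: [discloses_ownership, publishes_corrections,
#            clickbait_headline_rate, separates_news_from_opinion]
X_train = [
    [1, 1, 0.05, 1],  # transparent outlet with a corrections record
    [1, 1, 0.10, 1],
    [0, 0, 0.70, 0],  # opaque site leaning heavily on clickbait
    [0, 0, 0.55, 0],
]
y_train = [1, 1, 0, 0]  # 1 = credible, 0 = not credible

model = LogisticRegression().fit(X_train, y_train)

new_site = [[1, 0, 0.40, 0]]  # mixed signals
print(f"Estimated credibility: {model.predict_proba(new_site)[0][1]:.2f}")
```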

 

👉 Automated Fact Extraction and Comparison

Tools like Full Fact and FactMata extract meaningful claims from text and compare them against existing fact checks and reliable information sources to automatically surface mismatches between claims and facts.

These can both expand the reach of existing manual fact checks and detect wholly new unverified claims for human review.
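
As a hedged sketch of the comparison step, the snippet below embeds a new claim and matches it against a small store of existing fact checks by cosine similarity. The model name is a real public checkpoint; the fact-check store and the 0.6 threshold are illustrative.

```python
# Sketch: matching a new claim against known fact checks via embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Previously published fact checks (claim -> verdict), invented here.
fact_checks = {
    "The MMR vaccine causes autism": "False",
    "Drinking bleach cures COVID-19": "False",
}

new_claim = "Vaccines are responsible for rising autism rates"
claim_emb = model.encode(new_claim, convert_to_tensor=True)

for checked_claim, verdict in fact_checks.items():
    known_emb = model.encode(checked_claim, convert_to_tensor=True)
    score = util.cos_sim(claim_emb, known_emb).item()
    if score > 0.6:  # illustrative threshold; tune on labeled claim pairs
        print(f"Likely match ({score:.2f}): '{checked_claim}' -> {verdict}")
```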

 

👉 Claim Similarity Detection

Startups like AI Seer’s Facticity.AI leverage ML to cluster text and claims by similarity to help human fact-checkers identify duplicate and recurring claims so the effort is not wasted verifying the same falsities repeatedly. This increases throughput.
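
A simplified sketch of the clustering idea follows, using TF-IDF vectors and agglomerative clustering. Production systems typically use neural sentence embeddings; the claims and distance threshold here are illustrative.

```python
# Sketch: grouping near-duplicate claims so each cluster is checked once.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

claims = [
    "5G towers spread the coronavirus",
    "Coronavirus is spread by 5G towers",
    "Dead voters decided the election",
    "The election was decided by dead voters",
]

vectors = TfidfVectorizer().fit_transform(claims).toarray()
labels = AgglomerativeClustering(
    n_clusters=None,
    distance_threshold=0.7,  # illustrative; cosine distance is in [0, 1]
    metric="cosine",
    linkage="average",
).fit_predict(vectors)

for cluster_id, claim in sorted(zip(labels, claims)):
    print(cluster_id, claim)
```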

 

👉 Multimodal Claim Analysis

Advances in computer vision and multimedia AI allow systems to analyze the authenticity of image, video, and audio content associated with claims, including detecting “shallow fakes” and cheap editing tricks meant to mislead. 

As digital media manipulation grows more capable, multimodal analysis will play an increasingly important role.
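
As one concrete, if dated, example of such techniques, error level analysis (ELA) is a classic heuristic for spotting locally edited JPEGs: regions resaved at a different compression level stand out in a recompression diff. The sketch below is illustrative only (the input filename is hypothetical), and modern deepfake detection relies on trained neural models rather than hand-built heuristics like this.

```python
# Sketch of error level analysis (ELA); a heuristic, not a deepfake detector.
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    # Re-save at a known JPEG quality, then diff against the original.
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    # Regions that compress differently from their surroundings show up
    # brighter in the diff, which can hint at splicing or local edits.
    return ImageChops.difference(original, resaved)

ela_map = error_level_analysis("suspect_photo.jpg")  # hypothetical input file
ela_map.save("ela_map.png")
```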

 

👉 Bot and Coordinated Inauthentic Account Detection

Detecting automated and coordinated inauthentic accounts helps platforms limit the reach and impact of the misinformation they spread. Indiana University tools such as the Bot Repository use hundreds of signals to identify likely bots. 

ML also helps recognize clusters of accounts that exhibit patterns of inauthentic coordination with one another.
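
A toy sketch of the scoring idea: a classifier over account-level features outputs a bot likelihood. The four features and training rows below are invented for illustration; real systems, as noted above, draw on hundreds of signals.

```python
# Toy bot-likelihood classifier; features and training data are invented.
from sklearn.ensemble import RandomForestClassifier

# Features: [posts_per_day, follower_to_following_ratio,
#            account_age_days, fraction_of_reposts]
X_train = [
    [3,   1.20, 2400, 0.20],  # typical human account
    [8,   0.90, 1100, 0.35],
    [450, 0.01,   12, 0.98],  # high-volume, brand-new, repost-only account
    [600, 0.02,    5, 0.99],
]
y_train = [0, 0, 1, 1]  # 1 = likely bot

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

suspect = [[300, 0.05, 30, 0.90]]
print(f"Bot likelihood: {clf.predict_proba(suspect)[0][1]:.2f}")
```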

 

👉 Misinformation Early Warning Systems

Leveraging multiple signals from content claims, account credibility, virality patterns, and more allows the creation of early warning systems to detect emerging misinformation threats and prioritize claims. 
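
Here is a minimal sketch of what such a triage score might look like, combining normalized signals into a single review priority. The signal names and weights are assumptions for illustration, not a published formula.

```python
# Sketch: fusing signals into a single misinformation-risk score.
def misinformation_risk(check_worthiness: float,
                        source_credibility: float,
                        virality_rate: float,
                        bot_amplification: float) -> float:
    """All inputs normalized to [0, 1]; higher output = review sooner."""
    weights = {"claim": 0.3, "source": 0.2, "virality": 0.3, "bots": 0.2}
    return (weights["claim"] * check_worthiness
            + weights["source"] * (1.0 - source_credibility)  # low cred = risky
            + weights["virality"] * virality_rate
            + weights["bots"] * bot_amplification)

# A fast-spreading claim from a low-credibility source, amplified by bots:
print(f"Risk: {misinformation_risk(0.9, 0.2, 0.8, 0.7):.2f}")  # -> 0.81
```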

These demonstrate only a sample of the innovative ways AI and ML are beginning to make progress on different parts of the complex fact verification and misinformation detection pipeline. Next, we will touch on some notable initiatives before discussing their limitations.

 

👉 Initiatives Advancing AI Fact-Checking

Many promising initiatives from startups, academics, and big tech companies demonstrate the accelerating innovation around applying AI to misinformation and fact-checking. Some examples include:

  • FactMata. A startup utilizing NLP and large databases to automatically contextualize and verify claims in news articles in real time; it won a Duke Reporters’ Lab competition for automated fact-checking.
  • Logically. This startup builds AI tools for various misinformation use cases and also employs human expert journalists in a first-of-its-kind blended fact-checking service.
  • Google Fact Check Explorer. It improves search engine fact-checking using Knowledge Graph entity understanding together with public claims, fact checks, and article relationships (see the API query sketch after this list).
  • Facebook Third Party Fact Checking. It works with vetted external fact-checking partners to use their expertise and findings in tackling misinformation on one of the world’s biggest social media platforms.
  • Wolfram Alpha. Computes expert-level answers on the fly from structured data sources to provide factual clarity on math, science, weather, nutrition, and other niche topics.
  • First Draft News. Global non-profit focused on research and community resources to advance truth in digital news and combat mis- and disinformation through technology, skills building, and coordination.
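
As a small illustration of the Google entry above, the ClaimReview data behind Fact Check Explorer is also queryable through Google’s public Fact Check Tools API. The sketch below assumes a valid (free) API key and uses the documented v1alpha1 search endpoint; the query string is arbitrary.

```python
# Sketch: searching published fact checks via Google's Fact Check Tools API.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; create one in Google Cloud Console
URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

resp = requests.get(URL, params={"query": "vaccines autism", "key": API_KEY})
resp.raise_for_status()

for claim in resp.json().get("claims", []):
    for review in claim.get("claimReview", []):
        print(f"Claim:  {claim.get('text')}")
        print(f"Rating: {review.get('textualRating')}")
        print(f"By:     {review.get('publisher', {}).get('name')}\n")
```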

These are signs of accelerating innovation from both startups and established platforms in using AI and automation to keep truth and facts reliable and available online. 

That said, some critical challenges and limitations must be addressed before AI fact-checking can achieve its full potential.

 

Challenges and Limitations of Current AI Fact Checkers

While great progress is being made, AI fact-checking and misinformation detection remain extremely challenging. 

Even the most advanced systems today have critical weaknesses limiting their real-world performance and potential impact. Some of the biggest challenges include:

1️⃣ Accuracy Issues around Broad Factual Claims

Most current systems do well only with very narrow factual statements that can be easily verified through structured data. Evaluating more complex claims requires deeper semantic understanding that AI still often lacks compared to human discernment.

 

2️⃣ Lack of Subject Matter Expertise

Verifying claims in niche or highly technical subjects such as medicine, economics, or science requires specialized expertise and background knowledge that even today’s broadest AI models lack. 

Human fact-checkers typically either consult Ph.D.s in the relevant field, such as epidemiology or political science, or rely on widely circulated indexes like the ones mentioned earlier.

 

3️⃣ Difficulty Detecting New “True” Claims

Most automated systems are tuned to compare claims against existing verified facts and fact checks to identify mismatches reflecting falsities. Detecting entirely novel factual claims and surfacing previously unpublished truths requires more advanced reasoning.

 

4️⃣ Vulnerability to Language Ambiguity

Natural language is full of ambiguity: sarcasm, metaphor, missing context (for example, a quote stripped of its surroundings), or a simple lack of specificity. Humans naturally fill in these gaps; AI often cannot.

 

5️⃣ Manipulation of Multimodal Content

As fact-checking moves beyond text to images, video, and audio, new vulnerabilities emerge as increasingly realistic generated media appears. At the same time, detection abilities are improving.

 

6️⃣ Gaming Exploits Fooling AI

Purveyors of misinformation actively probe automated systems for detectable patterns and craft content to evade them, for example by masking the signals detectors key on. Maintaining resilience requires ongoing tuning.

 

7️⃣ Blind Trust in Model Outputs

There is always a risk of over-reliance on AI assessments when the unique context and subjectivity of certain claims merit specialized human discernment; a combined human-AI system addresses this best.

In summary, narrow fact checks will continue to be automated, but many other kinds of misinformation and manipulation online will remain challenging for AI for years to come.

Hybrid human-AI approaches are the best way to achieve scale, speed, and accuracy.

 

The Future Impact of AI on Truth and Facts Online

AI is not yet developed enough to fully automate fact-checking, but natural language processing, semantic reasoning, and multimedia analysis are steadily evolving to the point where AI will play a major role in fighting basic misinformation online.

Ongoing research and development across academia, startups, and tech giants focused specifically on misinformation ensures that progress in applying AI to this problem will continue, especially as other language-focused AI use cases create technology spillovers.

Hybrid approaches can offset the weaknesses of each component: human discernment is limited by scale and memory, while AI’s scale and memory remain too narrow to cover enough of the complex problem space. 

For example, AI could rapidly surface thousands of check-worthy factual statements, leaving a small team of specialized human fact-checkers to evaluate only the most salient. 

In turn, human subject matter experts can train AI models on facts and claims in niche focus areas.

As AI capabilities mature in context and reasoning, the accuracy of AI fact-checking is likely to increase, extending it to more complex claims. 

Future AI assistants on our personal devices or messaging apps could serve as reliable first-line information validators, guarding against manipulation.

Societies must balance the desire to fight misinformation against concerns about media ethics, and they should avoid relying too heavily on automated judgment. 

As AI fact-checking evolves, tech companies will need both regulation and self-governance, because the stakes extend well beyond any individual fact. 

With a careful hand, AI-powered fact-checking can be a vital tool for restoring truth and trust in our contemporary information ecosystem.
