Commentary: AI isn’t ready to dispel fake news. In fact, it might make things worse

Automated real-time fact-checking is the future, but current AI still has a long way to go towards achieving it, say Joel Skadiang and Jaga Naidu from independent fact-checking platform Black Dot Research.

While the idea of automated fact-checking powered by large language models (LLMs) may sound promising, the reality is that we are still a long way from achieving it. (Photo: iStock/Arkadiusz Wargula)

SINGAPORE: In an era of rapidly advancing technology, large language models (LLMs) have emerged as powerful tools capable of generating human-like text and transforming various industries.

LLMs, a form of artificial intelligence (AI), are language models that have been pre-trained on large textual datasets. Drawing on this training data, they are capable of generating detailed text from simple user prompts, capturing much of the syntax and semantics of human language.
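
To make this concrete, here is a minimal sketch in Python of how a simple user prompt is turned into generated text, using OpenAI's openai client library purely as one illustrative option; the model name and prompt are our own placeholder assumptions rather than anything specified in this commentary.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A simple user prompt; the model responds with fluent, human-like text.
response = client.chat.completions.create(
    model="gpt-4",  # assumption: any chat-capable model would do for this sketch
    messages=[{"role": "user", "content": "Explain in two sentences what a large language model is."}],
)

print(response.choices[0].message.content)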

Although only in their infancy a few years ago, LLMs have progressed by leaps and bounds and are now almost omnipresent.

Recent developments have created much excitement about the apparently limitless potential of LLMs and generative AI, thanks to user-friendly versions such as ChatGPT that have become exceedingly accessible to the general public. 

Some observers have raised the possibility of such technology being deployed to combat the rise of disinformation and misinformation, more commonly known as fake news. This could come in the form of assistance to the fact-checking process, which currently still involves significant human intervention.

However, there are also growing concerns about the ethical challenges that may arise from automating the fact-checking process, and indeed the danger of LLMs being used to generate such fake news themselves.

As such, there remains a delicate balance between harnessing the potential of LLMs to assist with fact-checking while acknowledging the limitations that still impede a fully automated implementation.

THE ERA OF AUTOMATED FACT-CHECKING?

While the idea of automated fact-checking powered by LLMs may sound promising, the reality is that we are still a long way from achieving it. 

At a panel discussion on navigating the information environment at the Riga StratCom Dialogue 2023, hosted by the NATO Strategic Communications Centre of Excellence, proponents of AI-based fact-checking estimated that it would take between three and five years for fully automated fact-checking to go from being a pipe dream to something that could potentially be implemented.

They posited that fact-checking is a complex process that requires human judgment, contextual understanding and domain expertise, something that is simply not achievable under current technological constraints.

While LLMs excel at generating text, they lack the ability to comprehend nuance, context, and the subjective nature of truth. As such, they cannot fully discern between fact and opinion or gauge the reliability of sources based on reputation or intent.

Last month, two lawyers in the United States were sanctioned after submitting a legal brief written by ChatGPT that included bogus court rulings and citations.

"I simply had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions, especially in a manner that appeared authentic," said one of the lawyers, Steven Schwartz, in a court filing.

In addition, LLMs may not always contain up-to-date information. According to OpenAI, the AI research laboratory that launched the widely known ChatGPT, the latest version, GPT-4, has a knowledge cut-off of September 2021. 

Therefore, one would not be able to rely on LLMs to verify current or developing issues, or respond to emerging misinformation or hostile information campaigns effectively. This limits the potential fact-checking value that LLMs could offer, especially in a time of crisis. 

Furthermore, the training data used to develop LLMs can introduce biases, reflecting the imperfections of the sources they learn from. This bias can inadvertently perpetuate certain narratives or reinforce existing societal prejudices.

LLMs could also easily be exploited by bad actors who develop models using only biased information to perpetuate their own narratives and agendas. Automating fact-checking with LLMs would require addressing these biases and ensuring a balanced and unbiased approach, which remains a significant challenge.

INTENTIONAL INFORMATION CAMPAIGNS

LLMs have the ability to generate coherent and persuasive text on a wide range of topics, making them valuable resources for content creation. However, this power also opens the door to potential misuse: the generation of fake news at an alarming rate.

LLMs do not operate on a genuine understanding of language. Rather, they churn out responses based on the information and data they are provided with. As such, the accuracy of the output depends on the accuracy of the input.

As LLMs are fed with vast amounts of information, there is a risk that they may incorporate biased or inaccurate data into their responses. This can inadvertently perpetuate false narratives, amplify conspiracy theories, or spread baseless claims that can have detrimental real-world consequences.

There is a risk that LLMs may incorporate biased or inaccurate data into their responses, and inadvertently perpetuate false narratives, amplify conspiracy theories, or spread baseless claims that can have detrimental real-world consequences. (Photo: iStock/seb_ra)

To illustrate the impact of language models on established fake news-generating tactics, the news-rating group NewsGuard found evidence of Russian state media using AI-generated chatbot screenshots to advance false claims. 

In a daily news segment posted on Mar 28, 2023, RT reported that “American-made AI-powered search engine ChatGPT lists the 2014 Maidan uprising in Ukraine as a coup that Washington had a hand in, among others. That's in stark contrast to the US narrative that it doesn't interfere in other countries”.

Additionally, a report published by NewsGuard last month found an alarming 217 (and counting) AI-generated news and information sites that were operating with little to no human oversight, showcasing the sheer scale at which LLMs can generate false narratives.

Should generative AI be deployed in hostile information campaigns by organisations with significant resources at their disposal, we would certainly be confronted with fake news on an unprecedented scale.

Despite the dangers, there lies a glimmer of hope that LLMs can be employed as tools to assist fact-checkers in their crucial work. The vast amount of information available to LLMs can be harnessed by independent fact-checkers to cross-reference claims, statements, and articles against known sources of truth. 

By leveraging the speed and efficiency of these models, fact-checkers can potentially identify and debunk falsehoods with greater accuracy and at a faster rate, helping to combat the rapid spread of fake news in the digital age.
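
As a rough illustration of how such assistance might work, the sketch below (Python, again using OpenAI's client purely as an example) asks a model to compare a claim against a passage from a trusted source and label it SUPPORTED, CONTRADICTED or UNVERIFIABLE. The prompt wording, labels, model name and helper function are our own assumptions; the model's output is only a starting point for a human fact-checker, not a verdict.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def cross_reference(claim: str, source_passage: str) -> str:
    """Ask the model whether a trusted source passage supports a claim.

    Returns the model's assessment as text; a human fact-checker still
    reviews the claim, the source and the assessment before publishing.
    """
    prompt = (
        "You are assisting a fact-checker.\n"
        f"Claim: {claim}\n"
        f"Source passage: {source_passage}\n"
        "Answer with one of SUPPORTED, CONTRADICTED or UNVERIFIABLE, "
        "followed by a one-sentence explanation citing the passage."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # assumption: any capable chat model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the assessment as consistent as possible
    )
    return response.choices[0].message.content

# Hypothetical example values, for illustration only.
print(cross_reference(
    claim="Unemployment doubled last quarter.",
    source_passage="The unemployment rate edged up from 2.0% to 2.1% in the quarter.",
))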

TRADITIONAL FACT-CHECKERS STILL NECESSARY

As we contemplate the potential role of LLMs in fact-checking, it is crucial to acknowledge the indispensable role of traditional fact-checkers. Human fact-checkers possess critical thinking abilities, subject matter expertise, and the capacity to discern shades of truth. They can delve into complex issues, cross-reference multiple sources, and evaluate the credibility of claims based on contextual understanding.

Traditional fact-checkers also have the advantage of being accountable to journalistic ethics and standards. They adhere to rigorous methodologies, follow strict guidelines, and prioritise accuracy and transparency. 

While LLMs can augment fact-checkers' efforts by assisting in data collection and preliminary analysis, we sincerely believe that the final evaluation and synthesis of information should remain in human hands.

In the age of information overload, combating fake news has become an urgent task. LLMs hold immense potential in assisting with fact-checking, but the risks associated with their misuse and the limitations of automation cannot be ignored. 

While LLMs can aid fact-checkers in processing vast amounts of data and identifying potential falsehoods, they lack the critical thinking abilities, contextual understanding, and domain expertise that human fact-checkers possess.

As we move forward, it is crucial to strike a delicate balance between embracing the potential of AI and acknowledging the value of traditional fact-checkers. 

While we await the inevitable rise of fully automated fact-checking, it remains a reality that traditional fact-checkers still have much to contribute in this arena. Hence, in our current climate, LLMs may be best suited as a tool to empower fact-checking efforts rather than to replace fact-checkers completely. 

Joel Skadiang and Jaga Naidu are respectively manager and researcher at Black Dot Research, an independent fact-checking platform in Singapore.

Source: CNA/fl