
Commentary

Commentary: Elon Musk’s Community Notes feature on X is working

Crowdsourced fact-checking is better at curtailing misinformation than content moderation, says FD Flam for Bloomberg Opinion.


Elon Musk asked EU Commissioner Thierry Breton to list alleged violations on X, formerly Twitter. (File Photo: AFP/Alain Jocard)

PROVIDENCE, Rhode Island: After Elon Musk bought Twitter (now X) in late 2022, the social media company got rid of many of its behind-the-scenes moderators, slashed the system whereby users could flag tweets for review, and ramped up a different system to fight misinformation - a form of crowdsourcing called Community Notes.

A wave of outrage followed these changes. But the Community Notes feature has the benefit of transparency, and a new academic review suggests it is working - at least on scientific and medical issues.

Researchers who study social media still have serious concerns over rampant hate speech and incitement to violence - areas where people may react instantly and heatedly in ways that Community Notes is not suited to handle. And in 2023 it became prohibitively expensive for researchers to get the data they needed to study these persistent problems.

But the lead author of this new study, behavioural scientist John Ayers of the University of California San Diego, said the data on Community Notes were easy to obtain. And for hashing out factual issues in areas such as science and health, social scientists have recommended a crowdsourcing approach, citing studies demonstrating the power of collective intelligence.

Several studies have pitted crowdsourcing against professional fact checkers, and found crowdsourcing worked just as well when assessing the accuracy of news stories.

ALMOST ALWAYS ACCURATE WITH HIGH-QUALITY SOURCES

Now, Ayers and other researchers looked specifically into the accuracy of X’s Community Notes, using the contentious issue of COVID-19 vaccines as a test case. The results, published recently in the Journal of the American Medical Association, showed the notes were almost always accurate and usually cited high-quality sources.

Community Notes relies on volunteers to flag misleading posts and then add corrective commentary, complete with links to scientific papers or media sources. Other users can vote on the value of the notes (a mechanism long used on Reddit).
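
To make that mechanism concrete, here is a minimal sketch in Python of how such voting might gate a note’s visibility. It is an illustration only, not X’s actual algorithm - the production system, which X has open-sourced, uses a more elaborate “bridging” model that favours notes rated helpful by users who normally disagree - and the Note class, the thresholds and the URL below are all hypothetical.

from dataclasses import dataclass, field

@dataclass
class Note:
    text: str
    source_url: str
    ratings: list[bool] = field(default_factory=list)  # True = rated "helpful"

    def rate(self, helpful: bool) -> None:
        self.ratings.append(helpful)

    def is_shown(self, min_ratings: int = 5, threshold: float = 0.8) -> bool:
        # A note surfaces only once enough raters have judged it helpful.
        if len(self.ratings) < min_ratings:
            return False
        return sum(self.ratings) / len(self.ratings) >= threshold

note = Note("This claim contradicts the trial data it cites.",
            "https://example.org/study")
for vote in (True, True, True, True, False, True):
    note.rate(vote)
print(note.is_shown())  # True: five of six raters found the note helpful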

The old system relied on fact-checkers whose identities and scientific credentials were unknown. They could take down posts they deemed to be misinformation, ban users, or use the more underhanded technique of “shadow bans”, by which users’ posts were hidden without their knowledge.

Content moderators employed by social media companies have also been attacked for moving too slowly and failing to take down hateful or violent content. It may be impossible for any social media company to keep up, which is why it’s important to explore other approaches.

The new system isn’t perfect, but it does appear to be pretty accurate. In the JAMA study, the researchers looked at a sample of 205 Community Notes about COVID-19 vaccines. They agreed the user-generated information was accurate 96 per cent of the time, and that the sources cited were of high quality 87 per cent of the time.

While only a small fraction of misleading posts were flagged, those that did get notes attached were among the most viral, Ayers said.

COLLECTIVE INTELLIGENCE IS OFTEN UNDERESTIMATED

Psychologist Sacha Altay, who was not involved in the new research, said people tend to underestimate the power of collective intelligence, which has proven surprisingly good for forecasting and assessing information - as long as enough people participate.

The public perception of social media misinformation is often distorted by political biases, outrage and self-delusion. Last year a group of researchers from Oxford University prompted some much-needed reflection with a study titled “People Believe Misinformation Is a Threat Because They Assume Others Are Gullible”.

In other words, the people most outraged about fake news aren’t worried they’ll be fooled; they’re worried others will be. But we tend to overestimate our own levels of discernment.

During the pandemic, fact checkers and moderators labelled lots of subjective statements as misinformation, especially those judging various activities to be “safe”.

But there’s no scientific definition of safe - which is why people could talk past each other for months about whether it was safe to let kids back into school or gather without masks. Much of what was labelled as misinformation was just minority opinion.

Twitter’s old censorship system was based on the assumption that people skip vaccines or otherwise make bad choices because they are exposed to misinformation.

But another possibility is that lack of trust is the real problem - people lose trust in health authorities or can’t find the information they want, and that causes them to seek out fringe sources. If that’s the case, censorship could create more distrust by stifling open discussion about important topics.

Of course, people don’t usually portray themselves as “pro-censorship”, even if that’s what’s happening. Conservatives are more likely to accept censorship of material they deem indecent, while liberals are more likely to tolerate censorship of information they deem harmful.

But both sides should approve of any system that discourages blind assumptions and snap judgments and encourages open discussion, reflection and the deployment of collective brainpower. Musk is a divisive figure and there’s plenty to dislike about the recent changes in X, but at least Community Notes represents an upgrade.

Source: Bloomberg/el