Las Vegas Sun

April 24, 2024

OPINION:

Birdwatch makes a mockery of content moderation

Now that Elon Musk is in charge of Twitter, what will become of the platform’s content moderation?

Musk has laid off a significant part of the content moderation staff, leaving users and advertisers wondering whether the platform will turn into a bottomless pit of disinformation and bigotry.

If Musk follows through on his plan to rely on Twitter's crowd-sourced content moderation feature, Birdwatch, which launched in October, he may break the platform not long after taking the reins. Birdwatch is going as poorly as you might expect from a program that outsources content moderation to the masses.

Birdwatch allows average users, dubbed Birdwatch contributors, to add notes below tweets that provide helpful context from "different points of view." Twitter claims that these notes will be attached to tweets only if a critical mass of contributors rates them as helpful.

And therein lies the inherent flaw: What’s factual and what’s popular aren’t necessarily the same thing.

Public Citizen, a democracy and consumer advocacy organization and my employer, witnessed this breakdown firsthand on our organization’s Twitter account a few weeks ago.

Almost as soon as Birdwatch went live, a note appeared under a Public Citizen tweet claiming that we had shared a doctored image. The image, in fact, was a 100% genuine screenshot showing that Elon Musk had blocked our account.

In other words, after we truthfully and justifiably criticized a public figure with a huge fanbase on the platform, Twitter suddenly labeled Public Citizen a distributor of fake news.

Never mind Public Citizen’s decades-long reputation for combating disinformation. Never mind that the screenshot was real. Never mind that our social media team immediately provided clear documentation of its authenticity.

The note that initially appeared under our tweet, falsely claiming the image was doctored, eventually came down. But when viewed through Birdwatch, our tweet is still followed by a series of "context" notes debating both sides of the issue and asking users to rate their helpfulness, as if popular opinion has any bearing on whether the screenshot is real or not.

Of course it’s real. We proved it. There’s nothing left to discuss or debate. Yet Twitter’s Birdwatch is still treating the matter as an open controversy.

We’re not the only ones encountering this problem. Birdwatch notes parroting right-wing talking points are already starting to appear on tweets from the White House and President Joe Biden.

A recent analysis of Birdwatch comments revealed that most of these notes are being placed under tweets about high-stakes issues. Top keywords within Birdwatch notes included COVID, vaccine, election, Trump and Biden.

Twitter has given users the authority to “fact-check” highly consequential information about public health and our democracy through a system that resembles a popularity contest. Content moderation should be based on informed research. Crowd-sourced opinion is simply not the way to determine the truth.

Twitter has already confirmed that it won’t do any quality control on Birdwatch notes, and it shows. Birdwatch’s own data indicates that false and misleading tweets alleging widespread voter fraud in the 2020 election were marked “not misleading.”

All it takes is a small, dedicated group of fanatics — or for that matter, clueless morons — to cast doubt on reputable sources of information and muddy the waters enough to leave many people unsure of what to believe.

The bottom line is that Birdwatch isn’t content moderation or fact-checking. It’s nothing but birdseed for conspiracy theorists, liars and frauds.

Given his vocal opposition to content moderation on social media, Musk's acquisition of Twitter could lead to the end of Birdwatch. But it would be wrong for Birdwatch's predictable failure to be used to impugn more thoughtful, accurate and compassionate ways of moderating content.

For some users, it’s literally a matter of life and death. When harassment runs rampant online, vulnerable people and marginalized communities face serious risks of real-world harm.

One study found that Twitter’s failure to remove abusive content directed at women led to substantial increases in anxiety and depression. Shockingly, 41% of women polled said that “on at least one occasion, these online experiences made them feel that their physical safety was threatened.”

The gold standard of content moderation requires a nuanced approach that recognizes what’s at stake. Innovative technologies like hash-matching or AI moderators can contribute to the effort, but vetted fact-checkers are invaluable to maintaining a platform that is fortified against bad actors.

And before Twitter can expect random users to enforce its content moderation policies, it needs to enforce them itself. Its rules against nonconsensual sexual content and threats of violence often go unenforced. Only real moderators adequately responding to flags placed by AI or other users can fix this.

Responsible content moderation may not be sexy or edgy like sending rockets to Mars, but it’s an absolute must to prevent Twitter or any other platform from becoming a hellscape of unchecked hate speech and disinformation.

Musk has assumed the responsibility of managing one of the world’s largest and most influential social media platforms. Whether he agrees with it or not, sensible content moderation is indispensable to that job.

Cheyenne Hunt-Majer is the big tech accountability advocate for Public Citizen. She wrote this for InsideSources.com.