Las Vegas Sun

March 28, 2024

YouTube unleashed a conspiracy theory boom. Can it be contained?


Corey Olsen / The New York Times

A screen shot from the YouTube video “Conspiracy Theories with Shane Dawson,” Feb. 12, 2019. Dawson’s conspiracy series arrived at a particularly awkward moment for YouTube, which has been reckoning with the vast troves of misinformation and extreme content on its platform.

In January, YouTube star Shane Dawson uploaded his new project: a 104-minute documentary, “Conspiracy Theories With Shane Dawson.”

In the video, set to a spooky instrumental soundtrack, Dawson unspooled a series of far-fetched hypotheses. Among them: that iPhones secretly record their owners’ every utterance; that popular children’s TV shows contain subliminal messages urging children to kill themselves; that the recent string of deadly wildfires in California was set on purpose, either by homeowners looking to collect insurance money or by the military using a type of high-powered laser called a “directed energy weapon.”

None of this was fact-based, of course, and some of the theories seemed more like jokey urban legends than serious accusations. Still, his fans ate it up. The video has gotten more than 30 million views, a hit even by Dawson’s standards. A follow-up has drawn more than 20 million views and started a public feud with Chuck E. Cheese’s, the restaurant chain, which was forced to deny claims that it recycles customers’ uneaten pizza slices into new pizzas.

Dawson’s conspiracy series arrived at a particularly awkward moment for YouTube, which has been reckoning with the vast troves of misinformation and extreme content on its platform.

In late January, the company announced it was changing its recommendations algorithm to reduce the spread of “borderline content and content that could misinform users in harmful ways.” It cited, as examples, “videos promoting a phony miracle cure for a serious illness, claiming the Earth is flat or making blatantly false claims about historic events like 9/11.”

Dawson, whose real name is Shane Lee Yaw, has more than 20 million subscribers and a devoted teenage fan base. He has built his lucrative career by, among other talents, understanding what kinds of content play well on YouTube.

For years, that meant conspiracy theories — lots and lots of them, all delivered with the same wide-eyed credulity. In a 2016 video, he wondered aloud if the first Apollo moon landing was staged by NASA. (“It’s a theory,” he said, “but, I mean, all the evidence is not looking good.”) In 2017, he discussed the false theory that the attacks of Sept. 11, 2001, were a hoax. (“I know it’s crazy,” he said, “but just look at some of these videos.”) And last year, he devoted a segment of a video to flat-Earth theory, which he concluded “kind of makes sense.”

In fairness, Dawson is a far cry from partisan cranks like Alex Jones, the Infowars founder, who was barred in 2018 by YouTube and other social networks for hate speech. Most of Dawson’s videos have nothing to do with conspiracies, and many are harmless entertainment.

But the popularity of Dawson’s conspiracy theories illuminates the challenge YouTube faces in cleaning up misinformation. On Facebook, Twitter and other social platforms, the biggest influencers largely got famous somewhere else (politics, TV, sports) and have other vectors of accountability. But YouTube’s stars are primarily homegrown, and many feel — not entirely unreasonably — that after years of encouraging them to build their audiences with viral stunts and baseless rumor-mongering, the platform is now changing the rules on them.

Innocent or not, Dawson’s videos contain precisely the type of viral misinformation that YouTube now says it wants to limit. And its effort raises an uncomfortable question: What if stemming the tide of misinformation on YouTube means punishing some of the platform’s biggest stars?

A representative for Dawson did not respond to a request for comment. A YouTube spokeswoman, Andrea Faville, said: “We recently announced that we’ve started reducing recommendations of borderline content or videos that could misinform users in harmful ways. This is a gradual change and will get more and more accurate over time.”

Part of the problem for platforms like YouTube and Facebook — which has also pledged to clean up misinformation that could lead to real-world harm — is that the definition of “harmful” misinformation is circular. There is no inherent reason that a video questioning the official 9/11 narrative is more dangerous than a video asserting the existence of UFOs or Bigfoot. A conspiracy theory is harmful if it results in harm — at which point it’s often too late for platforms to act.

Take, for example, Jones’ assertion that the 2012 mass shooting at Sandy Hook Elementary School in Newtown, Connecticut, was a hoax perpetrated by gun control advocates. That theory, first dismissed as outrageous and loony, took on new gravity after Jones’ supporters began harassing the grieving parents of victims.

Or take Pizzagate, a right-wing conspiracy theory that alleged that Hillary Clinton and other Democrats were secretly running a child-sex ring. The theory, which was spread in a variety of videos on YouTube and other platforms, might have remained an internet oddity. But it became a menace when a believer showed up at a pizza restaurant in Washington, D.C., with an assault rifle, vowing to save the children he believed were locked in the basement.

To its credit, YouTube has taken some minor steps to curb misinformation. In 2018, it began appending Wikipedia blurbs to videos espousing certain conspiracy theories, and changed the way it handles search results for breaking news stories so that reliable sources are given priority over opportunistic partisans. And in the summer of 2018, it was among the many social networks to bar Jones and Infowars.

In a multipart Twitter thread this month, Guillaume Chaslot, a former YouTube software engineer, called the company’s decision to change its recommendation algorithm a “historic victory.”

Chaslot noted that this algorithm, which was once trained to maximize the amount of time users spend on the site, often targeted vulnerable users by steering them toward conspiracy theory videos it predicted they would watch.
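To make the dynamic Chaslot describes concrete, the sketch below is a minimal, hypothetical model of a watch-time-maximizing ranker. It is not YouTube's actual system; every name in it is invented for illustration.

```python
# Illustrative sketch only -- not YouTube's actual system. It shows the general
# shape of a recommender trained purely to maximize predicted watch time, with
# no penalty for misleading content. All names and numbers are hypothetical.

from dataclasses import dataclass


@dataclass
class Candidate:
    video_id: str
    predicted_watch_minutes: float  # output of a hypothetical engagement model


def rank_for_user(candidates: list[Candidate], top_n: int = 10) -> list[str]:
    """Return the top-N video IDs, ordered solely by predicted watch time."""
    ranked = sorted(candidates, key=lambda c: c.predicted_watch_minutes, reverse=True)
    return [c.video_id for c in ranked[:top_n]]


# A user who lingers on one conspiracy video makes similar videos score highly,
# so the top of the list fills with more of the same -- the "rabbit hole" effect.
example = [
    Candidate("news_clip", 2.1),
    Candidate("conspiracy_doc_1", 41.0),
    Candidate("conspiracy_doc_2", 38.5),
]
print(rank_for_user(example, top_n=2))  # ['conspiracy_doc_1', 'conspiracy_doc_2']
```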

The change “will save thousands from falling into such rabbit holes,” he wrote.

In an interview this past week, Chaslot was more circumspect, saying YouTube’s move may have amounted to a “PR stunt.” Because the change will affect only which videos YouTube recommends — conspiracy theories will still show up in search results, and they will still be freely available to people who subscribe to the channels of popular conspiracy theorists — he called it a positive but insufficient step.

“It will address only a tiny fraction of conspiracy theories,” he said.

In 2018, Chaslot built a website, AlgoTransparency.org, to give outsiders a glimpse of YouTube’s recommendation algorithms at work. The site draws from a list of more than 1,000 popular YouTube channels, and calculates which videos are most often recommended to people who watch those channels’ videos.

On many days, conspiracy theories and viral hoaxes top the list. One recent day, the most frequently recommended video was “This Man Saw Something at Area 51 That Left Him Totally Speechless!,” which was recommended to viewers of 138 channels. The second most recommended video, which linked a series of recent natural disasters to apocalyptic prophecies from the Book of Revelation, was recommended to viewers of 126 of those top channels.
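The tally the site is described as computing can be sketched in a few lines. The snippet below is an assumption about the general approach, with made-up channel and video names; it is not AlgoTransparency's actual code.

```python
# A minimal sketch of the kind of tally described above: for each monitored
# channel, collect the videos recommended alongside its uploads, then count
# how many distinct channels surface each recommendation. Data is made up.

from collections import Counter


def most_recommended(recs_by_channel: dict[str, set[str]]) -> list[tuple[str, int]]:
    """Return (video, channel_count) pairs, most widely recommended first."""
    counts = Counter()
    for channel, recommended_videos in recs_by_channel.items():
        for video in recommended_videos:
            counts[video] += 1  # one vote per channel, not per impression
    return counts.most_common()


sample = {
    "channel_a": {"area_51_video", "cooking_tips"},
    "channel_b": {"area_51_video", "book_of_revelation_video"},
    "channel_c": {"area_51_video"},
}
print(most_recommended(sample)[0])  # ('area_51_video', 3) -- surfaced by all three channels
```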

Chaslot suggested one possible solution to YouTube’s misinformation epidemic: new regulation.

Lawmakers, he said, could amend Section 230 of the Communications Decency Act — the law that prevents platforms like YouTube, Facebook and Twitter from being held legally liable for content posted by their users. The law now shields internet platforms from liability both for the user-generated content they host and for the algorithmic recommendations they make. A revised law could continue to cover the content itself while leaving platforms on the hook for what their algorithms choose to promote.

“Right now, they just don’t have incentive to do the right thing,” Chaslot said. “But if you pass legislation that says that after recommending something 1,000 times, the platform is liable for this content, I guarantee the problem will be solved very fast.”
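As a rough illustration of the rule Chaslot sketches, a hypothetical platform could track how many times it has recommended each video and treat anything pushed past the threshold as its own responsibility. The snippet below is a toy model of that idea, not a description of any real system or legal standard.

```python
# Toy illustration of the proposal quoted above, not any real legal mechanism:
# once the platform itself has recommended a piece of content more than some
# threshold number of times (1,000 in Chaslot's example), responsibility for
# that content would shift to the platform. Names and structure are hypothetical.

LIABILITY_THRESHOLD = 1_000

recommendation_counts: dict[str, int] = {}


def record_recommendation(video_id: str) -> bool:
    """Increment the platform's recommendation count for a video.

    Returns True once the video has crossed the liability threshold -- the
    point at which the proposed rule would hold the platform responsible
    for what it has been amplifying.
    """
    recommendation_counts[video_id] = recommendation_counts.get(video_id, 0) + 1
    return recommendation_counts[video_id] > LIABILITY_THRESHOLD


# Example: nothing is flagged until the 1,001st recommendation of the same video.
for _ in range(1_001):
    crossed = record_recommendation("conspiracy_doc_1")
print(crossed)  # True
```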

But even new laws governing algorithmic recommendations wouldn’t reverse the influence of YouTube celebrities like Dawson. After all, many of his millions of views come from his fans, who subscribe to his channel and seek out his videos proactively.

YouTube’s first challenge will be defining which of these videos constitute “harmful” misinformation, and which are innocent entertainment meant for an audience that is largely in on the joke.

But there is a thornier problem here. Many young people have absorbed a YouTube-centric worldview, one that rejects mainstream information sources in favor of platform-native creators peddling “secret histories” and faux-authoritative explanations.

When those creators propagate hoaxes and conspiracy theories as part of a financially motivated growth strategy, some of that misinformation takes hold with a portion of their audience. And sometimes — in ways no algorithm could predict — it leads viewers to a much darker place.

It’s possible that YouTube can still beat back the flood of conspiracy theories coursing through its servers. But doing it will require acknowledging how deep these problems run and realizing that any successful effort may look less like a simple algorithm tweak, and more like deprogramming a generation.