Saturday, June 13, 2020 | 2 a.m.
The internet and social media have propelled the spread of false claims, narratives, stories and information to unprecedented proportions.
According to the Pew Research Center, as of 2019, 72% of Americans had at least one social media account and about two-thirds of Americans at least occasionally got their news from social media platforms. People are increasingly seeing the world through the lens of social media, as usage of platforms like Facebook, YouTube and Twitter has only skyrocketed since the COVID-19 pandemic.
While social media produces enormous benefits by enabling more interconnection among people across the nation and around the world, issues emerge over how people use the technology. Political and social issues that spark debate are almost inevitably fogged by information pollution, including misinformation and disinformation, which splits discussions along hardened political, moral, gender, religious or other lines and derails any hope of respectful, legitimate conversation. The potential harms are intensified because users can hide or fabricate their identities while creating content that can reach millions of people.
Because of its impact on the political environment in the United States, the pervasiveness of misinformation on social media makes it one of the great challenges of the 21st century.
While the term “fake news” is often thrown around as an accusation against information that contradicts a person’s beliefs, misinformation is a serious problem. False content attracts more attention because it typically preys on the ignorance and emotional responses inherent in every individual, as seen in the 2016 election cycle, when false stories generated more engagement than factually reliable news. Misinformation has the greatest chance of becoming widespread when its content exploits feelings of superiority, anger or fear toward another group.
Conservatives and liberals are equally susceptible to believing false information that is consistent with their beliefs, but conservatives are more likely to share articles containing false information. Misinformation also spreads just as quickly as the events it distorts develop.
The other problem is automation: bots, trolls and cyborgs have a heavy online presence and can perpetuate an array of often misleading narratives. Approximately 45-65% of users tweeting about COVID-19 are bots, and they often spread inaccurate information about the pandemic and medical advice. In my research on the spread of misinformation about the Oct. 1 shooting in Las Vegas, I found that heavy Twitter bot involvement centered on tweets created by celebrities. Many public figures’ followers are bots, with 61% of President Donald Trump’s followers found to be bots, spam, propaganda or inactive accounts (as well as 50% of former California Gov. Jerry Brown’s followers).
Two weeks ago, Trump signed an executive order aimed at stripping away the immunity under Section 230 of the Communications Decency Act that allows websites to host and moderate content on their platforms. The order is primarily targeted at social media platforms but applies to all websites. It is believed that the president’s outrage was sparked after Twitter added fact-checking labels to two of his recent tweets. The platform, which hosts the content its users post, recently updated its policies to add labels to tweets containing potentially manufactured and/or harmful information. Twitter removes only tweets categorized as misleading information with a severe propensity for harm. The rest receive a warning, a label or no action at all. The president’s tweets remain accessible; fact-checking is not the same thing as censorship.
Stifling a platform’s ability to deal with misinformation can be consequential, especially during a pandemic in which medical misinformation is pervasive. The order is also authoritarian in nature, as the president wants to “strongly regulate, or close them down.” He seeks to punish social media platforms, which are private entities, for not agreeing with him. Regulating how information pollution is handled in this way leads to declining transparency of information and the confinement of the media.
Transparency and giving users access to fact-checking and account-checking tools should be the priority. Users themselves can apply tools like Botometer to check whether an account is a bot, or programs like Hoaxy and StopFake to check whether content is false.
We have a long way to go, as algorithms are by no means perfect, and dealing with misinformation means walking a fine line between having no effect and infringing on the freedom to access information. Transparency should be encouraged, and social media users need more tools to fact-check and determine whether a topic, tweet or user is manufactured. The government and social media companies need greater coordination, not retaliation motivated by narrow-minded impulses that damages the efforts already made to deal with misinformation.
Mary Blankenship is a student at UNLV, where she is a chemistry and economics double major, and a Brookings public policy minor student in the Honors College.