Written by: Sara P.
Edited by: Chris
Visual by: Johanna
The 2020 United States presidential election has all but reached its conclusion, with Joe Biden of the Democratic Party projected to be president-elect as of November 7. Although the Trump campaign is pursuing lawsuits in several states in an attempt to overturn the results, these efforts are unlikely to succeed, and the Electoral College will formally cast its votes for the next president on December 14.
This conflict-ridden election, despite drawing to a close, has left behind significant issues that must be addressed. Notably, concerns have been raised about the spread of misinformation on social media platforms. In the days following November 3, 16 of the top 20 public Facebook posts containing the word “election” featured false or misleading information in favor of Trump. On TikTok, a video promoting an erroneous conspiracy theory about ballots spread quickly, receiving over 200,000 views in mere hours. This spread of misinformation poses a threat to democracy: it undermines voters’ access to reliable information, making it difficult for them to reach an informed decision about which candidate to support. With over 70% of the American population on social media and exposed to potentially misleading content, the responsibility borne by technology companies continues to mount.
Technology companies have acknowledged the necessity of preventing election misinformation and have taken various measures to do so on their respective platforms. For example, Facebook and Twitter flagged false or misleading posts, requiring users to read a warning label before viewing the content itself. However, it is imperative that companies not only restrict the visibility of misinformation but also limit the speed at which it spreads through their platforms. Social media allows information to travel rapidly to enormous audiences, and the faster a false claim spreads, the more voters it can influence.
Twitter, in particular, has taken steps to address this problem by prompting users to add their own commentary to a tweet before retweeting it and contributing to its spread. The company stated that it hopes this policy will “encourage everyone to not only consider why they are amplifying a tweet, but also increase the likelihood that people add their own thoughts, reactions, and perspectives to the conversation.” These measures, however, are still not enough to effectively control the spread of misinformation. According to social media expert Jennifer Grygiel, Twitter’s current policies fail to curb the spread of tweets from major figures like Trump, which gain almost instant traction the moment they are posted. Grygiel suggests that alternative measures be taken for prominent users, such as having human moderators evaluate a tweet and decide whether it needs to be flagged before publication.
The rampant spread of misinformation during the presidential election has revealed the need for more drastic action to protect voters. Technology companies must bear this important responsibility to ensure that the events of 2020 do not repeat themselves in four years’ time.
Beckett, Lois, and Julia Carrie Wong. “The Misinformation Media Machine Amplifying Trump’s Election Lies.” The Guardian, Guardian News and Media, 10 Nov. 2020, www.theguardian.com/us-news/2020/nov/10/donald-trump-us-election-misinformation-media.
O’Brien, Matt. “Did Social Media Actually Counter Election Misinformation?” AP News, Associated Press, 5 Nov. 2020, apnews.com/article/social-media-election-misinformation-632a5d93a6cc3ff37311a641d86bf5a1.
“Twitter Tightens Rules on Retweets and Victory Claims.” BBC News, BBC, 9 Oct. 2020, www.bbc.com/news/technology-54485697.