Misinformation plays a major role in this year’s election. With online platforms such as TikTok, Instagram, Twitter, and Facebook hosting much of it, are high school and college-aged voters just as easily deceived by misinformation as their parents and grandparents? How harmful are misinformation’s effects on U.S. adults, and how are social media companies trying to prevent its spread?
An analysis of the two groups: young voters and older adult voters
In a yearlong study commissioned by the Knight Foundation and published by the nonprofit research institute Project Information Literacy, 93 percent of the 6,000 students surveyed across 11 U.S. colleges said they got their news from weekly discussions with their peers, whether online or face-to-face. Nearly as many (89 percent) also reported getting weekly news from social media, and their daily consumption of news from social media far outpaced peer discussions.
How impacted are college-aged and high-school-aged voters?
Still, in contrast to their parents and grandparents, college students’ technological fluency makes it much easier for them to go online and fact-check nearly every piece of information they receive.
“The rather contentious and poisonous public discourse around ‘fake news’ has substantially put young news consumers on guard about almost everything they see,” said John Wihbey, a Northeastern professor and one of the study’s key researchers.
But verifying such information does not always lead to a conclusive outcome. In fact, 45 percent of students reported that “it’s difficult to tell real news from fake news,” and 36 percent said that “fake news has made [them] distrust the credibility of any news.”
“That’s a double-edged sword because, on the one side, you’re arming young news consumers to be aware of the source of information,” said Wihbey. “On the other side, we don’t want to raise a generation not to believe in the power of well-reported, well-researched, well-sourced news.”
One way for those outside the younger generation to see how Generation Z handles misinformation is to visit platforms such as TikTok or Reddit, where fact-checking each video is virtually commonplace. Viewers who want to know whether a video’s claims are false can simply visit the comments, where others have often already done the fact-checking for them.
“Fake news on TikTok is annoying and fun at the same time…since I can go to the comments and see it [fake news] get destroyed,” said senior Kevin Rodriguez. “[But] when I fact check something [if there are no fact checkers], it is to see the real data and similar opinions shared with me on the topic.”
Misinformation’s impact on U.S. adults in contrast to younger voters
Among U.S. adults, only 18 percent primarily receive their news through social media, compared with the 89 percent of college students reported in the earlier study. Yet data from the Pew Research Center suggests that adults who do get their news from such outlets are usually less informed about the world around them and less likely to be concerned about whether the stories they read are false. Moreover, 44 percent of those social media users said the conspiracy theory that the COVID-19 pandemic was planned was at least “probably true.”
Furthermore, a more recent study by the center shows that 64 percent of Americans believe social media has a mostly negative effect on the way things are going in the country today. Among that 64 percent, roughly a third cite misinformation as the main cause of social media’s negative effects.
That so many social media users believe such an outlandish theory, and that so many Americans cite misinformation as the leading cause of social media’s negative effects, raises the question of whether companies such as Twitter or Facebook should block misinformation to combat its damaging consequences. That debate is ongoing.
Social media begins its fight against misinformation
Last Wednesday, the New York Post published a report making scandalous allegations about Hunter Biden, son of the Democratic presidential nominee, based on unauthenticated “smoking gun” emails. The unverified report alleges that Hunter Biden attempted to introduce a top executive of the Ukrainian company he worked for to his father, then-Vice President Joe Biden.
Because the report had not yet been authenticated by trustworthy third-party sources, Twitter and Facebook decided to limit the sharing of the article.
“While I will intentionally not link to the New York Post, I want to be clear that this story is eligible to be fact-checked by Facebook’s third-party fact-checking partners,” tweeted Andy Stone, a spokesman for Facebook. “In the meantime, we are reducing its distribution on our platform.”
Facebook’s decision to limit the spread of the article marked a significant moment in the company’s history, especially after it had rejected the notion of being an arbiter of truth and instead proclaimed to stand for freedom of speech.
Similarly, Twitter issued a statement later that day about its own limitation of the article’s spread. Rather than speaking through a spokesperson, the company tweeted from the handle @TwitterSafety to explain why it had prevented sharing of the article on the platform: its “Hacked Material Policy,” which states that Twitter does not “permit the use of our services to directly distribute content obtained through hacking that contains private information, may put people in physical harm or danger, or contains trade secrets.” Shortly thereafter, Twitter also locked the publication’s account after it published a series of tweets that broke Twitter’s rules, leaving the New York Post locked out until those tweets were deleted, which the publication still has not done as of this writing.
On Thursday, however, in response to Republican backlash and a Senate Judiciary Committee subpoena following its blocking of the New York Post article, Twitter reversed its “Hacked Material” policy despite having locked the publication out. Vijaya Gadde, Twitter’s legal and policy chief, cited the “significant feedback” the company had received regarding its handling of the recent New York Post articles.
Seniors attending Klein Cain echoed the public’s sentiment about how both social media companies handled the misinformation event.
“So I’d say that although these two social media giants are definitely trying to atone for their past mistakes, especially with the 2016 election, a more clear explanation of their process would be better for all,” said senior Ryan Joseph. “With the recent Twitter Scandal about the Hunter Biden email leak and the blocking of the story from the New York Post, their intentions were for the best, but completely blocking it off, and not adding a tag to notify its unreliable nature, ended up just spreading the news further.”
Senior Drake Wells also had this to say.
“Twitter and Facebook have taken dramatic yet somewhat necessary action against media misinformation,” said Wells. “Although much of the news cycle coverage appears accurate, it is critical that these large-scale media platforms take action against inaccurate data or statistics if later found out to be false or skewed.”
Ultimately, this defining moment will serve as a foundation for how the two companies, as well as future major social media platforms, approach misinformation, not only in future elections but in other events in the rapidly changing Information Age.
“So more simply, be more careful,” Joseph advised. “They [Twitter and Facebook] are taking the right steps, but take time before completely blocking the link off, and at least notify its questionable reliability.”