The ruse of ‘fake news’

Researchers want to use science to combat techniques that can make the true seem false, and the reverse

As Americans increasingly turn to social media as their primary source for news and information, the dangers posed by the phenomenon of “fake news” are growing.

Reports of foreign influence on the 2016 U.S. presidential election are only the most high-profile example of how misinformation infused into social media can affect democratic institutions. Efforts to measure and counter untruths in the digital age, however, are still in their early stages.

In a recent study published in the journal Science, lead authors Matthew Baum, the Marvin Kalb Professor of Global Communications, and David Lazer, a professor at Northeastern University and an associate of the Harvard Institute for Quantitative Social Science, along with more than a dozen co-authors, argue that a multidisciplinary effort is needed to better understand how the internet spreads content and how readers process the news and information they consume.

Such broad-based efforts are necessary, the authors said, “to reduce the spread of fake news and to address the underlying pathologies it has revealed.”

“There needs to be some regular auditing of what the platforms are doing and how much this information is spreading,” Lazer added, “because there is a collective interest in the quality of the information ecosystem that we all live in.”

In addition to Baum and Lazer, the paper was co-authored by Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts, and Jonathan L. Zittrain.

The rise of fake news, the authors said, can be chalked up in part to two concurrent trends in American society. Recent Gallup polls have found a growing mistrust of U.S. media, and studies have also found that nearly half of Americans “often or sometimes” get their news from social media, with Facebook the dominant source.

While those platforms have enabled new information sources and voices to emerge, they have also made it far easier for people to engage only with homogeneous social networks and take in only information that affirms their own views, thereby exacerbating the ideological divides in the country.

“The internet has reduced many [previously enforced] constraints on dissemination of news. This allows outlets that do not embody these norms to compete online with those that do on a relatively more equal footing than was possible offline,” the authors argued in the paper. “This has contributed to the abandonment of traditional news sources that had long enjoyed high levels of public trust and credibility.”

In some cases, Baum and Lazer said, social networks have unwittingly become complicit in amplifying fake news.

As an example, they point to Twitter’s trending mechanism. When the platform notices a surge in tweets about a particular topic — such as a celebrity’s birthday or an approaching nor’easter — Twitter may list the topic as trending. But studies have repeatedly shown that the process can be manipulated. In one case, journalists found that for as little as $200, a company in Saudi Arabia would deploy an army of bots to make any hashtag trend for a few hours.
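To see why such a mechanism is gameable, consider a minimal sketch of a surge-based trending heuristic. The class name, thresholds, and windowing below are illustrative assumptions, not Twitter’s actual algorithm; the point is that any detector keyed to a spike in volume over a quiet baseline can be triggered by a purchased burst of bot tweets.

```python
from collections import Counter, deque

class NaiveTrendDetector:
    """Flags a hashtag as 'trending' when its volume in the current time
    window greatly exceeds its recent baseline. Illustrative only."""

    def __init__(self, window_count=24, surge_ratio=5.0, min_volume=100):
        self.history = deque(maxlen=window_count)  # per-window hashtag counts
        self.surge_ratio = surge_ratio             # burst-vs-baseline multiplier
        self.min_volume = min_volume               # ignore tiny absolute counts

    def update(self, hashtags_this_window):
        """Ingest one window's worth of hashtags; return the tags that surged."""
        counts = Counter(hashtags_this_window)
        trending = []
        for tag, volume in counts.items():
            # Average volume for this tag over the recent baseline windows.
            baseline = sum(w.get(tag, 0) for w in self.history) / max(len(self.history), 1)
            # A purchased burst of bot tweets satisfies exactly this test:
            # high current volume against a near-zero baseline.
            if volume >= self.min_volume and volume >= self.surge_ratio * max(baseline, 1.0):
                trending.append(tag)
        self.history.append(counts)
        return trending

# A bot campaign pushing one hashtag 150 times in a quiet hour "trends" it.
detector = NaiveTrendDetector(min_volume=100)
detector.update(["#weather"] * 40)                              # quiet baseline window
print(detector.update(["#paidtag"] * 150 + ["#weather"] * 45))  # ['#paidtag']
```

Real platforms layer many more signals on top of raw volume, but the sketch shows why bought traffic can look identical to organic enthusiasm to a purely volume-based detector.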

To ensure that false content isn’t amplified across platforms, the study called on companies to do a better job of policing the software bots that control fake accounts, and to identify and remove false content. Studies have estimated that anywhere from 9 percent to 15 percent of active Twitter accounts are bots, and that there may be as many as 60 million bots on Facebook.

“Generally, the platforms should avoid accidentally amplifying low-quality content when detecting what is trending,” Lazer said. “That seems like a no-brainer.”

Major companies like Google, Facebook, and Twitter have taken steps to counteract fake news: Twitter has moved to block accounts linked to Russian misinformation, and Facebook has announced plans to shift its algorithm to account for “quality” in its content curation. Even so, the authors said, the platforms have not provided enough detail about those steps for the research community to evaluate them properly.

The authors outline two primary strategies to stem the flow and influence of fake news: empowering individuals to better evaluate the credibility of news and news sources they encounter, and making structural changes to prevent exposure to fake news in the first place.

Neither goal will be easy to achieve, Baum and Lazer acknowledge, but both could, over time, help restore the credibility of news and information sources and citizens’ trust in them.

“Our call here is to promote interdisciplinary research with the objective of reducing the spread of fake news and of addressing the underlying pathologies it has revealed,” the authors wrote. “More broadly, we must answer a fundamental question: How can we create a news ecosystem and culture that values and promotes truth?”

This research was supported with funding from the Shorenstein Center at the Harvard Kennedy School and the NULab for Texts, Maps, and Networks at Northeastern University.