Understanding the science of fake news so we can fix the problem

March 14, 2018 | By Rande Price, Research VP – DCN

Fake news has a long history in media. Propaganda has been around as long as there have been sides of a story to spin. Even though journalists strive for objectivity and credibility, propaganda still makes its way into media coverage. Now, with the Internet, it is easy to access and disseminate false information quickly. Add to that the fact that social platforms amplify homogeneous opinions, whether they are true or not. And let's face it: people do little to correct falsehoods in their closed social echo chambers.

Today, fake news is a global concern, and it is helpful to explore the mechanics that put it in place. New research, “The science of fake news,” from Professor Matthew Baum of the Harvard Kennedy School and Professor David Lazer of Northeastern University offers insight into how the Internet spreads false content and how citizens process the news and information they consume.

Definition

The report defines fake news in terms of process: “information that mimics news media content in form but not in organizational process or intent.” Fake news intersects both misinformation (false or misleading information) and disinformation (false information that is purposely spread to deceive people). Importantly, the definition speaks to the absence of the editorial process, especially the norms of accuracy and credibility that differentiate fake news from credible journalism.

Estimates

Estimates of how prevalent fake news is on social media today are inconclusive. One circulating estimate holds that the average American encountered approximately one to three fake news stories per month prior to the 2016 elections. Facebook claims that less than one percent of the civic content shared on its platform is fake. Some estimate that 9% to 15% of active Twitter accounts are bots and that 60 million bot accounts on Facebook share fake news. These wide-ranging estimates highlight the lack of collaboration between social platforms and academics, which is required to clearly define the scope of the problem.

Interventions

Baum and Lazer identify two categories of potential interventions to prevent fake news:

  1. empowering individuals to evaluate the fake news they encounter, and
  2. creating structural changes on social platforms aimed at preventing users’ exposure to fake news in the first place.

Empowering users to question false news is not so easy. Baum and Lazer cite outside academic research showing that people do not question the credibility of information unless it violates their preconceptions. Since people on social media tend to align with others in their community who hold similar beliefs, they question little. In fact, one concern is that repeating false information, even in a fact-checking context, may increase an individual’s likelihood of accepting it as true and sharing it.

Experimental and survey research also confirms that the perception of truth increases when misinformation is repeated. Even when misinformation is paired with an effective retraction, it is still often viewed as true. Teaching individuals to be critical thinkers, especially when using social media, may help prepare students to ask questions about the credibility of content and sources.

In terms of structural interventions, the platforms can implement policies and technologies to deter the spread of fake news. These include emphasizing quality sources in algorithmic rankings of content, limiting the personalization of political content, and preventing content driven by bot activity from trending. The platforms claim to have integrated some of these policies. Yet they share little to no evidence with policy-makers or the general public about the underlying research or its effectiveness.

Call for action

Direct government involvement is sensitive and carries risk. Any process by which the government or the platforms prevent users from seeing content raises concerns about censorship. While the 1996 Communications Decency Act shields social media platforms from liability for false content written and shared on their platforms, their role in, and accountability for, shaping what users experience should be questioned. Baum and Lazer call for a redesign of our information ecosystem that promotes a transparent and truthful news media experience. Publishers, social platforms, and academics must collaborate to conduct a detailed and transparent audit of how the major platforms filter information.