
Social media must step up its game against disinformation

May 22, 2019 | By Rande Price, Research VP – DCN

Disinformation comes in all shapes and sizes. Whether it takes the form of a text-based article, a meme, a video, or a photo, it is designed to go viral across message boards, websites, and social platforms like Facebook, Twitter, and YouTube. And it's polluting the internet. In fact, an Oxford Internet Institute study found that in the 30 days leading up to the 2018 U.S. midterm elections, a full 25% of Facebook and Twitter shares contained misleading and deceptive information claiming to be real news. Addressing concerns about domestic disinformation, Paul Barrett of the NYU Stern Center for Business and Human Rights identifies the steps social platforms need to take in a new report, Tackling Domestic Disinformation: What the Social Media Companies Need to Do.

Disinformation epidemic

In the report, Barrett cites an MIT study that analyzed every English-language news story distributed on Twitter over an 11-year period and verified each story as either true or false. The study found that, on average, false news is 70% more likely to be retweeted than true news. Where are all these falsehoods coming from? A Knight Foundation study found that 65% of fake and conspiracy news links on Twitter could be traced back to just 10 large disinformation websites (e.g., Infowars).

First Amendment

Domestic disinformation is a constant in today’s digital experience. While many call for its removal, others argue that it is difficult to differentiate from ordinary political communication protected by the First Amendment. Importantly, Barrett does not suggest that the government determine what content should be removed from social media. Rather, he believes the platforms themselves can make better choices about whether content is accurate and about how they promote and rank it.

Practices in place

Social platforms use machine learning to improve their ability to identify false stories, photographs, and videos. In addition, while Facebook previously flagged content to warn readers that it was potentially false, it now offers “Related Articles,” a feature that provides factually reliable context around misleading stories. YouTube offers a similar program: when a user searches for topics that YouTube identifies as “subject to misinformation,” it prefaces video results with a link to information from reliable third parties. Even with these efforts, disinformation remains available on these platforms for anyone to view and share. The platforms’ current practices are simply not enough to reduce disinformation.
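
To make the machine-learning piece concrete: at its simplest, a text classifier scores new content by how closely it resembles previously labeled examples. The sketch below is a minimal, hypothetical illustration in Python using scikit-learn; the tiny dataset, the labels, and the headlines are invented for demonstration and do not represent any platform’s actual system, which combines far more signals at much larger scale.

  # A toy headline classifier: TF-IDF word weights plus logistic regression.
  # All data below is invented for illustration only.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  # Hypothetical training examples: 1 = flagged as likely false, 0 = credible.
  headlines = [
      "Scientists confirm miracle cure hidden by the government",
      "Celebrity secretly replaced by body double, insiders say",
      "City council approves new budget for road repairs",
      "Local hospital opens expanded pediatric wing",
  ]
  labels = [1, 1, 0, 0]

  # TF-IDF turns each headline into a word-weight vector; logistic regression
  # then learns which words correlate with flagged content.
  model = make_pipeline(TfidfVectorizer(), LogisticRegression())
  model.fit(headlines, labels)

  # Score a new headline: the output is the model's estimated probability
  # that it belongs to the "flagged as likely false" class.
  score = model.predict_proba(
      ["Shocking secret the media doesn't want you to know"]
  )[0][1]
  print(f"Probability flagged as false: {score:.2f}")

In practice, a score like this would be only one input among many (account behavior, sharing patterns, fact-checker reports) before content is demoted or removed.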

Barrett’s recommendations:

  1. Remove false content. Content that is proven to be untrue should be removed from social media sites, not just demoted or annotated.
  2. Clarify the principles for content removal. Offer insight and transparency into what constitutes facts and rational argument versus the manipulation of information for disinformation purposes.
  3. Hire a senior executive who has company-wide responsibility for combating false information.
  4. Establish a more robust appeals process. Offer an opportunity for an appeal to a person or people not involved in the initial content removal decision.
  5. Step up efforts to purge bot networks. Increase efforts to eliminate automated accounts that imitate human behavior online.
  6. Retool algorithms to reduce monetization of disinformation. Doing so will diminish the financial incentive to create fake news and, in turn, its amplification.
  7. Provide more data for academic research. The platforms have an ethical and social responsibility to provide data they possess to facilitate studies and tracking of disinformation.
  8. Increase industry-wide cooperation. Establish a data exchange and offer best practices across platforms to ensure common challenges are addressed.
  9. Support initiatives for digital media literacy. Teaching people digital literacy skills and how to be more discriminating consumers of online content should remain a priority.
  10. Sponsor more fact-checking and explore new approaches to authenticate news content. Continue fact-checking efforts, a crucial first step in distinguishing truth from falsehood.
  11. Support narrow, targeted government regulation. Identify specific content regulations similar to the degree of disclosure for online political advertising currently required for traditional broadcast media.

Barrett concludes that “neither the First Amendment nor international principles protect the lies on social media.” It is essential for social platforms to step up their self-governance to ensure disinformation is not monetized or, worse, used to manipulate people and trigger violence. Importantly, humans must remain in control of the platforms, overseeing the impact of AI in all its forms. Scrutiny and transparency are key to uniting efforts to dismantle disinformation.
