Many question the boundaries of social media discourse. Is free speech protected on platforms? And what are the limits and liabilities? Because the First Amendment constrains the government, not private actors, companies like Facebook, Instagram, YouTube, Twitter, and TikTok have no legal obligation to protect an individual’s right to free speech.
They are, however, entitled to moderate content on their sites, a right the First Amendment itself protects. Each social platform sets its own community guidelines, but all five of these platforms have rules that prohibit hate speech against individuals or groups based on protected characteristics, including ethnicity, nationality, race, or religion.
And just 10 months ago, Facebook updated its guidelines to prohibit any content that denies or distorts the Holocaust. Mark Zuckerberg reflected on the change: “My own thinking has evolved as I’ve seen data showing an increase in antisemitic violence, as have our wider policies on hate speech.” In fact, the Anti-Defamation League reports that U.S. antisemitic incidents reached a historic high in 2020.
Yet despite stated guidelines and goals, the question remains: how effective are social media companies at protecting users from online hate and disinformation on their platforms? Unfortunately, the answer is: not very.
The Center for Countering Digital Hate (CCDH) works with practitioners in diverse fields to develop strategies that strengthen tolerance and democracy and counter new forms of hate and misinformation. Its new report, Failure to Protect, finds that social platforms fail to act on 84% of reported anti-Jewish hate content. All five platforms prohibit common forms of antisemitic content, defined as hateful conspiracy theories, genocide denial, dehumanization, and hate symbols.
Between May and June of 2021, the CCDH identified and reported 714 posts containing antisemitic content on Facebook, Instagram, Twitter, YouTube, and TikTok, using the platforms’ own reporting tools to measure the effectiveness of each company’s moderation process. In aggregate, the 714 posts reached a total of 7.3 million impressions.
Importantly, of the 714 posts, only 16% were acted upon: 7% of the posts were removed, and the accounts behind another 8.8% were removed, taking their antisemitic posts down with them. A further 0.1% of the posts were labeled as false content but remain on the platforms.
Further analysis of the 714 posts identified that more than two-thirds, or 477 of them, contained anti-Jewish conspiracy theories. The platforms acted on only 11.5% of these posts, significantly lower than the 16% rate for the full sample. The CCDH concludes that social platforms are far less responsive to antisemitic conspiracy theories than to other forms of antisemitic content.
In addition, 277 posts, or 39% of the sample, were associated with extremist anti-Jewish hate content. This includes Holocaust denial or minimization, explicit incitement of violence toward Jewish people, racist caricatures of Jewish people, references to the blood libel, and Nazi, neo-Nazi, or white supremacist content. Social platforms acted on 25% of these posts. Further analysis by the CCDH shows that the platforms ignored at least 80% of posts that deny or minimize the Holocaust and overlooked 62% of posts calling for violence against Jews.
Hashtags add to the problem
Hashtags also allow antisemitic content to proliferate because they direct users to more content on a topic. But platforms have little incentive to restrict their use, because hashtags help fuel the social ecosystem: more content consumption generates more traffic and more ads served. In other words, hashtags generate revenue. The 447 identified posts using antisemitic hashtags generated 3.3 million impressions across Instagram, TikTok, and Twitter.
The CCDH research clearly demonstrates the role social platforms play in amplifying hate and misinformation. Social media companies bear no liability for content posted on their platforms: Section 230 of the Communications Decency Act of 1996 grants platforms immunity for third-party content. Unfortunately, hate drives traffic, and social platforms profit from it.
The CCDH recommends several ways social platforms can correct their failure to act on anti-Jewish postings:
- Remove user groups dedicated to antisemitism.
- Ban antisemitic hashtags.
- Close accounts that send racist abuse to Jewish people.
- Hire, train, and support moderators to effectively remove dangerous anti-Jewish hate.
- Legislators and regulators should ensure platforms are liable in the same way as any other person or corporation for the harms they create.
Despite ongoing and valuable debates about regulation, legislation will not be an immediate solution. The other CCDH recommendations, however, are clearly within the platforms’ control and should be an urgent priority.