
Censorship isn’t the issue. Controlling the spread is.

September 24, 2020 | By Jason Kint, CEO – DCN @jason_kint

In a recent meeting, Věra Jourová, the European Commission's Vice-President for Values and Transparency, and Twitter CEO Jack Dorsey agreed that there should be more focus on how harmful content is distributed and shown to people, rather than on pushing for its removal. This is a critical point as we enter what may be the most volatile two months in the history of our still-young democracy.

If you don't know Jourová, she speaks from experience on civil liberties and technology. Time magazine named her to its Time 100 list last year, and she played a key role in passing Europe's General Data Protection Regulation (GDPR) privacy law. Her portfolio now includes watching over democracy and election integrity. Meanwhile, Jack Dorsey continues to lead Twitter bravely despite the political risks. Facebook, however, continues to play a game of public relations to the ongoing detriment of our democracy.

Facebook would prefer that our public debate focus on issues of free expression. Mark Zuckerberg can then symbolically drape the American flag around his shoulders and remind us of the importance of free speech while his company sidesteps the perils of its platform and dodges the thorny issues at the core of Facebook's profit model. By framing his argument this way, Zuckerberg creates a false dilemma in which anyone who supports our democracy's right to free expression must agree with him.

In this crowded theater, he not only defends the right to yell fire but lights the fires himself, and then algorithmically fans the flames so that they spread out of control. And he does so in the very same month his company threatens to block all news in Australia and to withdraw from Jourová's Europe because its profit model can't survive regulation. That is quite a statement, if you stop and think about it.

As long as Facebook keeps the focus on content takedowns rather than on minimizing or stopping amplification, it avoids the real issue with its platform. And it is certainly not going to solve it.

Yes, Facebook has become an essential utility to the world despite harboring toxic sludge

No party denies this, including Facebook. And it makes any change at scale particularly sensitive. The problem is even more acute in nations without the freedoms of our First Amendment. In those countries, the positives and negatives of Facebook are more pronounced, and reports of Facebook cozying up to authoritarian governments are even more troubling.

We need to stop debating the “censoring” of posts on Facebook

First, Facebook is a private platform and has the right to remove or promote posts as it sees fit under (you guessed it) the First Amendment. More importantly, press advocates should be uncomfortable with Facebook outright removing posts unless the information presents an immediate danger to the public. If a Facebook user wants to follow and read Alex Jones or Michael Moore, then so be it. If Jones, Moore, or anyone else wants to share questionable content (within limits around issues like inciting violence and hate), then so be it. These are consumer choices, and the statements from Jourová and Dorsey imply they tend to agree.

Reach = velocity x amplification

It has been frequently stated in policy circles that "freedom of speech" does not equal "freedom of reach." This is a clear and elegant way to illustrate that protection, whether from the government or from Facebook's free-expression bear hug, should stop at the company's decision to let a post remain on its platform. It is too simple an analysis to treat Facebook merely as a "platform" or the "21st century town hall." Being neutrally available to all is one thing. But that framing does not account for the platform's design decisions, which determine whether posts spread like wildfire or fade into history. The average person's posts simply don't travel like a Trevor Noah monologue or a statement from President Trump, and it's not just because those accounts have more "friends."

This is the argument presented in an important 2019 UK report positing that Facebook is neither platform nor publisher. If Facebook followed the traditional model of a publisher, it would commission, pay for, edit, and take responsibility for the content it disseminates. And there are already renewed efforts to make social media carry more liability. At the same time, it's not merely a platform, because it continually changes what is and is not seen, based on algorithms and its own employees' human intervention. If it were purely a platform, then the liability protection it receives might be warranted.

Content targeting suppresses counter-speech

An influential academic compared Facebook's "black box" algorithms to the opacity of newsrooms in the 1980s. This is disingenuous. As we stated in our 2016 letter to the CEOs of Facebook and Google, no one is arguing we should return to a world of gatekeepers and information scarcity. Any comparison to newspapers and television falls apart here, because social media platforms reinforce a cycle of regurgitated bias. We must recognize that the same algorithms that microtarget content to individual users also serve to suppress the counter-speech to that content. No news medium is entirely immune to bias. However, balance, integrity, and responsibility to the public are fundamental to its success, and consumer feedback and choice offer a sufficient check.

So, when Facebook executives share platitudes about free expression in response to harmful content, please ask why they continue to actively promote that harmful content with their algorithms. When they describe the labels they had to be pushed into affixing to harmful content, please ask why they haven't gone a step further and applied those labels before the posts are even visible to users. When they describe their reluctance to censor individuals, please ask why they haven't removed the tools in their product that allow for massive amplification of questionable content. In a message all too familiar to us in 2020: we need to stop the spread.
