Passed in 1996, the Communications Decency Act includes a line under its Section 230 heading that reads: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
Written by then United States Representatives Chris Cox and Ron Wyden, that line has been the shield online outlets have used to protect themselves from liability for certain kinds of content posted by users to their websites. It is important to note that Section 230 does not cover content composed by the publishers themselves (user comments, however, certainly count). Because of this, the law has become mired in the debate about online speech and the role moderation plays on online content platforms.
The statute has become a focal point for both sides of the American political spectrum—for almost diametrically opposite reasons. Republicans have claimed that it is at the root of the suppression of conservative voices on popular social media platforms. Democrats, on the other hand, question why racist or violent content remains online despite calls for increased moderation.
Resolving these different perspectives won’t be easy. But what certainly isn’t helping is the recurring problem of Section 230 being misinterpreted and misrepresented. These errors only compound the confusion over an already-complex issue. Corrections have been issued: Vox had to rewrite a piece back in May with a mea culpa, and more recently The New York Times took back a claim that Section 230 protects online publishers of hate speech (that’s the First Amendment).
Some have accused legacy media of being behind some sort of coordinated attack to undermine the power of big tech. However, Jeff Kosseff, an expert on the subject, offers another explanation.
“I think the problem, really, is just that it’s a complex issue,” he says. “And people don’t think it’s a complex issue because it’s a short law. I mean, the core of it is 26 words. But the problem here is that it’s not intuitive.”
Kosseff is a cybersecurity law professor at the United States Naval Academy and the author of The Twenty-Six Words That Created the Internet, the 328-page book that tells the story of Section 230. One thing Kosseff is pretty sure about: there’s no conspiracy. “I seriously doubt that the Sulzberger family is telling the New York Times business section to write that Section 230 protects hate speech. That seems a bit too far-fetched for me,” he says.
That’s not to say all of the media coverage has been wrong, either. Kosseff, who worked for over seven years as a reporter at The Oregonian, points to pieces in Wired and the MIT Technology Review (both of which cite him) as successfully communicating Section 230’s intricacies. Going forward, though, Kosseff recommends that reporters covering the online speech beat develop a better understanding of the statute and its history.
Although passed in 1996, Section 230 originally came about as a way to carry forward a legal principle decided decades earlier and apply it to the online medium. Since the 1959 case Smith v. California, distributors of content have enjoyed protection from liability for the nature of content they did not produce. (At that time, “distributor” meant an outlet like a bookstore.)
The need for further clarity emerged after legal battles in the early 1990s, when two internet companies were sued by people who felt they were harmed by content found on their respective services. The cases produced two different legal outcomes. The first company, CompuServe, was spared liability for content on its platform because it had decided not to perform moderation. However, Prodigy, which did edit what was found on its service, was deemed liable for the content on its site.
Net not neutral
These two distinct outcomes are likely at the root of a common misinterpretation of the law: that a content platform needs to be a neutral party to defend itself from legal claims. That is not the case. Section 230 was written so that content providers could decide what was added to their sites by third parties without the risk of legal action.
According to Kosseff, this misinterpretation could be connected to the way the legal departments of news sites have historically navigated the law in relation to features like crowdsourced projects, user-submitted reviews, and comment forums. “A huge misconception that even lawyers at news organizations have is that if you make any edits or moderate user content, you lose your Section 230 protections,” says Kosseff.
This point of confusion has manifested itself in the debate over the roles of publishers versus platforms. A May op-ed authored by GOP Senator Ted Cruz argued that Facebook risked losing its protection as a neutral platform and instead facing liability as a publisher. Except Section 230 specifically means that if you allow third-party posting, you do not count as the publisher of that content (with some exceptions, such as content that violates federal criminal law or intellectual property law). The difference between publishers and platforms is “not really a distinction under the law for Section 230,” says Kosseff.
In addition to encouraging reporters on the topic to do their due diligence, Kosseff recommends that publishers retain legal counsel with the most up-to-date knowledge. That’s the key not only to reporting correctly on Section 230, but also to managing news and other media websites with user-generated content, as the most common misunderstanding of the law can have negative consequences.
“It’s terrible for business reasons,” says Kosseff. “Because if you’re just sort of voluntarily keeping up all the worst content, you’re going to be driving readers away. And there’s no reason for you to do that.”