This week, 41 state attorneys general, along with the District of Columbia, filed lawsuits against Meta for creating highly addictive features that harmed the mental and physical health of children. The lawsuits are the latest in a series of revelations, inquiries and legal challenges focused on Meta’s allegedly misleading and negligent behavior regarding the impact of its services on children and teenagers.
It’s unusual—and significant—for so many states to unite in a bipartisan effort to hold a Big Tech company accountable for consumer harms. The coordination shows states are prioritizing the issue of children’s online safety and combining legal resources to fight Meta, much as they did in prior actions against Big Pharma and Big Tobacco.
Actively addictive by design
For years, public health organizations and consumer groups have warned about the dangers of social media use by teens and children. Key features of Instagram and Facebook have been specifically called out as harmful. Numerous studies have shown that this segment of the population is especially susceptible to harmful psychological effects from design features, such as the “like” button, which research has found to be one of the most toxic components of social media.
Meta-designed notifications are particularly effective at repeatedly drawing young consumers back into their platforms while the Meta-designed algorithm keeps them engaged in the service for as long as possible so that the company can serve microtargeted ads. Such features include “infinite scroll,” persistent notifications and alerts, and autoplay of Stories and Reels. Other studies have shown that filters and other photo-altering features increase the incidence of body image issues among teenage girls.
The states’ lawsuit alleges that Meta deployed all of these tactics and more to “discourage young users’ attempts to self-regulate and disengage with Meta’s platforms.” The states included an enlightening direct quote from Sean Parker, founding CEO of Facebook:
“The thought process that went into building these applications, Facebook being the first of them . . . was all about: ‘[h]ow do we consume as much of your time and conscious attention as possible?’ That means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that’s going to get you to contribute more content and that’s going to get you . . . more likes and comments. It’s a social-validation feedback loop . . . exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology. The inventors, creators—me, Mark [Zuckerberg], Kevin Systrom on Instagram, all of these people—understood this consciously. And we did it anyway.”
The lawsuit goes a step further, alleging that Meta misled the public about the dangers of using its services. In 2021, former Meta employee Frances Haugen came forward as a whistleblower not only to confirm that these harms were happening to children, but also to reveal that Meta executives had known about the dangers all along from their own internal studies and chose to put profits over safety. In addition, the lawsuit alleges that Meta attempted to push the public narrative in the opposite direction by “routinely publish[ing] profoundly misleading reports purporting to show impressively low rates of negative and harmful experiences.”
Wider implications to watch
The proceedings inside the courtroom will be fascinating to watch. But I will also be closely watching two things outside of the courtroom:
First, it will be telling to see whether advertisers change their buying habits in the wake of these allegations. Advertisers have known about the problems associated with social media for years, yet despite some public hand-wringing from their trade association and boycott threats, marketers’ buying habits remain largely the same today. At some point, one would expect marketers to shift their ad budgets away from financially supporting toxic content and toward premium environments that better match the brand identity they want to cultivate.
Second, I have to wonder whether this may spur Congress to finally pass meaningful privacy or kids’ safety legislation. Every time there is a scandal or lawsuit involving one of the big tech platforms, there are renewed calls for legislation to regulate how they collect and use consumer data, to impose liability for the harms occurring on their services, and to create rules for how algorithms can be deployed, among other things.
As a result of press coverage of these allegations, some of these bills might even get approved by the relevant committee(s). To date, however, none has been brought to the House or Senate floor for a vote. This Congress is particularly dysfunctional, but there is a decent chance that public officials heading into an election year might be shocked enough to coalesce around putting some guardrails on social media companies. Parents can be an influential voter base.
Stepping back a bit, all of these revelations about the dangers of social media and the abhorrent behavior of social media companies continue to fuel a global conversation about the role and impact of data, algorithms, surveillance advertising and unfiltered content. Lawsuits and legislation, which are getting smarter and more focused, will continue to draw headlines and potentially lead to liability for the worst actors. In the meantime, I am going to go give my kids an extra hug.