Research findings show that ad delivery optimization can often skew the exposed audience by gender or race. Specifically, prior research found that the Facebook algorithms used to optimize a target audience discriminated in their delivery of job advertisements. New research, Auditing for Discrimination in Algorithms Delivering Job Ads, expands on this work to present an auditing methodology that distinguishes gender bias introduced by the platform from differences in audience qualifications.
The auditing methodology analyzes the delivery skew of the algorithmic optimization. Further, it detects whether the skew is due to ad targeting qualifications or to the platform’s optimization and learning process. Importantly, it offers insight into social platforms’ black box algorithmic systems.
Proprietary ad platforms, algorithms, and data make it difficult to audit Facebook and LinkedIn. To overcome this, researchers created an external auditing process. Facebook’s and LinkedIn’s custom audience feature allows advertisers to build audience targets on the platforms. This offers the ability to infer the gender of the ad recipients for platforms that do not provide post-delivery statistics.
The authors, Basileal Imana, Aleksandra Korolova, and John Heidemann, registered as advertisers on both Facebook and LinkedIn. They ran ads for real employment opportunities on both platforms and audited the results.
To test for bias in the algorithmic choices of the platforms, the researchers ran sets of ads and compared audience delivery. The ads in each set must have similar audience requirements, so that a consistent assessment qualifies (or disqualifies) each audience member for every ad in the set.
Each set of ads must also correspond to a genuine real-world bias. Comparing the delivered audience against an actual, known bias offers an important reference point. Real-life bias and non-bias factors constantly inform the algorithms in a platform’s continuous learning process. Therefore, if there is a significant skew in the platform’s audience delivery, it likely stems from the optimization process. In other words, the system overrides the requested audience requirements to supply its preferred, optimized audience.
The researchers set up three tests to compare audience delivery: pairs of ads for delivery drivers, sales associates, and software engineers. The ads ran on both LinkedIn and Facebook. Campaign goals were identical: conversion (clicks), to maximize the number of job applicants, and reach, to increase audience exposure.
The first test included ads for delivery drivers for Domino’s and Instacart, both with identical job requirements. Note that, in practice, Domino’s has a higher male composition of drivers while Instacart has a higher female composition.
The test results show evidence of a statistically significant gender skew in audience delivery on Facebook but not on LinkedIn. Facebook’s audience delivery is in line with the actual male skew of Domino’s, even though the campaign requested a gender-balanced audience. Facebook’s algorithmic optimization, trained on real-life data, adjusted the campaign’s audience delivery.
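A skew test of this kind can be sketched with a standard two-proportion z-test, comparing the share of women reached by each ad in a pair. This is a minimal illustration of the statistical idea, not the paper’s actual method or data; all counts below are made up.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions.

    x1/n1: women reached out of total impressions for ad 1
    x2/n2: same counts for the paired ad with identical targeting
    Returns (z, p_value); a small p-value indicates a significant skew.
    """
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                 # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error under H0
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Illustrative numbers only: 4,300 women out of 10,000 impressions for
# one ad vs. 5,500 out of 10,000 for its pair.
z, p = two_proportion_z_test(4300, 10000, 5500, 10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With identical targeting and identical qualification requirements, a gap this large would be hard to attribute to anything other than the platform’s own optimization.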
This unique research offers a new auditing methodology to detect biased audience delivery on social platforms. It offers insight into how ad platform algorithms adjust for platform objectives and override the advertiser’s requirements.
Further, testing on Facebook and LinkedIn demonstrates that the methodology is applicable to multiple platforms and that not all social platforms produce biased results. Importantly, this research offers an auditing solution to protect marketers from unwanted biases baked into social platforms’ black box algorithmic systems.
“Don’t worry about the world coming to an end today. It is already tomorrow in Australia.”
—Charles M. Schulz
For those focused on where the future of the internet media economy is headed, all eyes turned to Australia in recent weeks. And, despite a last-minute PR spin campaign filled with half-truths and outright deception by pundits around the world vying to influence the debate, the end result is a new law, the News Media and Digital Platforms Mandatory Bargaining Code, which foreshadows the future for Google and Facebook.
We’ve had enough hot takes. It’s time to kill off, once and for all, the disinformation, misinformation, and talking points of the infamous duopoly, which I’ll try to do here by busting 10 myths. (Happy to talk through any of these further. Just reach out: publicly or privately.)
1. The bargaining code resulted from an arbitrary process.
This couldn’t be further from the truth. The Australian government undertook a thorough, multi-year process to establish this new law. Importantly, its competition regulator (ACCC) spent nearly two years investigating the dominance of Google and Facebook. The result was a 600+ page report that clearly demonstrates the imbalanced bargaining power held by the duopoly. At the same time, the Australian government – with support from all political parties – formulated a public policy decision about how to better fund journalism. The only assumption made was that the press is critically important to democracy. This should not be a controversial assumption.
2. The inventor of the web has called this a “link tax” that will break the internet.
Yes, Tim Berners-Lee wrote a letter rightly expressing his concern that if it was possible to require payment for links throughout the web, it could break the internet. In the letter, he often hedged and clearly wasn’t focused on the specifics of the law, which does not require payment for links. We’ve seen this “link tax” talking point high on Google’s list in the past. And it’s a galvanizing force for defenders of the open web – as it should be. The law does mention linking but only in the context of describing what a digital platform does by publishing, curating, and linking to the news. It’s narrowly focused on the two platforms and in no way does it suggest a platform should be required to compensate for its links to news outlets.
3. Facebook won key concessions at the final hour.
The concessions for Facebook were, in fact, relatively minor. When Facebook pulled news off Australian users’ feeds, its goal was to trigger global outrage, shake up the press cycles, and turn the globe against Australia. Then, by throwing its PR might behind some elegant spinning about a great compromise, Facebook saved face. That’s all it did.
The final concessions included the addition of a couple of windows of time (measured in months) in which Facebook will lobby and protest about having to pay for news. The “concessions” also included two changes that clarify the mechanics of the code.
4. This law is a gift to Murdoch and hurts everyone else.
Yes, News Limited has a lot of influence in Australia as its leading news company. In fact, as the ACCC Chairman noted, 80-90% of the journalists in Australia work for one of three companies (News, Nine, ABC). However, the idea that Google or Facebook would negotiate with these three companies and hang the other 10% of the market out to dry seems very unlikely considering the modest amount of additional funds it will take to round out the rest of the industry.
Having spent a significant amount of time on Australia’s 600+ page report, I would suggest that the law is much more clearly in the camp of increasing bargaining power for all journalism. It’s hard to argue that any news publisher with more than $150k in revenue per year isn’t better off with this law in place. Moreover, the law also allows for publishers to collectively bargain if they prefer. That does seem likely if they’re not getting what they need from Facebook and Google.
5. News Corp’s global deal with Google will result in settlements of antitrust lawsuits.
This anti-antitrust argument is the silliest thing I’ve heard. There is no greater fallacy in digital media right now than attributing the global antitrust scrutiny on Google and Facebook to one or even just a few parties. The antitrust lawsuits currently filed have the weight of the U.S. government, 49 of 50 states, and both parties in Congress.
The cases are robust, particularly the Texas-led advertising tech case against Google, which mirrors some of the work of the ACCC and of Congress. Another comes from the CMA, the UK’s comparable regulator. The Texas case also alleges a Section 1 charge of bid rigging between Facebook and Google. No market regulator walks away from these cases based on the whims of one complainant. The work in Australia has only added weight to these cases.
6. Publishers in Europe should be celebrating.
Globally, there is a lot of positive reaction to the work done in Australia. However, it’s also notable that the European market has been working for even longer on better funding professional content. In successfully passing an updated copyright directive, they’ve taken an approach that establishes additional rights for publishers through a publisher’s “neighboring right.”
Importantly, the European approach is not restricted to just news. It covers all content including snippets offered on Facebook and Google. France was first to bake this new right into law. Google responded by trying to avoid paying for anything that they’ve historically taken for free.
They’ve even invented a new product offering, Google News Showcase, to bury their payments and bundle in all rights needed. This minimizes any increased bargaining power for publishers, which has caused even more scrutiny. This opaque bundling of payment for rights by Google and Facebook keeps popping up wherever they face regulatory threats. If payment for snippets isn’t clearly delineated, and the financial terms aren’t transformative, the EU is likely to view Australia’s new law as a missed opportunity.
7. Government will set an arbitrary price for platforms to pay for news content.
The reality is that Australia has come up with a solution that uses market forces by requiring negotiated deals with publishers ahead of a mandatory bargaining code or an arbitration process. It uses a clever “final offer” process (also known as “baseball arbitration”) to finalize deal terms. In both cases, the government recognizes its weaknesses in over-regulating a fast-growing digital marketplace. Instead, it leverages its antitrust enforcement to create a carrot and then a stick to get companies benefiting from a gross imbalance in bargaining power to the table to properly and quickly negotiate.
8. A straight platform tax would be a better solution.
The simple problem with a straight tax is all content would need to be “treated equally.” A click on Breitbart would have the same value as a click on The Wall Street Journal. The government would then divvy up a pot of money between everyone. This creates all sorts of uncomfortable government leverage over news. And one can only imagine how they would choose to split the loot. Think about the market incentives if they divided it up based on monthly uniques or page views. Better to push negotiations back into the market where intangibles such as brand, heritage, trust, consumer perception, and scoops have significant value.
9. This is unique to Australia and won’t translate to other countries.
Every lawmaker in Canada, Europe, the U.K., and U.S. who is focused on these issues will draft off Australia. Arguably, this was the biggest concern for Google and Facebook. They hoped to limit discussion to an island on the other side of the world. Our global market no longer works this way. Everyone learns from each other. Despite Facebook and Google’s ability to leverage their global dominance to protect their fortresses through trade deals, lobbying, and ducking lawmakers, the whole world is catching up to them.
Also, there are major allies in these fights to support the free and plural press. Microsoft, one of the few companies larger than Google or Facebook, has aligned with publishers on the new policies in Australia and Europe. (Forgive me, as Microsoft’s support evokes the classic “That’s not a knife” scene in Crocodile Dundee.)
10. Facebook and Google have pledged $1 billion each to news publishers so we should be happy.
Together, these two companies will easily surpass $250 billion in global advertising revenues in 2021 without even participating in China. As of now, they’ve pledged $1 billion each towards journalism over three years. Thus, Google and Facebook are pledging barely 0.2% of their global advertising revenues towards journalism. Facebook’s protesting of payments was evidence in itself for Representative Cicilline to state that the company “is no longer compatible with democracy.” (And I tip my hat to the publisher that flatly stated that Facebook’s offer was not enough.)
These are two globally-scrutinized companies which pride themselves on moonshots. Yet they have failed to properly address how their algorithms help spread misinformation and disinformation. This has led to genocide in Myanmar, an insurrection at our Capitol, and health misinformation causing untold illness and death worldwide … to name just a few “unintended consequences.”
The future is now
The simple fact is that Facebook prefers to pay into journalism no more than it does for fake news from Macedonia, while continuing to grow its nearly $100 billion per year business of surveilling and microtargeting citizens with ads against the cheapest engagement available. They’ve devalued context. They’ve devalued facts. And they’ve devalued journalism for profits. In Australia, we see democracy fighting back.
Australia’s law has been endorsed by all major political parties in a representative democracy as a means to better fund journalism. Importantly, though this was rarely discussed, the code has a one-year review period to see how it’s working. If you listened too closely to American pundits the last few weeks, you would have thought this was the end of the open Internet – hypocrisy considering the closed platforms of those who shaped it.
The law prevailed. The world didn’t end. In fact, it’s already tomorrow in Australia. They are ahead on this one, and there is a lot we can learn from it.
Social media continues to grapple with the spread of misinformation on their platforms. And consumers know this. Regardless, they continue to use social media as a primary news source. According to the most recent Pew Research Center survey, more than half of U.S. adults (53%) report that they get their news from social media “often” or “sometimes.” The survey was taken by nearly 10,000 U.S. adults.
Facebook ranks highest (36%) as the number one news source consumers use regularly among 11 social platforms. YouTube ranks second at 24% and Twitter ranks third with 15% of adults regularly getting their news there. Fewer consumers say they get their news regularly on Instagram (11%), Reddit (6%), Snapchat (4%), LinkedIn (4%), TikTok (3%), WhatsApp (3%), Tumblr (1%), and Twitch (1%).
Interestingly, despite the fact that they often find their news on social media, consumers question the accuracy of the news they get on these platforms. Approximately six in 10 consumers (59%) say that they expect the news on social platforms to be largely inaccurate. Unfortunately, the data shows little change over the last three years. Even after two congressional hearings, there’s still an abundance of misinformation about vaccines, Covid-19, and the 2020 presidential election on social media.
Social media does little to help consumers interpret the news. In fact, less than one-third (29%) of consumers believe the information they received on social platforms helps their understanding of the news. Further, 23% believe the news on social media leaves them more confused and 47% report that it doesn’t make much of a difference.
More women than men (63% vs. 35% and 60% vs. 35%, respectively) use social media to access their news. However, Reddit has a distinctly different demographic: among its regular news consumers, two-thirds are men (67% vs. 29%).
Consumers use social media as an easy and accessible path to news and information. However, this Pew study clearly shows consumers are aware of misinformation on social media. Increased awareness is a good thing and an important step to expose and defuse misinformation.
Social platforms continue to try to combat misinformation with fact-checkers and other programs. Twitter launched a new program, “Birdwatch,” which allows Twitter users to comment and provide context on tweets that they believe are misleading or false. Unfortunately, none of these programs are winning the fight against misinformation. A recent investigation of Facebook found 430 pages with 45 million followers monetizing misinformation with Facebook tools. Clearly, more needs to be done to stop the dissemination and monetization of misinformation on social platforms.
Held virtually and expanded to five days, the 2021 edition of the members-only DCN Next: Summit (February 1-5) was certainly unlike any that came before. Fittingly, CEO Jason Kint kicked things off by reflecting on all that has changed over the past year and, perhaps more importantly, what has not.
“Publishers have been covering three of the biggest stories of our generation, all intersecting at the same time,” he said. “Your ability to stay true to your brands and to the public trust, despite personal and professional obstacles, has been remarkable.”
Amid all of this, Kint reminded attendees that the industry will need to keep its priorities straight to fuel a stronger digital media marketplace. Indeed, a broad theme of the event was the many ways publishers are adapting to shifts accelerated by the pandemic by deepening their direct relationships with audiences.
Platform power plays
Constellation Research founder and chairman Ray Wang expanded on that topic in the opening session, an interview by BBC correspondent Larry Madowo. Noting increased competition from outside the industry, Wang called for greater cooperation among media companies.
“What we have is a fracturing in the marketplace, which is making it very hard to compete with the digital giants,” he said. “In order to succeed, you have to band together.”
Axel Springer CEO Mathias Döpfner told Axios media reporter Sara Fischer that the “immensely powerful position” of tech platforms will need to be addressed by regulators. At the same time, he shared an optimistic outlook for the future of journalism. Unlike the print-centric business he took over 20 years ago, digital journalism carries lower costs, he said, allowing media companies to invest more heavily in editorial.
“You have no deadline. You have unlimited space,” Döpfner said. “And you can combine all aesthetic forms of journalism. It can be video, it can be audio, it can be text, it can be all combined. I think we are still in the early days of digital journalism and its creative potential.”
Monopolies and media models
Döpfner added that there’s a future for both subscription- and ad-supported journalism on the web, and that many organizations will continue with a mix of both. The future of advertising, however, depends on the role of platforms.
On the contrary, NYU marketing professor Scott Galloway said the key to survival for media companies will be subscriptions. He said that giving content away for free to “innovators and algorithms” was “the biggest mistake journalism ever made.”
Interviewed by Henry Blodget, the CEO of Axel Springer-owned Insider Inc., Galloway added that regulators should further address platforms’ data collection capabilities to mitigate their harmful effects.
POLITICO antitrust reporter Leah Nylen and Yale economist Fiona Scott Morton then explored potential regulatory remedies to the anti-competitive practices of tech companies. Scott Morton encouraged media companies to help educate regulators on the impact of “dominant advertising intermediaries,” such as Google.
“These markets for digital advertising are not something that most people understand,” she said. “It requires effort on the part of the affected parties to help move the conversation forward and push regulators in a direction that’s good.”
“There’s plenty of room for other digital journalism outlets to survive and thrive,” said New York Times CEO Meredith Kopit Levien.
“We’re still in the early days of the pay model. It wasn’t that long ago that everybody said things like ‘digital news wants to be free.’ Some of our journalistic competitors are having great years for subscriptions. We look at all of that as making a market.”
To build on the 2.3 million digital subscriptions the Times sold in 2020, Kopit Levien said the outlet will be investing in covering live and developing news. Additionally, she suggested that publishers should work to reduce their dependence on third-party data to help create better digital experiences for subscribers.
Meeting audiences whenever, wherever
CNN chief media correspondent Brian Stelter sat with CBS News president Susan Zirinsky for a discussion on how the pandemic has accelerated shifts in the TV news business. Gone are the days of holding major scoops or interviews for primetime, Zirinsky said. Even broadcast news must adapt to a 24/7, cross-platform model.
“We want to give people facts,” Zirinsky said. “We want to share information. This is really what it’s about: being on every platform that is available, taking our unique content and putting it in as many places as a consumer is.”
One of those rising platforms, audio, was the topic of conversation between Gimlet Media head of content Lydia Polgreen, Pineapple Street Studios co-founder Jenna Weiss-Berman, and Recode’s Peter Kafka.
While advertising remains a lucrative source of revenue, Polgreen said the medium needs some advancement in terms of measurement and audience-based selling, similar to other formats. Weiss-Berman added that the mechanisms for connecting ad buyers with content creators need development. Both agreed that there is still tremendous room for growth. The next big challenge will be reaching people who don’t currently listen to podcasts.
“If you look at the research, podcast listening has tripled since 2014, in terms of share of time, but only from 2% to 6%,” Polgreen said. “In a world where audio is completely on-demand, the possibilities are pretty endless.”
The future of media and journalism
Elsewhere on the program, Snap CMO Kenny Mitchell and Clubhouse CEO Paul Davison each explored growth strategies for their respective platforms. They also touched on the importance of creator relationships and the intersection of content and community.
Julia Angwin, editor-in-chief and founder of The Markup, took attendees behind the scenes of The Atlantic’s highly successful COVID tracking project. Staff writer Alexis Madrigal, who co-founded the project, reflected on the many challenges involved in merging numerous disparate sources of data to meet a critical need for information in the early months of the pandemic.
Angwin noted that the project exemplifies the tangible benefits that journalistic endeavors can provide to the public, particularly when providing information that might be “politically inconvenient.”
On the final day of the Summit, Stacy-Marie Ishmael, editorial director at The Texas Tribune, led a lively conversation with 2PM Inc. founder Web Smith and The Washington Post’s VP, commercial, Jarrod Dicker, on the future of media. In line with the trends, the discussion largely focused on the rise of independent creators.
“Twitter and other platforms have enabled individual people to build their own reputation. It’s created an entirely new landscape,” Dicker said. “Creators can see what their individual value is. I think that’s a change in the discourse.”
New year, same values
In closing, Kint said that, despite adapting well to a virtual event, he hoped to see everyone back in Miami for the 2022 DCN Next: Summit. In the interim, he advised those in attendance to focus on three key things: strengthening bonds with audiences and partners, understanding the core needs of both, and emphasizing agility in response to change.
“Every member of DCN has a direct and trusted relationship with their users and advertisers,” he said. “Our Summit is the one place where, in the comfort of a closed-door environment, surrounded by others who share our values, we can also share our successes and vulnerabilities.”
Public policy debates over consumer privacy and platform liability will feature prominently in 2021. Some are even hopeful that policymakers can reach bipartisan agreement on solutions. These are two important issues that I want to explore. However, I wonder if they aren’t the byproduct of a bigger problem.
Consumer privacy: A policy patchwork
One could argue that the digital advertising industry has been “regulated” (even if enforcement was less than robust) since 2010 when the industry’s self-regulation group, the Digital Advertising Alliance (DAA), rolled out its AdChoices program. In 2018, Europe began enforcing the General Data Protection Regulation (GDPR). In 2020, the California Consumer Privacy Act (CCPA) came online followed by the November passage of the GDPR-like California Privacy Rights Act (CPRA).
Against this alphabet soup of patchwork regulation, we may be reaching a tipping point. For one thing, more states are expected to pass consumer privacy laws in 2021. Even with pandemic-altered legislative calendars, 16 states nearly passed laws in 2020.
Additionally, Congress has held countless hearings over the last two years to investigate big tech’s massive data collection operations. Those hearings are sometimes painful to watch, but they are serving to educate members of Congress, who appear to be much more knowledgeable now than they were a few years ago. (Remember when one of them asked Mark Zuckerberg how Facebook makes money? Oy.)
As further evidence of an increasingly savvy Congress, there is a bipartisan group of Senators quietly negotiating to craft a national consumer privacy framework. From what I’ve seen and heard, their approach is fairly solid. With slim Democratic majorities in both houses of Congress, this kind of bipartisan approach is the only way that any meaningful privacy law can get passed. However, the deck may be stacked against them. It is difficult to move major legislation with slim majorities in the House and Senate because the margin for error is very small.
All that said, the California laws (CCPA and eventually CPRA) are likely to serve as the de facto national standard. Many companies already apply those laws nationwide, not just for California residents. Besides which, most of the big tech giants are based in California. While the CCPA was a strong first law designed to give consumers more control over how their data is collected and used, CPRA is directly targeted at curbing Google and Facebook’s massive data collection and profiling operations.
GDPR has a similar focus. However, Google and Facebook have employed creative compliance strategies that have allowed them to temporarily evade a direct hit to their businesses. The big question is whether European and California regulators can force big tech companies into finally complying with the spirit of these consumer privacy laws. Fines are fine. But the laws were actually intended to empower and protect consumers.
Section 230: A Tale of two parties
Often referred to as “The 26 Words That Created The Internet,” Section 230 became a target of both political parties in 2020. Prominent Republicans and Democrats — including each party’s Presidential nominee — have called for the elimination or massive overhaul of Section 230. And yet Congress is not all that close to resolving anything.
The problem is that each party’s concerns lead them to propose different solutions. Democrats and Republicans both agree that big tech platforms have too much market power. Hence, the flurry of antitrust lawsuits filed by a Republican Department of Justice (and likely to be carried forward by a Democratic Department of Justice) and a bipartisan flotilla of state attorneys general.
With regard to Section 230, however, Democrats criticize tech platforms for not taking action quickly enough to combat disinformation, harassment, and demagoguery. Republicans, on the other hand, allege that big tech companies use the legal shield of Section 230 to suppress conservative speech. Essentially, Democrats want tech companies to do more while Republicans want tech companies to do less. These fundamentally different viewpoints are likely to make it difficult for Congress to agree on any big changes to Section 230.
Big picture, bigger issue
What’s interesting to me is that the public policy debates around consumer privacy and Section 230 are largely driven by dominance and anticompetitive behavior of big tech companies. I wonder if we would even be having these debates if Google and Facebook faced meaningful competition.
The aforementioned alphabet soup of consumer privacy regulations was developed to address consumer concerns about the ubiquitous and non-transparent collection of consumer data for use in behaviorally targeted ads. The two most dominant players in the digital ad industry, Google and Facebook, have built massive ad targeting businesses (basically the digital equivalent of junk mail), which are fueled by the collection of consumer data across the web and our lives. The duopoly, as we have called Google and Facebook for years now, accounts for 70 to 80% of the growth in the digital advertising marketplace. Much of this advertising is delivered on their own properties regardless of where they mined the data.
With regard to Section 230, the original intent of the law was to incentivize companies for making “good faith” actions to clean up their services. However, without meaningful competition among digital platforms, those companies are merely incentivized to protect themselves against legal action as opposed to competing for consumer loyalty.
Anticompetitive by design
Imagine a world where Facebook and Instagram were separate companies competing for consumers. I think they would be vying to prove which company would be the best at snuffing out disinformation, stamping out illegal activity, and generally providing the most trustworthy service.
Significantly, when Facebook was first launched, it touted a super strong set of privacy protections and controls to differentiate from the established market players at the time. But not now. The “like” button was originally designed as a user signal to show content interests. It has become an opaque means to track people’s movement around the web. Facebook’s business model is so reliant on tracking users it ran a national ad campaign last month to publicly pressure Apple to blink on its plan to restrict the use of its advertising identifier (IDFA). And let’s not forget that Facebook only reluctantly and belatedly de-platforms hate groups and removes disinformation.
If there were meaningful competition, big tech platforms would behave very differently within the industry and for consumers. The latest bit of evidence that Google and Facebook agreed to cooperate rather than compete with each other was particularly appalling. The two dominant players in digital advertising decided to carve up the market for themselves while icing out everyone else. The fact that the agreement exists at all is quite amazing. Perhaps more amazing is that these two companies had enough chutzpah to even engage in the negotiation in the first place. In many ways, it merely confirmed what many industry insiders already suspected. It’s the Duopoly’s world and we’re just living in it.
While we engage in meaningful and important debates about consumer privacy and the responsibilities of companies in a digitally-dominated world, let’s not lose sight of the fact that the competitive landscape is heavily tilted in favor of the big tech companies. The antitrust lawsuits and regulatory scrutiny faced by Google and Facebook are hugely important for restoring a healthy dose of competition, which could alleviate some of the downstream public policy concerns.
In a recent meeting between the Vice President of Values and Transparency of the European Commission, Věra Jourová, and Twitter CEO Jack Dorsey, they agreed there should be more focus on how harmful content is distributed and shown to people rather than pushing for its removal. This is a critical point to be made as we enter what may be the most volatile two months in the history of our still-young democracy.
If you don’t know Jourova, she speaks from experience on civil liberties and technology. Time Magazine named her to their Time 100 last year and she played a key role in passing the General Data Protection Regulation (GDPR) privacy law in Europe. Her responsibility now includes watching over democracy and election integrity. Meanwhile, Jack Dorsey continues to lead Twitter bravely, despite the political risks. Facebook, however, continues to play a game of public relations to the continued detriment of our democracy.
Facebook would prefer that our public debate is focused on issues of free expression. Mark Zuckerberg can then symbolically drape the American flag around his shoulders and remind us of the importance of free speech while his company sidesteps the perils of his platform and dodges the thorny issues at the core of Facebook’s profit model. In positioning his argument this way, Zuckerberg creates a false case in which supporters of our democracy’s rights to free expression must agree with him.
In this crowded theater, he not only defends the right to yell fire, and to light the fires, but also algorithmically fans the flames so that they spread out of control. And he does so in the very same month his company threatens to block all news in Australia and to withdraw from Jourova’s Europe because their profit model can’t survive regulation. That is quite a statement if you stop and think about it.
As long as Facebook focuses on the issue of content take-down versus minimizing or stopping its amplification, they avoid the real issue with the platform. And they are certainly not going to solve it.
Yes, Facebook has become an essential utility to the world despite harboring toxic sludge
This is undeniable by all parties, including Facebook. And it makes any scaled changes particularly sensitive. This is especially problematic and impactful in nations without the freedoms of our First Amendment. For these countries, the positives and negatives of Facebook are even more pronounced. And reports of Facebook cozying up to authoritarian governments are even more troubling.
We need to stop debating the “censoring” of posts on Facebook
First, Facebook is a private platform and has the right to make decisions to remove or promote posts as it sees fit according to the (you guessed it) First Amendment. More importantly, press advocates should be uncomfortable with Facebook outright removing posts unless the information presents an immediate danger to the public. If a Facebook user wants to follow and read Alex Jones or Michael Moore, then so be it. If Jones, Moore, or any of these folks want to share questionable content (within limits around issues like inciting violence and hate) then so be it. These are consumer choices and the statements from Jourova and Dorsey imply they tend to agree.
Reach = velocity x amplification
It has been frequently stated in policy circles that “freedom of speech” does not equal “freedom of reach.” This is a clear and elegant way to illustrate how protections from the government or from Facebook’s freedom of expression bear hug should stop at the company’s decision to permit a post to survive on its platform. It is too simple an analysis to treat Facebook merely as a “platform” or the “21st century town hall.” Being neutrally available to all is one thing. However, it does not account for the platform’s design decisions that impact whether posts spread like wildfires or fade into history. The average person’s posts simply don’t travel like a Trevor Noah monologue or a statement from President Trump, and it’s not simply because they have more “friends.”
This is the argument presented in an important 2019 UK report positing why Facebook is neither platform nor publisher. If Facebook were the traditional model of a publisher, it would commission, pay for, edit, and take responsibility for the content it disseminates. And there are already renewed efforts to make social media carry more liability. At the same time, it’s not merely a platform because it continually changes what is and is not seen, based on algorithms and its own employees’ human intervention. If it were purely a platform, then the liability protection it receives might be warranted.
Content targeting suppresses counter-speech
An influential academic compared Facebook’s “black box” algorithms to the opacity of newsrooms in the 80s. This is disingenuous. As we stated in our 2016 letter to the CEOs of Facebook and Google, no one is arguing we should return to the world of gatekeepers and information scarcity. Any comparison to newspapers and television falls apart here as social media platforms reinforce a cycle of regurgitated bias. We must recognize these same algorithms that microtarget content to individual users also serve to suppress the counter-speech to that same content. No news media is entirely immune to bias. However, balance, integrity, and responsibility to the public are fundamental to their success. And consumer feedback and choice offer a sufficient check on this.
So, when Facebook executives share platitudes about free expression in response to harmful content, please ask why they continue to actively promote this harmful content with their algorithms. When they describe the labels they had to be pushed into affixing to harmful content, please ask why they haven’t gone a step further and put these labels on the harmful posts before they’re even visible to users. When they describe their reluctance to censor individuals, please ask why they haven’t removed the tools in their product that allow for massive amplification of questionable content. In a message all too familiar to us in 2020, we need to stop the spread
It’s no surprise that social media is where consumers, especially young adults, get their news today. According to a new study from Pew Research Center, 18% of U.S. adults use social media for political and election news. Unfortunately, this cohort lacks both depth and breadth in its political news consumption.
This study finds limited exposure to election news for those using social media as their political news source. Only 8% of consumers who receive most of their political news from social media report that they are following 2020 election news “very closely”. Both broadcast news followers and print news followers are more than three times more likely to follow election coverage closely (37% and 33%, respectively) than social media news followers.
Further, consumers who use social platforms as their main source for political news appear to pay much less attention to news in general compared to followers of other news sources. In fact, those who get their political news from cable TV are nearly twice as likely as those in the social media cohort to follow (“fairly closely”) election candidate news (70% vs. 36%).
This Pew report also analyzed data from previous studies, such as Pathways & Trust in Media. This provides insight into how well consumers who use social media as their main news source are informed on current political news. The analysis found that those in the social media group were among the least likely to have heard a lot about each of six current event stories. Further, they were also among the most likely to have heard no news on each of the six events.
Another analysis in this report identified 29 fact-based news-related questions in five Pew studies across the last nine months. The questions included topics on the presidential election, the economy, the political parties, Donald Trump’s impeachment, and the coronavirus outbreak. Pew’s analysis of the 29 questions shows a high correlation between the least accurate responses and consumers who rely mostly on social media for political news.
On average, 43% of social media news consumers answered correctly compared to 63% among those who rely mostly on news websites or apps and 56% among those who turn mostly to network TV. The only comparable group to social platforms are adults who watch local television (37% of correct answers).
Unfortunately, consumers who rely on social media as their main source of political news are more likely to be exposed to false information. In fact, U.S. adults who get most of their news from social media are more likely than others to hear false information or unproven claims. With multiple conspiracy theories centered around the Covid-19 pandemic, about a quarter of U.S. adults who get most of their news through social media report that they heard “a lot” about Covid-19 conspiracy theories. In addition, about eight-in-ten (81%) report they heard at least “a little.” Hearing conspiracy theories is much more common among those who use social media than among those who use any of the other six platforms (broadcast TV, cable TV, print, etc.) for their political news.
The Pew study shows that those who rely on social media as their main source for political news tend to be less well-informed. Social media does little to help educate its users on how to identify trusted news sources. Consumers need help to navigate through feeds where premium and trusted news brands sit right alongside misinformation and disinformation.
Publishers establish editorial guidelines to provide a common foundation for journalists, creators, and producers. These guidelines provide a common language to identify an editorial framework and boundaries. Guidelines are often questioned, reevaluated, and updated in a process that allows the editorial voice to evolve over time. Editorial guidelines also present a check and balance system for standards and content moderation, which in turn creates a safe space for advertising.
Unfortunately, the editorial guidelines of social media platforms appear to be a complex maze of mixed messages. This results in unhappy content producers – given opacity around monetization of some content – and risky business for marketers.
One problem is that they frequently include different governance strategies for different creators. The result is that creators struggle to remain inside the viable boundaries for monetization.
YouTube offers a partnership program for content creators and shares advertising revenue with them. Plain and simple, the more user-generated content, the more views, which allows YouTube to collect user data to support targeted advertising. And the more ads served, the more revenue generated.
The YouTube Partnership Program (YPP) is intrinsic to its revenue model. Essentially, it is a form of unpaid labor that generates enormous revenue for the platform. YouTube’s partnership program encourages users to make more content with an offer of compensation as ads run against that content.
YouTube’s content creators range from amateurs and professionalized amateurs to legacy media organizations and YouTube’s contracted producers of original content. One particularly tricky aspect is that each is held to a different standard and entitled to a different monetization offering.
A few other social platforms developed similar programs, but YouTube’s is by far the largest. According to Caplan and Gillespie’s research, YouTube’s lack of clarity and complicated rules appears to do little in the way of effective content moderation or fair compensation. Further, the fact that YouTube works with multiple creator tiers fuels issues of inconsistent treatment. Consider how editorial standards would impact:
amateur creators who are not dependent on revenue,
creators who are dependent on revenue
professionals building their reputation for secondary distribution, or
media institutions who partner for distribution power.
YouTube’s policies vary, with little explanation, in dealing with content that violates their standards and practices around sexual content, violence, harassment, hate speech, or misinformation. Actions include content demonetization, removal of individual videos, or the suspension of entire accounts. They might also place videos behind age barriers or include interstitial warnings indicating graphic content. YouTube may also remove videos deemed to infringe copyright, violate privacy, or simply be spam.
Caplan and Gillespie summarize the YouTube problem:
YouTube’s stated values as an open platform of expression are in direct conflict with its cautiousness regarding acceptable content and its financial and algorithmic incentive structure.
YouTube’s governance offers a different set of rules for different users. These range from different material resources and opportunities for creators to different procedural protections and different expectations of fairness.
Given the ambiguity in the guidelines, creators develop their own theories as to why their content is demonetized.
Caplan and Gillespie offer examples of YouTube’s randomness in enforcing its standards. For example, it appears that YouTube determines participation in YPP based on an algorithmic mix of popularity, engagement, and propriety. However, according to many independent content creators, YouTube also allows inappropriate content to circulate and amplify based on popularity and the ability to generate revenue for the company.
YouTube’s participatory video culture does not bode well for advertisers’ demand for quality and predictability. It appears to be a system based on rewarding audience size and celebrity stardom. Unfortunately, given this revenue model, when YouTube adds new layers to its already complex labyrinth of standards, it fails its creators and fails to effectively moderate content.
Content moderation is a big job for social platforms (e.g., Facebook, Twitter, YouTube). Much of the content posted on these sites is made up of user-generated postings, pictures, and videos. However, there is a sizeable amount of content from rogue publishers to contend with as well. At present, it is common practice for social platforms to outsource the role of content moderation.
NYU Stern Center for Business and Human Rights’ new report, authored by Paul Barrett, focuses on Facebook and questions its strategy of outsourcing content moderation. Facebook, by far, outsources the largest number of moderators of all the tech platforms. Barrett makes the case that content moderation is central to social platforms’ business. Thus, as with other core business functions, Facebook should make this vital role that of a full-time company employee. To improve moderation, Barrett contends that Facebook must bring moderators in-house and increase their number from the current 15,000 to 30,000.
According to Barrett, Facebook’s outsourcing was a purposeful decision. Their reasoning goes beyond the cost of bringing these employees in house. It is also a logistical – even strategic – decision to outsource moderation.
Sarah Roberts, an expert on content moderation at the University of California, comments that Facebook and other social media companies outsource moderation to minimize its level of importance. Roberts refers to this as “plausible deniability.” She explains that the work is mission-critical, yet full-time employees don’t handle it directly. The company intentionally places a physical distance between the problem and its staff.
Facebook contracts third-party vendors, which hire temporary workers located at 20 sites worldwide. With the assistance of AI, this army of contract-moderators sift through approximately three million posts a day.
Facebook’s core business model centers on advertising. The goal of the platform is to add new users, to increase scale. This drives advertising revenues, which demonstrates growth to Wall Street. However, there is an intrinsic problem here. More users create ever more content to be moderated. This puts moderators on a hamster wheel with no end in sight.
With such a high volume of moderation, and a minimal level of content expertise, errors occur. Mark Zuckerberg, Facebook’s CEO, cited a 10% error rate in flagging content that should be taken down or taking down content that should not be flagged. Given this margin, that amounts to at least 300,000 mistakes each day. Some of these are very serious mistakes.
What are the repercussions of postings remaining online when they should be taken down? There’s little risk for Facebook, at least within the U.S. Due to Section 230 of the Communications Decency Act of 1996, internet platforms are protected from liability for most content posted by users. Even with President Trump’s recent executive order to roll back Section 230, tech company protection from liability still appears in place.
Unfortunately, there are consequences for the workers. In a recent class action lawsuit against Facebook in San Mateo County, California, a group of former reviewers claimed that “as a result of constant and unmitigated exposure to highly toxic and extremely disturbing images,” they had suffered “significant psychological trauma and/or post-traumatic stress disorder.”
Facebook agreed, without admitting any liability, in May 2020 to settle the suit. The settlement could distribute millions of dollars to more than 10,000 current and former moderators in the U.S. (a minor slap on the wrist for a company that earns more than $70 billion per year).
Importantly, there are also serious consequences for multitudes of Facebook users as well. One such incident identified members of the Myanmar military as the operatives behind a systematic campaign on Facebook to target the country’s Muslim Rohingya minority group. Facebook was warned of anti-Rohingya rhetoric and false claims posted on its platform, which were not removed. The United Nations and others blame Facebook for the murder of more than 10,000 Rohingya Muslims in Myanmar and the displacement of hundreds of thousands more. Facebook eventually concluded there was a deliberate and covert Myanmar military operation, but again accepted no liability.
Incidents like those in Myanmar have also occurred in Sri Lanka, Indonesia, Ethiopia, and elsewhere.
Time for change
Barrett concludes his report with eight recommendations to change Facebook’s current practices:
Stop outsourcing content moderation and bring the process in-house.
Increase the number of moderators from 15,000 to 30,000 to improve the review process.
Hire someone to oversee content and fact-checking who reports directly to the CEO or COO.
Expand moderation in at-risk countries in Asia, Africa, and other areas.
Provide all moderators with access to quality medical and psychiatric care.
Support research on the health risks of content moderation.
Explore targeted government regulation regarding harmful content.
Expand fact-checking to discredit false information.
Companies routinely outsource areas outside their expertise. However, this practice is far from common for areas core to their business. Content moderation is an area of expertise that Facebook needs to possess, and excel in, for the safety of its employees and Facebook users worldwide.
As the world adjusts to the “new normal” of remote working life, forward-thinking publishers have been coming up with new ways to connect with their audiences and help them through the crisis. In a matter of weeks, Harvard Business Review has spun up its own live video offering: HBR Quarantined.
HBR’s new weekly LinkedIn Live show focuses on how businesses are coping with the consequences of coronavirus. The show, co-hosted by Editor in Chief Adi Ignatius and Chief Product and Innovation Officer Joshua Macht, debuted on April 27 with Pulitzer Prize-winning columnist Thomas Friedman as a special guest.
Ignatius, Macht and HBR’s Senior Multimedia Editor Scott LaPierre talked to DCN about what prompted the launch, how the first show went, and where they plan to take it in the future.
Evolving an idea
The initial concept for HBR Quarantined stemmed from Ignatius and Macht wanting to explore their dynamic in different formats. “Adi and I go way back together. We’ve grown accustomed to taking chances together and inventing things,” Macht said, explaining that a podcast was initially on the table. “Within weeks we went from, ‘Maybe we should launch a podcast,’ to ‘We’re going to do a live television show on a platform that’s pretty new.’ Then, all of a sudden, we had a show.”
Ignatius said that the genesis of the idea came from a desire to connect with the millions of their audience who are now working from home. “They, like us, are wondering, ‘When do we get to go back to work, and what will work look like when we do?’” he explained. “We’re always talking about these issues. So we figured we could do a service delivering insight on COVID-19 and how it affects businesses and the economy.”
But unlike other HBR products, the show is designed to have a very different tone. “Harvard Business Review tends to be a brand that speaks to a very high altitude. That’s our secret sauce: high-level pieces that are based on research,” Ignatius emphasized.
“This show is something different. It’s meant to be warmer, really connecting in the moment. We’re all in the same boat and trying to figure this out together. So, it’s certainly an experimentation with a different kind of voice for us.”
Viability in quarantine
Under normal circumstances, a product like this would be resource-intensive. But LaPierre highlighted that quarantine has actually lowered the bar for everyone in terms of production values and expectations.
“The way a lot of video producers are seeing the COVID-19 crisis, perversely, is as an opportunity to try new things,” he said. “HBR is not a TV station. We only have a small video team. So it would be hard for us to launch a true broadcast live video series. But now, everyone’s been equalized in terms of what they’re capable of doing. It’s a chance for us to make a viable series that doesn’t look that different from what others are doing.”
LinkedIn’s Live tool is just over a year old, and the platform was relatively late to the video space compared to its competitors. But for HBR, their vast social following on LinkedIn – 10.2 million followers – made it an obvious choice to debut this type of show.
The team began by testing out a high-level broadcast tool. However, that was proving problematic as it wasn’t suited to their purposes. “We pivoted to something called StreamYard, which is a ‘prosumer’ grade software that allows you to stream live, but is a lot more lightweight,” explained LaPierre.
Live streaming can be risky in terms of technical hitches. But HBR’s first show went smoothly, attracting 35,000 live viewers and thousands of comments during the stream. Ignatius highlighted the long-tail benefits of the video as well, with total views doubling in just a few days.
The biggest surprise for the team was the lack of drop-offs. “Everyone was saying we would have these spikes in viewers. But actually, people showed up for the whole thing, and it just kept growing,” Macht explained.
HBR Quarantined post-quarantine
When it comes to the future of HBR Quarantined, the team is remaining flexible. They have a total of six episodes planned so far. However, they will be constantly reviewing what the response is to them and what their audience needs going forward.
“I was pleasantly surprised that it went off as well as it did. But it will be interesting to see where it goes,” commented Ignatius. “I think there is something of a service that we can provide for our readers. There’s knowledge and insight about what’s going on, and we want to see what that means post-quarantine.”
HBR is also scouting out potential sponsors. They believe the show offers a timely opportunity for advertisers to reach their audience with messaging related to the moment. “There is not a lot of sponsorship money out there these days. And part of our experiment was to find a new medium that was of the moment,” explained Ignatius.
But sponsorship aside, future episodes will be focused on trying to engage people with the brand, and with the wider goals of bringing people into HBR’s subscription funnel. “The show is good for getting people to engage with our brand, and we want to continue to grow the number of people visiting the site,” Macht concluded.
Mark Zuckerberg is reportedly “begging to be regulated.” He has made several statements to that effect, suggesting that he supports state-backed regulation in four areas: elections, political discourse, privacy, and data portability. This week, Facebook released a white paper that outlines the company’s suggested path forward for content regulation. It does little to drive forward meaningful discussion around these serious issues. Instead, it clearly illustrates that Facebook would like to eschew responsibility for some of its profoundly negative effects in these areas while protecting its ability to continue business as usual in others.
As Zuckerberg put it in a recent op-ed, “regulation could set baselines for what’s prohibited and require companies to build systems for keeping harmful content to a bare minimum.” That said, Facebook’s plan suggests that companies like his should be required to have procedures for taking down offensive or illegal posts and to prepare quarterly reports on their efforts. If this isn’t the absolute bare minimum, I don’t know what is.
It is interesting that Facebook’s paper cites the European Convention on Human Rights, which supports the need of governments to regulate speech for “[T]he interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals…”
Some have argued that Facebook’s algorithms and data targeting capabilities, the same tools which have been refined and optimized for immense profits, actually encourage these sorts of activities. The argument goes like this: The algorithms promote content that gets clicks. The content that gets the most clicks is often salacious (yes, even to the point of being false) or highly targeted based on consumer data.
In this way, drug dealers target members of addict recovery groups, for example. Or false stories that reinforce specific bias, racism, or extremist views are targeted at specific individuals. Of course, those Facebook functions would remain very lightly regulated under the company’s proposal.
In his op-ed, Zuckerberg said he believes that “good regulation may hurt Facebook’s business in the near term but it will be better for everyone, including us, over the long term.” However, Facebook’s proposal clearly brings with it the added benefit (for Facebook) that the suggested regulations would build barriers and add compliance costs that entrenched companies are best equipped to overcome. It’s already a daunting challenge for a start-up social media company to break through in the current marketplace. This proposal would only increase that burden.
In what is quite possibly the icing on Zuckerberg’s cake, the proposal would grant global immunity to big tech platforms to carry out content moderation policies, which could also be used to shut out competitive services on their platforms. The proposal would essentially reinforce Section 230 of the U.S. Communications Decency Act globally, so that Facebook could claim legal protection for removing offensive posts and adjusting algorithms to disadvantage repeat offenders.
The last thing the world needs exported from America is blanket liability protection for Facebook. But Zuckerberg’s proposed immunity is intentionally broad to allow for all kinds of activity under a content moderation policy, including favoring its own services or certain business partners over others.
Facebook’s proposal includes an entire section about how – even with the proposed content moderation – enforcement of any guidelines will not only be a challenge; it will be imperfect.
Imagine if Instagram were still a separate company. Do you think Zuckerberg would be talking about how it’s too hard to scrub racist and sexist content from the service? Do you think he would offer only muddled, hazy responses about whether foreign governments manipulated his service to impact elections? Do you think he would be complaining that it’s hard to prevent drug dealers from targeting recovering addicts? No. He would be taking aggressive action to clean it up.
In fact, there is documented evidence that Facebook cared a lot more about these issues when they had competition. Acquiring WhatsApp and Instagram were likely a cheaper alternative to changing Facebook’s underlying and unhealthy business. Zuckerberg’s calculation could have been that turning his back on the public’s welfare was more acceptable than turning his back on shareholders.
The compliance question
As Facebook’s own report puts it, “Designed poorly, these efforts may stifle expression, slow innovation, and create the wrong incentives for platforms.” And Zuckerberg should know.
While Zuckerberg calls for “regulation,” it’s important to consider Facebook’s track record of complying with existing regulations. Their approach to the EU’s General Data Protection Regulation (GDPR) is disingenuous and sometimes misleading. In California, which recently rolled out the CCPA, Facebook is making the case that they don’t “sell” consumer data even under California’s very broad definition.
And that’s the point: Facebook is a for-profit company with responsibility only to its shareholders. Zuckerberg isn’t offering a real plan for regulation that will benefit society and democracy. He’s offering a plan to minimize the obligations and responsibility for Facebook. And this is why the best perspectives on how to clean up the mess that is Facebook are coming from outside of Menlo Park.
In the past, encouraged by a strong economy, consumers found the core societal institutions (government, business, NGOs, and media) both competent and of high ethical standing. Unfortunately, according to the 2020 Edelman Trust Barometer, this sentiment no longer holds due to the rise of violence, government corruption, fake news, financial insecurity, and other unsettling conditions. Today’s consumer does not trust the government, businesses, NGOs, or media.
Edelman defines trust based on a combined measurement of two distinct attributes: competence (delivering on promises) and ethical behavior (doing the right thing and working to improve society). The Edelman Trust Barometer is based on an online survey of more than 34,000 respondents across 28 markets.
Edelman reports a significant imbalance of trust, a record 14-point gap, between the informed public and the mass population. The informed public, defined by Edelman as a wealthier, more educated, and more trusting consumer cohort, is far more trusting of every institution than the mass population.
Within the media sector, search engines and traditional media (newspapers and broadcasting) are equally trusted at 61%. Social media, on the other hand, is the least trusted media source at 39%. And in developed countries, traditional media outperforms social platforms by 30 points.
Social platforms continue to fuel the growth of distrust in media. While smartphones offer the power of communication in the user’s hand, this accessibility also accelerates the flow of user-generated content and comments. Unfortunately, social media amplifies and exaggerates false information and misinformation and magnifies filter bubbles of extremism. It’s no wonder that three-quarters of respondents (76%) said that they worry about false information or fake news being used as a weapon.
For the first time this year, the Edelman Trust Barometer asked respondents to rate whether an institution is doing well or very well on a range of issues. A related question gauged the potential impact on trust associated with each issue. Media companies scored lowest on five issues: keeping social media clean, being objective, providing quality information, prioritizing important over sensationalized content, and differentiating opinion from fact. However, if media companies perform better on these issues, it would drive significant consumer trust.
When working with social platforms, it’s important for premium publishers to ensure their brands are clearly differentiated from the platforms where their content appears. Further, publishers should continue to find new ways to build trust, especially on the issues with the most potential to grow trust. Given their direct one-to-one relationship with consumers, trust is a critical factor for publisher success.