The future of journalism – defining copyright in the age of AI
On January 10, 2024, the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held a hearing titled “Oversight of A.I.: The Future of Journalism,” kickstarting legislative activity on AI for 2024. The central question of the hearing wasn’t whether copyright law covers AI; most witnesses and members of Congress seemed to agree that it does. It was whether existing law properly and effectively protects the intellectual property of journalists against infringement by AI. As Subcommittee Chairman Senator Richard Blumenthal (D-CT) stated, rights need remedies, and for these remedies to be effective, they must be enforceable. That effectiveness and enforceability was the true centerpiece of this Congressional discussion.
The witnesses at the hearing were: Danielle Coffey, President and Chief Executive Officer of the News Media Alliance; Jeff Jarvis, Tow Professor of Journalism Innovation at the CUNY Graduate School of Journalism; Curtis LeGeyt, President and Chief Executive Officer of the National Association of Broadcasters; and Roger Lynch, Chief Executive Officer of Condé Nast.
For senators, a sense of urgency
During his opening statement, Senator Blumenthal (D-CT) highlighted the importance of this subject and this hearing, touting it as critical to democracy. Careful not to vilify the possibilities awarded by AI, Senator Blumenthal argued it is essential for reporters and readers to be able to reap the benefits of AI while avoiding its pitfalls. Nonetheless, he clearly called out how the rise of big tech and generative AI has led to the decline of the news industry, with the hard work of authors being utilized without credit or compensation.
Evident in Senator Blumenthal’s remarks was a sense of urgency: he stressed that it was essential for Congress to learn from its mistakes in tackling social media. He also floated several areas of consensus around AI, such as licensing, transparency, incentive structures for companies to develop trustworthy products, limiting big tech’s monopolistic advertising practices, and clarifying that Section 230 does not apply to AI.
As a refresher, Section 230 of the Communications Decency Act (enacted as part of the Telecommunications Act of 1996) states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Since coming into effect, Section 230 has granted websites and social media companies immunity from liability for content posted on their platforms by others.
It is no surprise that several of these areas of consensus appear in legislative proposals introduced by Senator Blumenthal. In 2023, he and Subcommittee Ranking Member Senator Josh Hawley (R-MO) introduced the “No Section 230 Immunity for AI Act” as well as an AI legislative framework that tackled licensing regimes, transparency, and trustworthiness.
In his opening statement, Senator Hawley echoed Senator Blumenthal’s sense of urgency in protecting the work product, data, and information of consumers, at a time when the largest tech companies attempt to monopolize these areas.
For witnesses, a (somewhat) clear solution
Across the board, the hearing’s four witnesses illustrated the invaluable contributions the news industry has made to society. Danielle Coffey, Curtis LeGeyt, and Roger Lynch all agreed that licensing agreements are an essential component in combating the risks AI poses to the industry.
Coffey highlighted that such agreements could help avoid protracted uncertainty in the courts, while LeGeyt and Lynch raised how licensing agreements have become standard practice in the music, radio, and local television industries. Jeff Jarvis was more optimistic about the positive use cases of AI in the industry and advocated for the measured embrace and implementation of AI in journalistic practices.
A fork in the road for the industry
Following witness testimonies, committee members expressed their support for licensing agreements as a solution to some of the copyright issues raised by the interaction between AI and the news industry. Moreover, several committee members expressed their eagerness to tackle the issue directly and immediately.
Senator Mazie Hirono (D-HI) asked whether Congress needed to enact legislation for these kinds of licensing procedures to be implemented, while Senator Blumenthal stated that, on both licensing and Section 230, Congress has an obligation to clarify current law, ensure that licensing is legally required, and reinforce the inapplicability of Section 230. Somewhat surprisingly, it was some of the witnesses who pumped the legislative brakes on these comments. In response to Senator Hirono’s question, LeGeyt argued that such Congressional action would be premature, while Coffey said she believed the industry would prevail in addressing these issues through pending litigation.
What is undeniable is that 2024 is set to be a landmark year for Congressional action on AI, and that copyright issues offer legislators a path to an AI “victory” that is targeted, discrete, and not overtly controversial. Because of this, regardless of what was advocated in this hearing, members of Congress can be expected to at least attempt to “clarify” the applicability of existing copyright law to generative AI models. Of course, the distinction between a limited clarification of current law and the outright enforcement of these types of agreements is up to legislators.
While witnesses adamantly made the case that copyright law is on their side, legislators continuously expressed concerns with the efficacy of existing protections. Going back to Senator Blumenthal’s statement, about rights needing remedies that are effective and enforceable, participants agreed that the rights of journalists certainly exist in copyright law, but for legislators, efficacy and enforceability need an extra push from Congress to come to fruition.
Looking toward 2024, with copyright litigation in its nascent stages, the digital content industry may well find relief in the legal system, but it would be wise to place some of its bets with legislators who seem keen to engage with this industry-defining issue.
Apple flouts ruling to flex its monopoly power
Big tech monopolies face a regulatory reckoning right now. One area of intense scrutiny is the ability of these companies to exert their market dominance in ways that extract sky-high profits while limiting the ability of other companies to sufficiently monetize their offerings. The four-year legal battle between Epic Games and Apple vividly illustrates these issues. Of course, Epic is far from alone in believing that Apple’s stronghold on the app marketplace is what allows it to extract a 30% cut from every in-app purchase.
Earlier this month, a federal court ruling became final, establishing that Apple had violated California competition law by limiting the ability of app developers to point consumers to alternative payment systems that offer lower prices and fees. But almost immediately, Apple announced a brazen new approach that extends its exorbitant fees to purchases made outside of Apple’s walled garden, in direct contradiction with the spirit of the court’s ruling.
Apple was once known for breathtaking technological innovation; it is now a company laser-focused on maintaining its stronghold on the app market. While we might see little else to applaud from Apple, we have to marvel at the chutzpah of its legal team. At the same time, the ability of one company to manipulate the market, and the legal system, so openly demonstrates the need for a stronger legal framework.
Epic battle
Epic’s battle against Apple (and Google) has come to epitomize app developers’ struggle to sustain their businesses given that the distribution marketplace is so heavily weighted in the favor of big tech. Since 2015, Epic Games’ founder and CEO Tim Sweeney has questioned the need for digital storefronts to take a 30% revenue share cut. He reasoned that this not only unduly affected the ability of developers to monetize their offerings, but that it drove price increases that ultimately impacted consumers.
In 2020, when Epic first filed suit against Apple, Sweeney pointed out that “Apple has locked down and crippled the ecosystem by inventing an absolute monopoly on the distribution of software, on the monetization of software.”
In 2021, US District Judge Yvonne Gonzalez Rogers ruled in favor of Epic that Apple’s policies prevented consumers from getting cheaper prices and, thus, violated the California Unfair Competition Law. Then, in 2023, the Ninth Circuit Court of Appeals affirmed the ruling. When the Supreme Court recently refused to take the case, the ruling became final.
Monopolists gonna monopolize
This legal ruling seems to have done little to dissuade Apple from its monopolistic practices, however.
Last week, Apple announced it would charge a 27% fee on all purchases made outside its payment system and 12% on recurring charges. That is just a three percentage point reduction from the fee it takes on purchases made within its own payment system. Given that third-party payment systems typically charge 3% to 6%, Apple is ensuring that developers will never be able to offer consumers a viable, cheaper option outside of Apple’s walled garden. Perhaps even more appalling, Apple’s proposal amounts to a massive land grab in which it could start collecting nearly a third of all consumer web transactions.
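The arithmetic behind this point is worth spelling out. Here is a minimal sketch: the 30% and 27% figures come from the reporting above, and the 3%–6% processor rates are the typical range cited; the exact fees any given developer pays will of course vary.

```python
def effective_cost(price, platform_fee, processor_fee=0.0):
    """Total dollars lost to fees on a sale: Apple's cut plus any payment processor's cut."""
    return price * (platform_fee + processor_fee)

price = 100.00  # a hypothetical $100 in-app purchase

# Purchase inside Apple's payment system: flat 30% fee, no separate processor.
in_app = effective_cost(price, 0.30)

# Purchase via an external link: 27% Apple fee PLUS a third-party processor fee.
external_low = effective_cost(price, 0.27, 0.03)   # 3% processor
external_high = effective_cost(price, 0.27, 0.06)  # 6% processor

print(f"In-app fees:             ${in_app:.2f}")        # $30.00
print(f"External (3% processor): ${external_low:.2f}")  # $30.00
print(f"External (6% processor): ${external_high:.2f}") # $33.00
```

In other words, under the announced structure the external route is at best a wash and at worst more expensive than staying inside Apple's system, which is exactly why developers cannot undercut the in-app price.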
Building a stronger legal framework
It’s important to note that Judge Gonzalez Rogers and the Ninth Circuit called out the shortcomings with current federal competition law when they ruled against Epic’s claims that Apple had violated federal law. In their 2023 ruling, the Ninth Circuit said, “There is a lively and important debate about the role played in our economy and democracy by online transaction platforms with market power. Our job as a federal court of appeals, however, is not to resolve that debate — nor could we even attempt to do so. Instead, in this decision, we faithfully applied existing precedent to the facts.”
The Ninth Circuit ruling clearly underscores the need for new federal law to ensure a level playing field for existing and new market players. The Open App Markets Act, the American Innovation and Choice Online Act, and the AMERICA Act are vital to updating competition law in the US so that true competition can flourish and consumers can benefit from lower prices and continued innovation.
As if there weren’t enough headlines about big tech companies thumbing their noses at anyone who would suggest they need to be held accountable, Apple’s latest move should remind and inspire Congress to finally enact new, stronger guidelines to curb the pattern of abuse by big tech platforms.
Three trends that will shape media strategy in 2024
The past year has been a difficult one for many media companies. After the rapid bounce back from early Covid-era travails, more than 20,000 jobs were lost in the US media sector alone in 2023, with few publishers emerging unscathed.
At a local level, news deserts continued to grow, while digital darlings like BuzzFeed News shuttered and Vice Media filed for bankruptcy. Meanwhile, august publications like Bloomberg Businessweek and The Nation announced plans to reduce the frequency of their physical products, and the venerable National Geographic laid off the last of its staff writers.
At the same time, concerns increasingly arose about compensation and copyright infringements from Generative AI platforms, as well as the impact of tools like ChatGPT on media traffic and search habits.
Furthermore, dramatic drops in referral traffic witnessed by many leading publishers from major search engines and social networks added a further set of challenges for content creators to deal with.
2024’s major media trends
So, how do media companies respond to these many challenges and put their best foot forward in the year ahead? Here are three essential recommendations:
1. Revenue diversification is more important by the day
Media companies have known for some time that they need to diversify their revenues. This need is only going to become more acute in the year ahead. Although global advertising spending is expected to exceed $1 trillion for the first time in 2024, this growth is not being felt by most media companies.
Fresh analysis from WARC finds that spending on content media (TV, publishing, audio and cinema) has remained largely stagnant over the past decade. While advertising markets have boomed in that time, media players have seen their share of total advertising revenues decline from 71.0% a decade ago to 27.2% in 2024.
With these new monies flowing to the likes of Alibaba, Alphabet, Amazon, ByteDance and Meta, publishers and broadcasters “are fishing in a finite pool,” argues Alex Brownsell, Head of Content at WARC Media. They’re essentially competing against one another for existing revenues, rather than attracting new ones.
Put another way, “every media owner – from NBCUniversal and The New York Times, to Snap and JCDecaux, to Disney and Spotify – is competing for share of a largely fixed market,” Brownsell concludes.
Against this backdrop, the need to reduce advertising dependency and diversify income sources becomes more apparent than ever.
Within this, expect to see a greater emphasis on bundling as a means to drive loyalty and grow revenues. Blending different editorial propositions and purposes (e.g. news and non-news content) can be key, as The New York Times has shown. An ability, depending on your subscription type, to share access to your NYT bundle with friends and family can further deepen perceptions of value and reduce churn.
“So far most of these bundles have been intra-company collaborations,” observes Nic Newman in his latest annual trends report for the Reuters Institute. “But in 2024 we can expect to see some publishers partnering with other providers to add extra value to their bundle,” he predicts.
Globally, media players are also placing a greater emphasis on the value of hosting events as part of a revenue diversification strategy. Respondents to a survey by WAN-IFRA found that almost a fifth (18.8%) of publisher income comes from sources beyond advertising and subscriptions. This reinforces earlier findings shared by Twipe, pointing to premium audio content, events and e-commerce as some of the most promising areas for income growth.
2. AI remains in the spotlight
The impact of Generative AI dominated every media conference I attended last year. That’s not surprising given how rapidly this technology is developing.
Lest we forget, its posterchild ChatGPT only launched in November 2022, quickly attracting some 140 million users a month. By its first birthday, it boasted 100 million users a week, demonstrating how rapidly it had been adopted, including by more than 92% of Fortune 500 companies.
Despite this general enthusiasm, “the response from publishers [to Generative AI] has been a mix of defense and offense,” observes Kevin Anderson at Pugpig. On the one hand, publishers are embracing this technology and exploring its possibilities in areas such as automation, personalization and the creation of new products.
In Norway, for example, Aftenposten, the country’s biggest daily newspaper (part of the Schibsted group), has used AI-generated audio articles to deepen engagement and reach younger audiences. Within six months of launch, the audience for these AI-generated audio stories was akin to that of its podcasts, demonstrating the potential efficiency of this tool once it has been set up.
At the same time, they are also looking to limit the ability of AI platforms to be trained by scraping their content for free. “What media companies are saying is AI won’t be built without us,” contends Vincent Berthier at Reporters Without Borders.
This tension is perhaps most publicly prominent at The New York Times. Just before Christmas, the company filed a lawsuit against OpenAI and Microsoft for copyright infringement, highlighting how “millions of articles from The New York Times were used to train chatbots that now compete with it.” The move came hot on the heels of the Gray Lady appointing Zach Seward as its first Editorial Director of A.I. Initiatives, a role that encompasses training, experimentation and prototyping, as well as identifying when the company should, and should not, use Generative AI.
The coming year will likely only see this type of dichotomy grow even more pronounced.
We can expect greater moves toward AI regulation and more lawsuits from publishers seeking to protect their material. We will also see more partnerships, like those inked between OpenAI and major publishers including Axel Springer and the Associated Press. Meanwhile, insights from a new survey conducted by the Reuters Institute point to a focus on using this technology for “under the hood” services such as transcription and copyediting.
Whatever your approach, having a strategy for using and deploying AI remains key, not least because of the potential reputational harm when this is poorly executed.
3. Audience development grows in importance
A third area of focus for many media companies in the year ahead will be audience development.
Alongside continued exertions to reach and engage with new audiences, businesses will be increasingly absorbed in efforts to deepen relationships with existing consumers. This will involve creating opportunities for interaction, dialogue and feedback, as well as moves to understand, and better serve, audience needs and preferences.
It’s an area where AI can help, argues Dmitry Shishkin, the incoming CEO at Ringier Media. For Shishkin, data and suggestions generated by AI should be used as an “editorial co-pilot” that will inform content strategies and efforts to ensure “necessary distinctiveness.”
“Media success now thrives on differentiation and specialization,” he told Journalism.co.uk. That means an emphasis on “quality and focus,” he said, as newsrooms seek to “establish a compelling reason for repeated engagement.”
To help generate this engagement, outlets may look at fresh ways to develop more personal and direct relationships with audiences.
Tools like Subtext enable media organizations and creators to text their audience as they would a friend. New York Times Opinion columnist Farhad Manjoo holds Office Hours where he chats with readers on the phone. Tortoise Media in the UK holds open editorial meetings. Other outlets are experimenting with mechanisms like WhatsApp Channels and Discord to reach audiences in new places.
All of this is a counterpoint to what Kevin Anderson calls the “volume-over-value strategies” that many publishers pursued for too long. Media markets have changed, Anderson argues, remarking that reaching large audiences through search and social, then serving them ads, is seldom a model that’s fit for purpose anymore. “It’s all about relationships,” he writes.
In doing this, as we’ve argued before, publishers should be learning more from the Creator Economy. That’s a sector steeped in nurturing and leveraging relationships to drive revenues and loyalty.
Demonstrating that there is no one-size-fits-all solution, Francesco Zaffarano writes in Nieman Lab about the success of “platform-based news brands.” These outlets build large audiences on platforms like Instagram, TikTok and YouTube, and harness them to generate revenue through paid brand partnerships and donations, using tactics similar to those of the Creator Economy.
Zaffarano cites outlets such as Impact in the USA, The News Movement in the UK, Hugo Décrypte in France and Ac2ality in Spain, as examples of this phenomenon. Arguably, their success hinges on aligning their work’s tone and relevance with their target audience, coupled with a presence in online spaces where their audience already spends a lot of time.
In those instances, this takes the form of mainstream social networks. But, for other audiences and media outlets, the answer could lie closer to home, or in smaller (even niche) online communities. Media players will need to continually analyze audience needs and habits to find the solutions that are right for them.
A strategic path forward for digital media
In the fast-evolving media landscape, 2024 promises to be yet another year of rapid transformation and adaptation. The year ahead looks set to be defined by the need for media companies to focus on diversifying revenue streams, navigating the complex realm of AI, and prioritizing audience development.
Revenue diversification remains a paramount priority for media companies, especially as their slice of global advertising revenues stagnates. In that climate, retention and fresh sources of income continue to be critical.
Secondly, the role and impact of Artificial Intelligence cannot be overstated. Media players will continue to experiment with the potential benefits this technology can unlock, while also seeking to protect their assets from being cannibalized by it. Walking this tightrope will not be easy, but it is one that organizations large and small will have to tread.
Lastly, as media companies strive to deepen relationships with existing consumers at the same time as expanding their reach, audience development has to take center stage. Fostering meaningful relationships will see the use of different tactics and platforms. Nevertheless, the end goals are the same: to demonstrate value and distinctiveness, and in turn drive retention and loyalty.
As media companies prioritize these areas, they need to be strategic in their efforts. The pace of change can be daunting, but it also presents considerable opportunities. Leveraging new AI tools, diversifying revenue streams, cultivating stronger connections and delivering distinctive content can enable media companies to do more than just survive 2024. Executed correctly, these strategies should enable them to thrive in the years ahead.
Reputation matters. Don’t bend under platform pressure
People are increasingly opting out of the news. According to the Digital News Report 2023 from Oxford University’s Reuters Institute, 36% of people around the world sometimes or often actively avoid news. So it is no surprise that “news avoidance” has emerged as a hot topic among academics who study news media. It’s a growing problem, with major implications for society.
In fact, this topic was chosen as the main theme of the pre-conference at the 2023 International Communication Association (ICA) conference, the largest in the field of communication. The event was packed with speakers and attendees who pored over analyses and future predictions. Presenters cited numerous studies showing that those who don’t read the news are less likely to vote and feel detached from their communities.
Unsurprisingly, there is no simple solution to this crisis. However, there was a general consensus that there’s a need for an increase in public assistance, education, and policies that support news media.
But ask yourself: If there were public funding available to support news as a public service, would your organization qualify? Are you providing quality news? Or have you fallen prey to algorithmic enticements to chase clicks?
An alarming trend
The news avoidance trend has been underway for a long time, driven by several factors. For one, people are overwhelmed with the sheer volume of information. They also feel worn out by a constant flow of grim news, which is cognitively exhausting. And let’s face it – from TikTok dance challenges to cat memes – there are a ton of entertaining alternatives for people to tune in to online.
But while people are enjoying entertaining content on their social feeds, they are also consuming news on these platforms. Or at least they think they are.
People have developed a news-finds-me (NFM) mentality, which creates the illusion that they are well-informed about important news even when they are not. Because they have access to news (or a facsimile of it) through social media any time and all the time, people falsely believe important news will find them.
This becomes particularly alarming as we increasingly see social and search platforms actively back away from news brands. A New York Times article points to actions and announcements by the likes of Meta (parent to Facebook and Instagram), X (aka Twitter), and even Google that make news less visible.
For news publishers, these converging trends point to a shrinking audience and a fiercely competitive environment for attention.
Attention-seeking behavior
One temptation is writing for eyeballs. Anyone vying for attention online knows that clickbait, rage-bait, and sensational news perform well in the digital marketplace. Arguably, social platforms incentivize this type of content.
Sadly, low-quality content typically outperforms high-quality news in terms of today’s measures of ROI. Sensational stories, aggregated news, and gossip are cheap to produce and easy to spread.
Even more worrisome is the fact that numerous studies (including mine) have found that false information spreads more quickly and widely on social media than true information. Unlike quality news reports, which are bound by facts, fake news and titillating stories can be created solely to capture audience attention, with whatever claims or sensational statements capture the most views.
So how can genuine news compete in this marketplace of attention? The playing field seems rigged in favor of hyperbolic sludge.
What we are observing today is a systemic problem that no single innovative business model can break through. It is the combined result of a vicious news cycle, distracted consumers, the dominance of platforms, and more. At least the growing journalism crisis provides a clear call to action. Some media scholars even say that fake news is the best thing that’s happened to journalism, because it allows high-quality news media to shine.
But while scholars continue to see the value of quality news, the trend of news avoidance among general audiences continues. Not only is it critical that we find a means to support the production of quality news, but we must also figure out how to re-engage audiences with it.
Solutions for journalism
From growing cries for social platform reform to tax-based and remunerative approaches, there are voices demanding public interventions to support sustainable news. In the U.S., legislation designed to support local news is increasingly popping up in Congress and state legislatures.
Given their dominance in the consumption of news, platforms should be pressured to incorporate measures of news quality into their ranking algorithms. Currently, several projects such as the Trust Project and NewsGuard provide credibility measures of news sites to elevate quality news.
Media literacy programs also offer some promise for publishers, as they emphasize the reputation of sources. For example, some university and local libraries keep track of reliable news sources and share the lists with residents.
In the meantime, news publishers must stick to the core values that are the foundation of journalistic work. Of course, that is easier said than done given all the systemic obstacles listed above. The unfortunate reality is that quality news is expensive to produce. And, by the available metrics, this high-quality news is not sufficiently valued in the marketplace.
However, reputation matters in the media business. Therefore, competing on the social platforms’ terms – with eyeball-chasing clickbait – won’t solve the attention deficit and will likely only exacerbate the misinformation problem (as people skim misleading headlines). While publishers may have to pivot to what’s working on social platforms (video, for example), they must not sacrifice core standards.
In the post-truth era, trust is the most valuable capital. Maintaining journalistic quality is the only way to protect the business in this tumultuous time. News providers are wise to remember that reputation is difficult to build but easy to destroy.
About the author
Jieun Shin (Ph.D., University of Southern California) is an Assistant Professor in the College of Journalism and Communications at the University of Florida. Her research explores information diffusion on social media, focusing on misinformation and news use.
Allowing big tech to monopolize AI is risky business
Artificial Intelligence (AI) is a groundbreaking yet potentially problematic technology. Despite its many possible positive applications, there are many concerns about the potential threats of AI, from disseminating misinformation to surveillance and democratic disruptions. Exacerbating the risk of harmful applications, concerns have arisen around the stifling of innovation and how AI will develop if just a handful of big tech companies dominate the playing field.
Open Markets Institute and the Center for Journalism and Liberty’s new report, AI in the Public Interest: Confronting the Monopoly Threat, looks at some of the major concerns around the development and applications of AI. It also examines the potential monopolistic influence of the tech giants (Google, Amazon, Microsoft, Meta, and Apple) on the evolution of AI. As the authors posit, “How AI is developed and the impact it has on our democracies and societies will depend on who is allowed to manage, develop, and deploy these technologies, and how exactly they put them to use.”
Authors Barry Lynn, Max von Thun, and Karina Montoya highlight government responses to concerns in early-stage regulations. Actions in Europe include the EU’s Artificial Intelligence Act, while the UK’s competition authority delves into the competition landscape of foundation models. In the US, the Biden Administration outlined a Blueprint for an AI Bill of Rights and issued a comprehensive Executive Order targeting AI-related harms.
The dangers of monopolist AI development
The report examines the tech giants’ structures and their behavior in controlling foundational AI technologies. The influence of major tech corporations extends across the entire spectrum of innovation within the Internet tech stack, allowing them to broadly control the direction, speed, and nature of innovation. The authors suggest that these companies’ stronghold over “upstream” infrastructure empowers them, for example, to identify and suppress potential rivals through various means, directing the entire “downstream” ecosystem to serve their interests.
The authors call out several harms that can result from this dominant role in the evolution of AI:
- Suppression of trustworthy information: Restructuring communication and commercial systems can hamper individuals’ ability to access, report, verify, and share reliable information.
- Spread of propaganda and misinformation: AI can enable personalized manipulation of propaganda and misinformation (at scale), intensifying their political, social, and psychological impact. The reach and power of tech giants, combined with generative AI capabilities, elevate the effectiveness of state-level and private actors in manipulating public opinion.
- Addiction to online services: The rise of social media, gaming, and other online services has been linked to addiction and mental health issues, particularly among minors. Monopolistic platforms, prioritizing screen time and viral content, can exploit generative AI’s ability to customize and target content, intensifying harmful effects.
- Employee surveillance: Tech corporations may utilize surveillance and AI to monitor employees, which would impact privacy and fair employment practices.
- Monopolistic extortion: Through control of ecommerce platforms, app stores, and other gateways, corporations can extract fees from sellers and dictate business terms.
- Reduced security and resilience: Concentration in the core infrastructure poses security risks as businesses and governments increasingly incorporate AI.
- Degrading essential services: Generative AI can reduce quality by producing large volumes of inaccurate content.
Applying competitive legal measures
History reveals that competition laws, antitrust measures, and regulations are vital to prevent powerful corporations from exploiting groundbreaking technologies. The authors advocate for effective oversight and control mechanisms. Applying tools to regulate corporate behavior and industry governance empowers the public, ensuring consumers benefit from these technological advances. This approach facilitates the protection of individual and public interests through regulatory practices.
Recommendations for immediate action:
- Stop large tech companies from controlling AI: Make big tech companies change their plans when they try to control the development of AI through deals and partnerships.
- Share large tech company data with everyone: Agree that the information big companies collect should be shared with everyone and make rules about who can use this data to benefit the public.
- Protect artists’ and writers’ work: Make sure the big companies can’t steal or misuse the work of artists, writers, and other creative people.
- Check if large tech companies are a security risk: Scrutinize how big tech companies’ concentrated control could threaten the country’s safety, and take steps to reduce that risk.
- Protect people from digital tricks: Make strong rules to stop big tech companies from tricking and exploiting workers and contractors online.
- Stop unfair treatment by large tech companies: Make it illegal for powerful tech companies to treat people and businesses unfairly when providing important services.
- Acknowledge the importance of cloud computing: Make sure the big tech companies don’t have too much control over it by treating it like a regulated utility.
- Make laws work together: Make sure the people enforcing laws about fair competition and privacy work together closely.
Fair market
The authors suggest market structures that ensure AI serves the public interest and remains subject to democratic control by citizens, not corporations. The Biden White House is adopting a “whole-of-government” strategy that spans privacy, consumer protection, corporate governance, copyright law, trade policy, labor law, and industrial policy to deal with the AI trajectory.
The report concludes that the more seamlessly these regulatory frameworks are integrated, in the United States and globally, the more effective the process will be. By leveraging the collective power of diverse regulatory mechanisms, AI can become a force for the common good, guided by democratic principles and serving the welfare of the people.
DCN submits comments on Artificial Intelligence
In response to the US Copyright Office request for comments, DCN submitted these comments regarding the legal and public policy concerns associated with artificial intelligence (AI), specifically Generative AI (GAI).
In these comments, we noted the strong protections for copyright holders and the importance of such protections for society at large. We also argued that GAI systems cannot claim a blanket “fair use” exemption for their use of copyrighted content. In light of the fact that over 10 lawsuits had been filed at the time these comments were filed, we asked the Copyright Office to “make clear that use of copyrighted works to train AI is not per se a fair use, and to explain how each of the fair use factors would apply to various AI training scenarios.” Our hope is that clarification from the Copyright Office will allow courts to move expeditiously to resolve copyright infringement claims, provide greater certainty for copyright holders, and hasten the burgeoning marketplace for the licensing of copyrighted works by AI companies.
The case against Meta’s manipulative business model
This week, 41 state attorneys general along with the District of Columbia filed lawsuits against Meta for creating highly addictive features that harmed the mental and physical health of children. The lawsuit is the latest in a series of revelations, inquiries and legal challenges focused on Meta’s allegedly misleading and negligent behavior regarding the impact of its services on children and teenagers.
It’s unusual—and significant—for so many states to unite in a bipartisan effort to hold a Big Tech company accountable for consumer harms. The coordination shows states are prioritizing the issue of children and online safety and combining legal resources to fight Meta, in a similar vein to prior actions against Big Pharma and Big Tobacco.
Actively addictive by design
For years, public health organizations and consumer groups have warned about the dangers of social media use by teens and children. Key features of Instagram and Facebook have been specifically called out as harmful. Numerous studies have shown that this segment of the population is especially susceptible to harmful psychological effects from design features, such as the “like” button, which research has found to be one of the most toxic components of social media.
Meta-designed notifications are particularly effective at repeatedly drawing young consumers back into their platforms while the Meta-designed algorithm keeps them engaged in the service for as long as possible so that the company can serve microtargeted ads. Such features include “infinite scroll,” persistent notifications and alerts, and autoplay of Stories and Reels. Other studies have shown that filters and other photo-altering features increase the incidence of body image issues among teenage girls.
The states’ lawsuit alleges that Meta deployed all of these tactics and more to “discourage young users’ attempts to self-regulate and disengage with Meta’s platforms.” The states included an enlightening direct quote from Sean Parker, founding CEO of Facebook:
“The thought process that went into building these applications, Facebook being the first of them . . . was all about: “[h]ow do we consume as much of your time and conscious attention as possible?” That means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that’s going to get you to contribute more content and that’s going to get you . . . more likes and comments. It’s a social-validation feedback loop . . . exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology. The inventors, creators—me, Mark [Zuckerberg], Kevin Systrom on Instagram, all of these people—understood this consciously. And we did it anyway.”
The lawsuit goes a step further to allege that Meta misled the public about the dangers of using their services. In 2021, former Meta employee Frances Haugen came forward as a whistleblower to not only confirm that these harms were happening to children, but also reveal that Meta executives knew all along about the dangers from their own internal studies but chose to put profits over the safety of their products. In addition, the lawsuit alleges that Meta attempted to push the public narrative in the opposite direction by “routinely publish(ing) profoundly misleading reports purporting to show impressively low rates of negative and harmful experiences.”
Wider implications to watch
The proceedings inside the courtroom will be fascinating to watch. But I will also be closely watching two things outside of the courtroom:
Advertiser response
First off, it will be fascinating to see whether advertisers will change their buying habits in the wake of these allegations. Advertisers have known about the problems associated with social media for years. Despite some public hand-wringing from their trade association and boycott threats, marketers’ buying habits are largely the same today. At some point, one would expect marketers to shift their ad budgets away from financially supporting this toxic content platform to premium environments that better pair with the brand identity they want to cultivate.
Regulatory response
Secondly, I have to wonder if this may spur Congress to finally pass meaningful privacy or kids safety legislation. Every time there is a scandal and/or lawsuit involving one of the big tech platforms, there are renewed calls for legislation to regulate how they collect and use consumer data, to impose liability for the harms occurring on their services, and to create rules for how algorithms can be deployed among other things.
As a result of press coverage of these allegations, some of these bills might even get approved by the relevant committee(s). However, to date, none have been brought to the House or Senate floor for a vote. This Congress is particularly dysfunctional, but there is a decent chance that public officials heading into an election year might be shocked enough to coalesce around putting some guardrails on social media companies. Parents of children can be an influential voter base.
Stepping back a bit, all of these revelations about the dangers of social media and the abhorrent behavior of social media companies continue to fuel a global conversation about the role and impact of data, algorithms, surveillance advertising and unfiltered content. Lawsuits and legislation, which are getting smarter and more focused, will continue to draw headlines and potentially lead to liability for the worst actors. In the meantime, I am going to go give my kids an extra hug.
Five plain truths about AI
“The rise of AI is an existential threat for media companies.”
“The rise of AI is a disruptive opportunity for media companies greater than the Internet itself.”
I overheard both statements in the last week. How can both be true at the same time?
While I may not be able to square that circle, I do know that DCN has spent the last decade focused on the future and not shying away from difficult questions like these. And, for the past six months or more, we have been among those immersed in the impending upheaval and unprecedented opportunity heralded by everyone from AI doomsayers to evangelists.
While the questions about the future of AI in the media are far from answered, there are a few plainly obvious truths emerging as we explore the full potential of AI.
- The Large Language Model (LLM) data sets on which generative AI is being trained have been built upon what may well be the most extensive violation of copyright in history. The power and promise of AI to reshape industries is rooted in intellectual property that is a necessary ingredient in the equation. That bad math, that bad faith, must be recalculated and recalibrated in order for AI to evolve in a way that aligns with the true spirit of this extraordinary innovation.
- Many challenges of the last decade remain constant in the AI era. Market power and abuse is a profound problem. It would be naive to rely on the generosity of trillion-dollar companies, or to treat negotiations over training their large language models as siloed from the impact on, and needs of, the media business as a whole.
Consider the way in which Google has historically argued that it doesn’t detract from media sites’ revenue because it drives traffic to them. On the contrary, it is well understood that “search results” have become overwhelmed with advertising and offer “snippets” (scraped from publishers’ sites) that often satisfy the user without having to click through. Generative AI takes this much further by allowing the search engine to compile information from a multitude of sites—without necessarily crediting any of them, much less driving traffic.
- Privacy concerns around LLMs need more attention. Somehow the excitement and ready access to real-time output has swept this under the rug. Recent history should have taught us better.
Clearview AI, infamous for scraping billions of images across the internet without consent to fuel facial recognition, is the subject of a new book, Your Face Belongs to Us. And we learned in unsealed court docs earlier this year that Facebook used data brokers to train its machines to microtarget ads when they were forced to stop buying data outright. LLMs create a deep new well of data that is being opaquely collected and that will inevitably be exploited in ways consumers would not expect—or approve of.
- Generative AI will increasingly be used for storytelling, whether in the fields of news or entertainment. However, responsible and successful media organizations recognize its limitations, and human hands will still shape the creative output of these tools. As long as this storytelling involves humans at any point in the creative process, this content will require protection under the law. Otherwise, the devaluation of creativity and truth will be inevitable.
- The sustainability of the free press is an essential ingredient for democracy. A free press supports an informed public, which holds the powerful accountable. Healthy competition and capitalism have unlocked opportunities and efficiencies that media companies have benefited from, and there’s no reason to believe that the AI era will be different. However, given the unhealthy dominance of the big technology companies, the last decade has been perilous for the press.
Therefore, any conversation around the future of AI must be anchored on the needs of an informed public, which starts and ends with an ecosystem that supports professional local and national newsrooms.
Given what we have witnessed over the past decade in the proliferation of mis- and disinformation, which has leveraged technology and vacuums in trust, the generative power of AI must give us pause. With power comes responsibility, and these are tools that we must use, and govern, wisely.
As someone who is listening, reading and thinking about what’s next as a full-time job, the acceleration of AI and its impact on media has got me on the edge of my seat. I’ve witnessed firsthand what media organizations have accomplished with AI for decades, and eagerly anticipate continued innovation. I also respect and acknowledge the efforts of media organizations to defend their work product, their creative output, the reporting, writing, photography, cinematography… as so much more than a mere data set.
We know our work. We know our worth. And we know our audiences and respect their values, which is why they value us. While the questions and innovations will keep on coming, there are unequivocal truths that should guide us as we continue to build a strong media ecosystem.
Publishers and platforms face off over the value of news
Internationally, regulators are increasingly taking measures to address the impact that platforms have on the news business. In response, big tech platforms are trying to make the case that news is not central to their popularity or success, and going so far as to block news and political content in the face of new journalism-funding regulations taking shape around the world.
“This is now a global phenomenon and big tech platforms like Meta and Google can’t keep using bullying scare tactics. They have to show up and be prepared to negotiate meaningfully,” according to Jordan Guiao, research fellow at The Australia Institute’s Center for Responsible Technology.
Lawmakers in many different jurisdictions have begun to respond to the critical damage big tech platforms have inflicted on the funding model of the world’s media business. Those steps include the News Media Bargaining Code in Australia, the Online News Act in Canada and the proposed Journalism Competition and Preservation Act in the U.S. (which Digital Content Next has endorsed). These laws are designed to require these tech companies to compensate publishers for news content that is shared, or found, on their platforms.
To bolster its position, Meta points to a study it commissioned from NERA Economic Consulting, which concluded that having platforms pay for news content wasn’t justified. However, advocates for the media industry dispute the accuracy of the findings.
“People who consume news tend to spend a lot of time on a platform,” said Paul Deegan, chief executive of Canadian trade association News Media Canada. “They go there for news. They come back for more. They’re an attractive demographic in terms of skewing higher on educational attainment and income.”
“Just in terms of value, [big tech platforms] get tremendous value from news,” he said.
Devaluing the news
However, according to Instagram chief Adam Mosseri: “[F]rom a platform’s perspective, any incremental engagement or revenue they might drive is not at all worth the scrutiny, negativity (let’s be honest), or integrity risks that come along with them.”
So, in response to increasing pressure to compensate publishers for news, and their own research findings that it doesn’t provide sufficient value to them, platforms have pulled back on including news content on their services.
In February of 2022, Facebook dropped the word news from users’ “news feed.” And in June this year, publishers noted a significant drop-off in traffic from the site, suggesting an algorithmic adjustment to devalue news. And, after the launch of Threads earlier this month, Mosseri said that the platform would not take any steps to “encourage [politics and hard news] verticals.”
Tit-for-tat
In a direct response to policy efforts platforms have gone so far as to block news content altogether. In 2021, Meta not only blocked users in Australia from seeing news content on Facebook but prevented them from posting links to any news stories, regardless of where they were published. In less than a month, the company relented and has since signed licensing deals with publishers in Australia. Job postings in the country’s media sector are up 46% as of April this year.
However, with the passage of the Online News Act in June, Meta and Google have taken a harder tack against these regulatory efforts. Already the platforms have canceled previously struck deals with Canadian publishers and have started to block news in the country.

Reprinted with the permission of Luke LeBrun (@_llebrun) Editor, PressProgress
In February, Google conducted a test to assess the impact of blocking news access for Canadians altogether as it evaluated possible responses to the Act. The company is reportedly holding off on fully implementing its response until the regulations are made by the Canadian Radio-television and Telecommunications Commission. Since the passage of the Act last month, Meta has started to intermittently block Canadian news sources on its various services.
According to Canadian Prime Minister Justin Trudeau, Canada has no plans to back down. In fact, they’ve taken the fight to a familiar battlefield on this issue: advertising. Publishers and the federal government in Canada have pulled their advertising from Meta’s services. Per regulatory filings, Canada accounts for $3 billion of Meta’s $117 billion in annual revenues. “The company is running the very real risk of losing more in revenue than they would pay news businesses under the Online News Act,” Deegan told the Financial Times.
The battle has escalated to news publishers rejecting Meta’s ads on their sites, which were purportedly intended to inform Canadian audiences about the news blocking initiated by the big tech platforms in Canada.
Next steps?
Navigating what comes next will be the challenge facing both regulators and publishers.
As of now, a version of the JCPA in California passed the state assembly in a 55-6 vote in June. Despite this, the bill has been put on hold for two years, with an initial state senate hearing scheduled for July 11 this year pushed back to 2024.
Other commentators in Canada view the contentious moves in response to early attempts to regulate the big platforms as an opportunity to further address the wider problems caused by the big tech companies.
“While the government of Canada certainly does not have the power to go back in time and block the consolidation that has occurred in the digital ad market, it is able to empower our competition and privacy commissioners to conduct an investigation into how Big Tech operates in the Canadian ad tech market,” wrote Taylor Owen, Beaverbrook Chair in media, ethics and communications, and Supriya Dwivedi, director of policy and engagement at the Centre for Media, Technology and Democracy.
As well, there’s skepticism that Meta and Google can survive the reputational risks of continuing to block legitimate news sources on their platforms.
“Essentially, I don’t see blocking of news as a viable action for Meta and Google. From the Australian perspective, the news block was a bluff, and we called them out on it. In Canada I believe this will be much the same. And if they prolong it in any way, their reputation as a platform for misinformation will only be validated,” said the Australia Institute’s Guiao. “Meta and Google need news content to legitimize the accuracy, dependability and truthfulness of the information in their platforms.”
Why Lina Khan won’t back down
It’s no small thing to be named Chair of the Federal Trade Commission (FTC). Unsurprisingly, Lina Khan’s path to that position is an impressive one. Significantly, it was characterized by deep study and evaluation of monopolies and antitrust law. Absolutely everyone expected this to be central to her tenure, and she has not disappointed in that regard. With big tech companies in her sights, she’s taken on Meta for antitrust, Google for its ad practices, and Amazon for its consumer practices around Prime.
But there are those questioning Khan’s strategy of aggressively filing cases. The criticism built to a crescendo recently when U.S. District Court Judge Jacqueline Scott Corley declined to block the Microsoft acquisition of Activision, handing a defeat to the FTC, which had hoped to snuff out a potential monopoly in its infancy.
The FTC alleged that Microsoft’s acquisition of Activision (and its bevy of popular games such as Call of Duty) could lead to a dominant position for Microsoft in the gaming sector. Microsoft quickly sought to nip the FTC’s concerns in the bud by agreeing to keep popular game titles available on rival platforms. There is, however, some debate around whether or not that was enough to address competition issues.
There’s also some debate about whether Judge Corley correctly interpreted the law. Matt Stoller notes that the Clayton Act guards against mergers that “may substantially lessen competition” while Judge Corley stated in her opinion that FTC had not proven the deal “will substantially lessen competition.” Unfortunately for the FTC, the shift from “may” to “will” sets a substantially higher bar and one that courts have supported in recent history.
Corley’s decision has led numerous groups (some of which receive an outsized portion of their funding from big tech companies) to loudly question Khan’s strategy at the FTC. It’s a bit hard to take these Monday morning quarterbacks very seriously, however, because some of these same groups also argue at every turn to limit the FTC’s regulatory authority. Their criticism of the FTC and the Department of Justice Antitrust Division is so routine and frequent that you are left to wonder whether they envision any role for the federal government in ensuring a fair marketplace. Most, however, understand that the FTC plays a critical role in enforcing federal consumer protection laws that prevent fraud, deception, and unfair business practices—including those that are anticompetitive.
As anticipated, Khan is clearly trying to shift the courts’ interpretation of competition and antitrust law. Of her strategy, she’s unabashedly stated that “you lose 100% of the shots you don’t take.” And given her track record so far, with a rumored major antitrust suit against Amazon in the works and newly released merger guidelines from the FTC and Department of Justice, she’s going to keep taking shots—big ones. On top of these high-profile cases, as Jessica Rich, former Director of the FTC Bureau of Consumer Protection, opined earlier this year, the FTC has seemingly shifted its focus from “whack-a-mole” enforcement to broader rulemaking efforts.
Khan’s effort to move the FTC into a stronger enforcement position reflects a global trend, as regulators around the world ramp up their competition and antitrust efforts. For example, Europe recently enacted the Digital Markets Act and Digital Services Act, which will lead to a new wave of crackdowns on dominant companies. This isn’t just Khan’s concern, or an American concern. Policymakers around the world are trying to level the playing field.
As to Khan’s efforts to step up the FTC’s position as rule maker and enforcer, it is critical to recognize that the history of the FTC’s authority and posture is characterized by ebbs and flows. Courts and Congress have empowered or restrained the FTC at various points in our history. For most of the last 20 years, FTC Chairs have had to operate looking over their shoulders.
Former Chair Jon Leibowitz secured high profile consent decrees with Google and Facebook over consumer privacy violations. These decrees were heralded as landmark agreements and have also provided the FTC with access and leverage with these companies. However, it is important to note that the consent decrees were pursued instead of expensive, riskier lawsuits. Now, with the benefit of hindsight, many have criticized the FTC for not doing enough to protect consumers and prevent monopolization. And this is where Khan’s leadership comes into play in a big way.
Changing the courts’ interpretation of competition law is not easy. But this change—modernization—happens in every facet of law, and it is necessary to adapt to new factors. Until and unless Congress can break the permanent state of gridlock, we should support Khan’s efforts to establish a more proactive and robust role for the FTC, one that can foster more healthy competition to the benefit of consumers and businesses.
Opener art: Lina Khan, Competition and Regulation in Disrupted Times, Brussels, Belgium
Used under a CC BY-SA 2.0 license. Credit: Cory Doctorow
DCN stands with global media community as Google and Meta threaten to take down news in Canada
DCN stands with publishers around the globe in reaction to Facebook and Google’s efforts to undermine Canada’s new law to help address the imbalance in market power. Digital Content Next was one of 18 media organizations worldwide that issued a joint statement on July 5, 2023 in response to Google and Meta’s threat to take down news in Canada after Canada’s parliament passed the Online News Act (C-18) in June.
In the words of Canadian Prime Minister Justin Trudeau, “This is not just a dispute over advertising, it is also a dispute over democracy. It’s a question of recognizing the role internet giants—like Facebook (Meta), Google and others—have in our lives and therefore the responsibility they also wield. …this goes to the core of a free and informed society that is able to take responsible decisions. In a democracy, citizens need to have access to quality journalism that is properly paid.
The fact that Facebook doesn’t want to recognize the hard work of professional journalists is something that undermines the very fabric of democracy. So, Canada—and allies around the world—are going to stand strong and demonstrate that we will not flinch in our defense of fundamental foundational principles of democracy like a free, quality, informed press.”