
Generative AI: threats and promise for a new world

April 20, 2023 | By Chris Pedigo, SVP Government Affairs – DCN (@Pedigo_Chris)

At DCN, we’ve been diving deep into Generative AI – trying to understand what it is and what it isn’t, and to sort through the implications for professional content creators. We have met with hundreds of executives, tech specialists, journalists, artists, and creators who work within our member companies to create and monetize premium original content. We’ve heard first-hand what the content community is excited about and what gives them pause. Based on these conversations and our own research, we see several topics around Generative AI (GAI) that merit a closer look.

Protection for creators

Generative AI systems need to be designed in a way that respects professional content creators and the value of their IP. Perhaps the biggest concern is the loss of control over content in which creators have heavily invested. GAI systems train on digital content such as journalism and photography, but they also scrape that content for reuse in a variety of ways and for a variety of purposes – some of which may undermine the business model of the content creator.
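
To make the scraping mechanics a bit more concrete, here is a minimal illustrative sketch – the site and crawler names ("example.com", "ExampleAIBot") are hypothetical, not drawn from any real deployment – of how a well-behaved crawler would consult a publisher’s robots.txt file before fetching a page, using Python’s standard urllib.robotparser module:

    from urllib import robotparser

    # Hypothetical names, for illustration only.
    SITE = "https://example.com"
    CRAWLER_USER_AGENT = "ExampleAIBot"

    # Load and parse the publisher's robots.txt, which states which
    # crawlers may fetch which paths on the site.
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{SITE}/robots.txt")
    rp.read()

    # A compliant crawler checks permission before scraping an article page.
    article_url = f"{SITE}/news/some-article"
    if rp.can_fetch(CRAWLER_USER_AGENT, article_url):
        print("robots.txt permits fetching this page")
    else:
        print("robots.txt disallows fetching this page")

Of course, robots.txt is a voluntary convention rather than an enforcement mechanism: a crawler that ignores it faces no technical barrier. That gap is a large part of why creators worry about losing control over where and how their content is reused.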

While the scraping of original content for training may seem relatively benign, Generative AI companies clearly stand to reap huge benefits from it. The use of premium trusted content helps these systems understand cultural nuances and avoid misinformation and factual inaccuracies. If the old axiom of “garbage in, garbage out” applies, companies profiting from GAI should value that premium trusted content appropriately.

That said, scraping content for reuse seems highly problematic, as the creator may lose the ability to distribute and monetize the content as they see fit. We’ve heard stories of Generative AI systems serving subscription content to non-subscribers and producing amalgamations of general news content without any attribution to the organizations that funded its creation. Stripping content creators of control over their work effectively deprives them of the ability and incentive to invest in new content. It seems like there should be a win-win here, in which GAI systems fairly compensate and credit professional content creators for their work.

Transparency

Companies that develop GAI systems should be transparent with the public so that we can fully assess the impacts on our society and address the consequences. If Generative AI fulfills its promise, then whole industries and sectors will be radically changed. In particular, consumer experiences with news and entertainment are likely to be upended.

With all that change, intended and unintended consequences are likely. Generative AI systems have been shown to be flat-out wrong, inventing convincing-sounding sources or generating “quotes” and articles to back up their responses – material that may be misattributed to individuals, institutions, or media brands.

There is an open question about the extent to which these systems could be manipulated by bad actors looking to flood the public square with misinformation. As anyone can see, the world isn’t a perfect place. And, sadly, the internet is loaded with hateful, harmful speech. More subtly, however, there are conscious and unconscious biases baked into all kinds of reporting and opinion pieces. To the extent that Generative AI systems train on content scraped from the web, they may be recycling hate speech or perpetuating biases.

Therefore, allowing these systems to be developed and operated as black boxes would invite all kinds of problems, with no opportunity to correct mistakes before they run wild. Most of the early legislation around AI calls for transparency so that regulators and researchers can assess potential impacts and offer solutions before actual harms occur.

Competition

The tech industry currently has a competition problem – a lack of competition, to be exact. While the dominant companies are under intense antitrust scrutiny, there is a concern that they may use the hype cycle around Generative AI to wrongly argue that competition exists and that regulators need to back off.

But the reality is that policymakers should be watchful to ensure that major tech companies don’t extend their dominance into the AI market. Generative AI systems are expensive to create and maintain. With such a high barrier to entry, there is a danger that the biggest, most powerful tech companies will leverage their position and unrivaled resources to dominate the nascent GAI field.

This same playbook has been used before – Facebook leveraged its dominant position to buy out the competition. Google used its search dominance to acquire a monopoly position in digital advertising infrastructure. And Amazon continues to leverage its position to drive competing ecommerce platforms into irrelevance. We should be careful to ensure that GAI systems aren’t used to maintain or increase dominance in existing markets, and that they aren’t used to self-preference services from a handful of major companies to the exclusion of competitors.

TL;DR

Undoubtedly, Generative AI is an exciting new tool that offers efficiencies for artists, journalists, and other creators, along with the promise of a bright new future for industries and consumers alike. And these changes are coming fast: the speed and scale of GAI development and deployment will only accelerate.

In the right hands, these new tools could revolutionize the way we interact, create, solve problems, and much more. However, bad actors also have access to these powerful tools. Therefore, problems with competition, misinformation, election meddling, and sabotage are likely to worsen.

As an industry and society, we should encourage the development of ethical guardrails to minimize the harms and maximize the potential associated with Generative AI technologies. In doing so, we can strengthen our commitment to protect consumers and incentivize new and established voices.