
Five plain truths about AI 

August 24, 2023 | By Jason Kint, CEO – DCN @jason_kint

“The rise of AI is an existential threat for media companies.”  

“The rise of AI is a disruptive opportunity for media companies greater than the Internet itself.”  

I overheard both statements in the last week. How can both be true at the same time? 

While I may not be able to square that circle, I do know that DCN has spent the last decade focused on the future and not shying away from difficult questions like these. And, for the past six months or more, we have been among those immersed in the impending upheaval and unprecedented opportunity heralded by everyone from AI doomsayers to evangelists. 

While the questions about the future of AI in the media are far from answered, there are a few plainly obvious truths emerging as we explore the full potential of AI.

  1. The Large Language Model (LLM) data sets on which generative AI is being trained have been built upon what may well be the most extensive violation of copyright in history. The power and promise of AI to reshape industries is rooted in intellectual property, a necessary ingredient in the equation. That bad math, that bad faith, must be recalculated and recalibrated in order for AI to evolve in a way that aligns with the true spirit of this extraordinary innovation.
  2. Many challenges of the last decade remain constant in the AI era. Market power and its abuse remain a profound problem. It would be naive to rely on the generosity of trillion-dollar companies, or to silo negotiations over training their large language models from the impact on, and the needs of, the media business as a whole.

    Consider the way in which Google has historically argued that it doesn’t detract from media sites’ revenue because it drives traffic to them. On the contrary, it is well understood that “search results” have become overwhelmed with advertising and offer “snippets” (scraped from publishers’ sites and used for training) that often satisfy the user without requiring a click through. Generative AI takes this much further by allowing the search engine to compile information from a multitude of sites without necessarily crediting any of them, much less driving traffic to them.
  3. Privacy concerns around LLMs need more attention. Somehow the excitement around, and ready access to, real-time output have swept this under the rug. Recent history should have taught us better.

    Clearview AI, infamous for scraping billions of images across the internet without consent to fuel facial recognition, is the subject of a new book, Your Face Belongs to Us. And we learned in unsealed court documents earlier this year that Facebook used data brokers to train its machines to microtarget ads after it was forced to stop buying data outright. LLMs create a deep new well of data that is being opaquely collected and that will inevitably be exploited in ways consumers would not expect or approve of.
  4. Generative AI will increasingly be used for storytelling, whether in news or entertainment. However, responsible and successful media organizations recognize its limitations, and human hands will still shape the creative output of these tools. As long as this storytelling involves humans at any point in the creative process, this content will require protection under the law. Otherwise, the devaluation of creativity and truth will be inevitable.
  5. The sustainability of the free press is an essential ingredient for democracy. A free press supports an informed public, which holds the powerful accountable. Healthy competition and capitalism have unlocked opportunities and efficiencies that media companies have benefited from, and there’s no reason to believe that the AI era will be different. However, given the unhealthy dominance of the big technology companies, the last decade has been perilous for the press.

    Therefore, any conversation around the future of AI must be anchored on the needs of an informed public, which starts and ends with an ecosystem that supports professional local and national newsrooms.

Given what we have witnessed over the past decade in the proliferation of mis- and disinformation, which has leveraged technology and vacuums in trust, the generative power of AI must give us pause. With power comes responsibility, and these are tools that we must use, and govern, wisely.

As someone whose full-time job is listening, reading, and thinking about what’s next, the acceleration of AI and its impact on media has me on the edge of my seat. I’ve witnessed firsthand what media organizations have accomplished with AI for decades, and I eagerly anticipate continued innovation. I also respect and acknowledge the efforts of media organizations to defend their work product, their creative output, the reporting, writing, photography, cinematography… as so much more than a mere data set.

We know our work. We know our worth. And we know our audiences and respect their values, which is why they value us. While the questions and innovations will keep on coming, there are unequivocal truths that should guide us as we continue to build a strong media ecosystem.