As media companies’ use of artificial intelligence grows, so do questions from audiences and advertisers about how they’re using it. While these companies experiment with ways to integrate AI tools into their businesses, there’s also a need for greater transparency and disclosure around that use.
Here are several ways media companies are leveraging AI in their operations, along with practical tips for enhancing transparency and fostering open communication with audiences.
How media companies are implementing AI
If the media industry’s past year with AI could be summed up in one word, it would be experimentation. Many media companies have begun adding AI tools to their workflows to accomplish tasks such as brainstorming headlines, transcribing interviews, summarizing research or analyzing data. These use cases help media companies improve workflows and increase efficiency.
Jennifer Bertetto, president and CEO of Trib Total Media and 535media, said her company employs AI-driven solutions to streamline reporting processes and surface content for its audience.
“AI is instrumental in personalizing content recommendations, refining advertising strategies and driving overall business efficiency,” Bertetto said. “By harnessing AI’s capabilities, we are committed to delivering superior content, fostering deeper audience engagement and ensuring sustainable growth.”
Best practices for disclosing AI use
A recent survey by Trusting News and the Online News Association (ONA) revealed that nearly 94% of news consumers want newsrooms to disclose how they use AI. More than half also want to know which tools were used and how they were applied during the reporting process.
To help media companies determine how to communicate this to their audiences, Poynter developed a guide to creating an AI ethics policy. Its recommendations include forming an AI committee with representatives from all departments to weigh in on issues and set policies.
Aaron Kotarek led the AI task force at the Honolulu Star-Advertiser/Oahu Publications, Inc. (OPI), where he was senior vice president of audience and operations; he is now general manager and chief operating officer at The Spokesman-Review. “We took great pride at OPI as it pertained to accuracy and credibility. We did not want to risk our reputation in the communities we serve by executing something rash,” Kotarek said.
Poynter also recommends dividing AI decision-making into three categories: audience-facing uses, business uses and back-end reporting. It suggests that media organizations develop standards for each of these areas.
Industry resources for media companies
In addition to Poynter’s starter kit for newsrooms, there are additional resources media companies can turn to for guidance.
Trusting News, an organization that helps journalists earn audience trust, created an AI Trust Kit to help newsrooms and media companies establish internal guidelines and public-facing policies for governing AI practices. The kit also offers tips on what to disclose, including why journalists use AI, how they use it and how human oversight is integrated into the editorial process.
The National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, created an AI Risk Management Framework to improve the trustworthiness of AI. The framework helps companies think about, communicate, measure and monitor AI’s potential risks and positive impacts.
The Alliance for Audited Media is also working with publisher clients to help them navigate AI concerns and understand and implement industry best practices, drawing on its experience certifying media companies in privacy and data protection programs and in journalism ethics.
As AI continues to shape the media landscape, responsible implementation and transparent disclosure are key to maintaining audience trust. By embracing best practices and leveraging industry resources, media companies can continue to innovate while upholding their commitment to quality and accountability.