From policy to practice: Responsible media AI implementation

Guidelines and examples from media leaders that will help media companies establish strong governance, minimize risk and uphold public trust while implementing AI.

June 30, 2025 | By Rich Murphy, CEO, President and Managing Director, Alliance for Audited Media

As artificial intelligence becomes more embedded in editorial and business processes, media companies face increased pressure to ensure AI is implemented responsibly. This requires companies to develop a plan for AI use that covers several areas including bias mitigation, risk management, legal compliance and long-term governance.

In my last article, I shared real-world examples of how media companies are implementing ethical AI best practices around transparency and disclosures, bias and ongoing staff education. Here we go deeper into the steps media companies are taking to reduce risk, protect privacy and maintain editorial oversight while integrating AI tools into their processes. Together, these practices form the eight pillars of ethical AI.

Ethical guidelines and standards

Establishing clear policies that define how AI is used across editorial, marketing and operational teams is essential to increasing transparency and building trust with audiences. Already, some media leaders have not only created policies around AI usage but also shared them publicly, offering useful examples for other organizations grappling with AI governance.

The New York Times outlines its AI policies as part of its ethical journalism handbook, which was developed for its editorial and opinion teams and is available to the public. The guidelines stress the importance of human oversight and adherence to established standards for journalism and editing.

The Financial Times has also made its AI governance publicly available, sharing its principles in articles that outline the specific tools staff are integrating into their workflows. It discusses its investment in skill development and its transformation into a company committed to AI fluency and innovation.

Media companies need to develop formal AI ethics guidelines that direct staff and increase transparency with the public. However, it’s equally important to regularly reevaluate these guidelines as technology evolves.

Rights and permissions

As part of their governance strategy, companies must also take steps to ensure that any content produced through AI does not infringe on intellectual property rights or violate content licensing agreements. This means securing applicable rights and permissions to use the information generated by the AI tools and creating internal processes to ensure that AI outputs do not use third-party content without permission.

The New York Times encourages staff to use AI to create content including quizzes, quote cards and FAQs. However, its guidelines state that copyrighted material should not be input into AI tools, which prevents potential misuse of third-party content in AI training.

The Guardian outlines its commitment to protecting content creators’ rights when selecting third-party AI tools by stating it will only use tools that have addressed permission, transparency and fair reward for content usage.

These practices can reduce risk and reinforce a publisher’s commitment to responsible content development.

Accountability and human oversight

Even sophisticated AI systems can produce biased, inaccurate or misleading output. To safeguard against this, media companies should take a “human-in-the-loop” approach and assign qualified individuals to oversee AI tools at every stage of use.

Bay City News, a San Francisco Bay Area nonprofit news organization, maintains audience transparency by publicly sharing how the team uses AI, including in-depth context about the process behind each project. When it created its award-winning election results hub using AI, human oversight, including fact-checking, was a vital part of the project’s success.

While the BBC prohibits the use of AI to directly create news content, it requires that AI use in other areas, such as research, be actively monitored and the outcomes assessed by an editor.

Wired also does not create content directly from AI, but the company states that if AI is used to suggest headlines or social media posts, an editor needs to approve the final choice for accuracy.

Privacy and data protection

As readers grow more concerned about how their personal data is collected and used, publishers must take steps to ensure that AI tools are deployed in ways that maintain legal compliance. AI governance must include developing transparent data collection policies and adhering to privacy regulations such as GDPR and CCPA.

Graham Media Group, a Detroit-based media company, prioritizes reader privacy and security and shares its compliance with data privacy laws on its disclosure page and in its privacy policy. The company also uses an in-house AI tool to help employees streamline their workflows without relying on free AI tools or unsecured platforms.

The BBC states in its responsible AI policy that if staff intend to include personal data in an AI tool, a data protection impact assessment must be completed prior to use.

Risk management and adaptation

Using AI introduces a range of potential risks, such as bias and fairness concerns, that must be actively managed. Effective AI governance requires continuous monitoring and a proactive approach to identifying and addressing these risks.

The BBC created an AI Risk Advisory Group that includes subject matter experts from its legal, data protection, commercial and business affairs and editorial departments. The group provides detailed advice on the potential risks of using AI both in the newsroom and across the company.

As AI technologies evolve, so must the ethical frameworks that support their use. By integrating ethical AI principles into daily operations, media organizations can protect their brands, maintain audience trust and demonstrate their value to advertisers and partners who seek reliable, trusted media environments.
