As media companies expand the use of AI to drive efficiency, many are seeking best practices to ensure AI is implemented responsibly and with minimal risk. A recent IAB study found that only one-third of publishers, brands and agencies have adopted formal AI governance tools, which underscores the need for greater accountability and transparency when AI is used.
But where should publishers start? An ethical AI framework provides a blueprint for how organizations can manage risk, protect data and demonstrate compliance when integrating AI into their operations.
Below are the eight pillars of ethical AI, each paired with a real-world example. Implemented together, these best practices create a strong foundation that helps media companies strengthen trust with audiences and partners.
1. Ethical AI Policies
Establishing clear policies that define how AI is being implemented provides the foundation for responsible AI governance. These policies should reflect a commitment to transparency, accountability, minimizing bias and protecting user privacy, and should be reviewed regularly to ensure they evolve with industry developments and regulatory changes.
Example: The Associated Press created a comprehensive policy around its use of generative AI, including who is responsible for monitoring the accuracy of AI output.
2. Transparency and Disclosures
Transparency helps establish greater trust in media. In addition to having an AI policy, publishers should make these policies publicly available and include disclosures when AI is used. A consistent communication strategy, supported by public-facing AI policies, strengthens accountability and promotes confidence among readers, advertisers and partners.
Example: Bay City News, a San Francisco Bay Area nonprofit news organization, publicly shares how the team uses AI and adds in-depth context about the process behind each project.
3. Rights and Permissions
It’s also important that media companies secure applicable rights and permissions and the appropriate level of consent to use the information powering AI solutions. This might include establishing internal safeguards to prevent the misuse of proprietary or third-party content. This helps protect media companies from legal challenges and reinforces ethical content development practices.
Example: The New York Times provides guidelines for its staff that copyrighted material should not be input into AI tools, which prevents potential misuse of third-party content in AI training.
4. Accountability and Human Oversight
Because AI systems can hallucinate and reproduce bias, human oversight is essential. A “human-in-the-loop” approach ensures that AI outputs are reviewed before they are published; a simple sketch of such a review gate follows the example below. Organizations should assign clear roles for managing AI tools and outputs, and make sure qualified individuals are accountable for how these systems are used.
Example: The USA TODAY Network adds disclaimers to AI-generated article summaries that explain how the key points were created and disclose that a journalist reviewed them before publication.
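To make the idea concrete, here is a minimal, hypothetical sketch of a human-in-the-loop review gate in Python. The names (Draft, approve, publish) and the disclosure wording are illustrative assumptions, not a description of any newsroom’s actual system; the point is simply that AI-generated copy cannot be published until a named editor signs off.

```python
# Illustrative sketch of a human-in-the-loop publishing gate.
# All names (Draft, approve, publish) are hypothetical stand-ins for
# whatever CMS or workflow tooling a newsroom actually uses.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Draft:
    body: str                       # AI-generated text awaiting review
    ai_generated: bool = True
    reviewer: Optional[str] = None  # set only when a qualified editor signs off
    approved_at: Optional[datetime] = None

def approve(draft: Draft, reviewer: str) -> Draft:
    """Record which editor reviewed the AI output and when."""
    draft.reviewer = reviewer
    draft.approved_at = datetime.now(timezone.utc)
    return draft

def publish(draft: Draft) -> str:
    """Refuse to publish AI-generated copy that no human has approved."""
    if draft.ai_generated and draft.reviewer is None:
        raise PermissionError("AI-generated draft requires editor sign-off before publication.")
    disclosure = f" (AI-assisted; reviewed by {draft.reviewer})" if draft.ai_generated else ""
    return draft.body + disclosure

# Publishing fails until an editor approves the draft.
summary = Draft(body="Key points generated by an AI summarization tool.")
summary = approve(summary, reviewer="J. Editor")
print(publish(summary))
```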
5. Bias and Fairness
AI models are trained on existing data, which can contain biases that the models then amplify. Media companies must implement strategies to identify and mitigate AI bias, such as monitoring AI outputs for fairness and inclusivity; a simple illustration of such a check follows the example below. Developing a bias mitigation strategy strengthens credibility and supports fair representation across content and audiences.
Example: The Reuters Institute shared several examples from media leaders on identifying AI bias, including how The Financial Times’ teams use internal checklists to assess whether AI-generated output from its content recommender system is unbiased and fair across demographics.
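As a simple illustration of what monitoring outputs for fairness can look like in practice, the sketch below checks whether recommendation exposure differs sharply across demographic groups. The data, group labels and the 80% threshold are assumptions made for this example; real monitoring would rely on a publisher’s own audit logs and fairness criteria.

```python
# Illustrative sketch of a basic fairness check on recommender output.
# The audit data, group labels and 80% threshold are assumptions for
# illustration, not any publisher's actual monitoring pipeline.
from collections import defaultdict

def exposure_rates(recommendations):
    """Share of users in each demographic group who were shown a recommendation.

    `recommendations` is an iterable of (group, was_recommended) pairs.
    """
    shown = defaultdict(int)
    total = defaultdict(int)
    for group, was_recommended in recommendations:
        total[group] += 1
        shown[group] += int(was_recommended)
    return {group: shown[group] / total[group] for group in total}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose exposure falls below `threshold` of the best-served group."""
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * best]

# Made-up audit data: group B is under-served relative to group A.
audit_log = [("A", True)] * 80 + [("A", False)] * 20 + [("B", True)] * 50 + [("B", False)] * 50
rates = exposure_rates(audit_log)
print(rates)                    # {'A': 0.8, 'B': 0.5}
print(flag_disparities(rates))  # ['B']
```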
6. Privacy and Data Protection
Responsible AI also means demonstrating a strong commitment to protecting data privacy. Companies must ensure that AI systems comply with privacy regulations such as GDPR and CCPA, and that they protect user data. Prioritizing data privacy can reduce legal risk and reinforce a company’s commitment to responsible use of technology.
Example: Graham Media Group, a Detroit-based media company, prioritizes reader privacy and security and shares its compliance with data privacy laws on its disclosure page and in its privacy policy.
7. Training and Education
Training and education are essential to implementing AI responsibly. Ongoing training ensures that staff understand the capabilities and limitations of AI tools and are made aware of policy updates as the technology evolves. This includes general training for all staff as well as proficiency training for individuals responsible for developing, deploying and monitoring AI systems.
Example: Radio-Canada launched a comprehensive AI literacy program to help staff better understand AI and how it should be used. The program includes a foundational training session that provides an overview of AI concepts and ethical considerations, as well as follow-up workshops focused on practical applications.
8. Governance and Risk Management
Effective AI governance requires a structured approach to identifying and managing risk. Organizations should embed ethical AI principles into existing policies, ensure legal and regulatory requirements are met and assign clear accountability for oversight. Regular reviews and feedback loops help detect and address emerging risks, allowing companies to adapt as technology evolves.
Example: Gannett’s AI council brings together members from various departments to discuss ideas and assess risk when using AI in new ways.
Responsible AI isn’t about slowing progress; it’s about creating a foundation that supports innovation, transparency and trust. Media companies that integrate an ethical AI framework into their processes will increase transparency, minimize risk and maintain the credibility that defines trusted media.
