What newsroom leaders say matters most in AI adoption
As AI enters editorial workflows, publishers are emphasizing clear policies, staff upskilling, and strong journalistic judgment, treating generative tools as support rather than a substitute.
February 9, 2026 | By Jessica Patterson – Independent Media Reporter
Publishers enter 2026 facing unrelenting pressure to innovate with generative AI, a pressure that collides with the need to protect editorial standards and audience trust. As AI slop floods the media ecosystem, publishers are realizing that their competitive advantage isn’t how fast they can use AI, but how safely they can integrate it, because integration shapes trust and the direct relationships they have with their audiences.
Recent research in Canadian newsrooms reveals a cautious reality. Amid the hype, media leaders are not hiring for AI expertise. Instead, they’re doubling down on core journalism skills, treating AI as an efficiency tool rather than a replacement for human judgment. Notably, this research focused specifically on the editorial use of AI, where the priority is preserving audience trust, rather than on the use of AI for business growth or commercial purposes.
Terra Tailleur and I conducted interviews with CEOs and editors-in-chief at 12 media organizations, ranging from public broadcasters, national news outlets and wire services, to regional dailies and independent digital startups, over the course of eight months to explore how Canadian newsrooms are using and adopting AI in their editorial practices. Three practical approaches to responsible AI adoption have emerged for media leaders:
Trust-first guardrails
Our findings suggest a growing divide by outlet size. Larger organizations, like the Canadian Broadcasting Corporation (CBC), The Globe and Mail, and The Canadian Press, have robust guardrails and policies. Smaller outlets, constrained by time and resources, often rely on informal oversight, which relegates ethical boundaries to individual intuition rather than documented standards.
In a recent HEC Montréal study of over 400 journalists, 36% of those surveyed didn’t know whether their organization even had an AI policy. The operational challenge for publishers, then, lies not in drafting policies but in communicating and enforcing them clearly and consistently, from the executive corner office to reporters’ desks.
Small newsrooms don’t have the budgets of national broadcasters or wire services, so they are pushed toward simpler, more practical models. At Cabin Radio in Yellowknife, for example, editor Ollie Williams says that AI experimentation “happens so far off the side of the desk that it’s like the movie Inception and it’s like the desk has folded back in on itself three times before I get to it.” Running day-to-day operations with an editorial team of four leaves him no time to research AI uses or meet with staff to develop formal policy.
For resource-strapped newsrooms, a simple governance model can start with a clear approval process, requiring editor sign-off for all AI use. Newsrooms should actively disclose when and how they use AI, prioritize transparency with audiences, and train staff on verification rather than technical skills.
For 2026, the goal for smaller publishers isn’t to draft a 50-page manual. Rather, they should try to establish living documents that provide clear guidance for daily tasks. As DCN outlined, “staff need clarity about when and how to use AI and how to validate its output.”
Upskilling and in-house training
The media organizations we spoke with weren’t hiring engineers with only a surface-level interest in news for editorial roles. Instead, they are conducting in-house training to close the tech gap, on the view that upskilling existing staff who already understand the brand’s voice, ethics, and audiences is more effective than bringing in tech-first hires who may lack journalistic DNA.
Tools can be taught in a controlled environment. The CBC, for example, aimed to train every employee, from summer hires to 30‑year veterans, with a full‑day AI program adapted from Radio‑Canada. This approach keeps AI adoption grounded in newsroom culture, not vendor experimentation.
Priority on core journalism skills
Across the board, editors-in-chief emphasized that AI expertise comes second to strong reporting fundamentals. More than the ability to write a clever prompt, they were looking for curiosity, critical thinking, and strong judgment, along with the ability to interview people, build sources, and find good stories: the fundamental skills that define quality journalism.
Toronto Star editor-in-chief Nicole MacIntyre told us that while the next generation will “absolutely need to embrace the tools that can help them work smarter and more efficiently… their success will still depend on having the core skills to adapt in a fast-changing media environment: curiosity, critical thinking, strong judgment, and a commitment to truth.” Ultimately, AI fluency matters, but only on top of reporting fundamentals.
Considerations before rolling out ambitious AI products
The test for managers and boards this year lies in day-to-day governance: Are newsrooms giving editors clear boundaries on generative AI, or leaving it to gut instinct? Before scaling AI deeper into editorial routines, here are pointed questions to ask:
- Do frontline editors know the exact off-limits line for generative AI, or are we relying on vibes?
- Are we allocating time and tools to verify AI-assisted content, or prioritizing speed over trust?
- Do budgets support upskilling current staff on AI literacy, or are we waiting for perfect new hires?
- With a third of the industry unaware of AI policies, what steps turn intranet PDFs into daily habits?
Publishers entering 2026 face practical choices about how AI fits into editorial workflows. The Canadian newsrooms we spoke with are moving cautiously, focusing on guardrails, staff training, and core reporting skills rather than rapid experimentation. Their approach suggests that AI in journalism will be shaped less by hype than by the daily realities of newsroom capacity, oversight, and editorial judgment.

