
Media organizations grapple with developing AI policies

June 28, 2023 | By Jessica Patterson – Independent Media Reporter

Generative AI tools have enormous power to change the media business, from the way we work to the data we collect to audience needs and expectations. The area is so complex, and changing so rapidly, that navigating the landscape requires a roadmap: policies that outline how media companies will use AI.

AI has been hard at work in media organizations for decades. However, as we have explored, generative AI has broader capabilities, generates more nuanced language, and is widely accessible thanks to open-source models. It can create text, images, audio, music, and code, and it could be a valuable tool for collaboration.

Given that the generative AI landscape is shifting underfoot, DCN checked in with six media organizations (Harvard Business Review, The Weather Company, Consumer Reports, The Washington Post, Skift, and The Boston Globe) to explore how they are developing AI policies and internal guidelines for AI usage, and what those policies look like.

Media companies can’t afford to wait and react somewhere down the road, after AI has already dramatically altered the media business, pointed out Maureen Hoch, editor of HBR.org and managing director of digital content at Harvard Business Review.

“This is not a wait and see moment,” Hoch said. “You really need to be thinking about how to help both editors and product managers understand the role that these tools play, where the guardrails are. And, in a bigger sense, understand how they could affect your audience and your product.”

If your organization has not already begun this process, the time to start is now. Media companies must develop guidelines, rules, regulations, principles, values, and intent around the use of generative AI. Yet, like the technology itself, these policies and guidelines will continue to evolve. Think of the process as creating a living document that can be updated as necessary.

Building an AI task force

Certainly, editorial leaders don’t have to create these policies alone. Developing an AI policy is best done with the inclusion of a range of stakeholders with different areas of expertise, from editorial, technical, business, and legal backgrounds. This team, task force, or working group should bring diverse perspectives, which are needed to create a robust and comprehensive set of guidelines or policies.

At the FIPP Congress earlier this month, Trusted Media Brands CEO Bonnie Kintzer said that the company has set up a task force to learn about AI and identify how to use AI tools. While they understand the risks, she said, they are embracing AI to grow their business.

The Harvard Business Review also established a cross-functional team to create an AI policy.

Hoch described the challenges, and the importance, of including a wide range of roles in the process.

“There’s always the risk of trying to come up with a policy with so many different stakeholders; that can make it hard,” Hoch said. “But I think everyone understands the urgency and the importance of having a policy around these tools.”

She offered advice about the makeup of an AI working group. “You want whoever your stakeholders are from IT security, legal, and from the business side. Make sure everybody has a chance to weigh in because this will serve you best for the long run.”

Robert Redmond, design principal, AI ethics focal, and VP of AI ad product at IBM, agrees that it’s important to include as many perspectives as possible.

“The creation of a working group is essential to the development of fair and balanced AI policies,” he said. “A group should not only consist of executive stakeholders but should consider roles and positions from across the company as well as sourcing a varied and diverse group of people. An approach grounded in diversity will help to contribute to the formulation of policies with fairness and healthy debate from multiple points of view.” 

The Washington Post’s approach to AI is measured and strategic. The Post has been working on AI behind the scenes for years, but only announced two AI teams in May. It is leveraging its resources to study AI and has created an AI Taskforce and an AI Hub to chart a path forward.

“As we navigate this space, we have the active involvement of our entire masthead every step of the way. Our executive editor Sally Buzbee has said that we are going to be bold in embracing innovation but that we’ll do so with the highest adherence to our standards,” explained Justin Bank, managing editor at The Washington Post.

The AI Taskforce comprises senior leadership from key departments including news, product, engineering, legal, finance, and others, and provides high-level oversight of The Post’s direction. “This group is tasked with establishing company priorities and leading strategic guidance around the governance of this tech and adoption possibilities,” Bank said.

The second group, The AI Hub, is a cross-departmental operational team that will spearhead experimentation, collaboration, and proof-of-concept AI initiatives, Bank said. “The AI Hub is dedicated to active exploration and experimentation with adoptions. This group is also establishing the space for those across [the company] with an interest in AI to contribute ideas.”

A living document

Given the complex and rapidly expanding generative AI landscape, an AI policy should be flexible and should continue to evolve over time.

HBR describes its policy as a living document, which it continues to update, collect comments on, and communicate to the whole organization. Every member of staff has access to the AI policy information.

The Boston Globe is in the process of putting together what it believes is a fully holistic AI policy. However, it needs to be flexible enough to grow with the company, explained Matthew Karolian, general manager of Boston.com and platform partnerships, who is co-leading AI strategy with Michelle Miccone, vice president of innovation and strategic initiatives. 

“The ultimate aim is to have a policy that is comprehensive enough to be all-encompassing but also legitimately useful in day-to-day work. It needs to be understandable and actionable, clear to the folks internally, and then maybe offer externally focused policies so that our readers and users can understand how these technologies are used,” he said. 

The Boston Globe has been testing the use of AI internally, which is informing its policy creation. The testing program allows staff to opt in and provide feedback on a daily basis.

“We’re not launching these products into the wild and to our users until we’ve really gotten a really clear understanding of all of the different elements to these projects based on an internal audience,” Karolian said. “We don’t want to take any shortcuts. In this way, it’s impacting how products are built, it’s impacting how products are positioned, it’s impacting policy.”

As a longtime user of AI, The Weather Company (and its parent company IBM) has had AI policies, guidelines, and guardrails in place, with ongoing development, for the better part of four decades, explained Rachel Chukura, head of consumer business and subscriptions at The Weather Company.

“A policy in a rapidly evolving area like AI, which has been evolving at speed for a long time, will need to be under constant review as it needs to keep up with both the advancement of the technology and the future potential of the technology,” said Chukura.

Policy structure and statement

Media companies’ AI policies generally have similar components: introductions, policy statements, and sections on scope, applicability, and accountability. While the specific structure varies by organization, AI policies generally incorporate core principles (what companies believe, at a fundamental level, about AI) and other sections covering rules or beliefs around transparency, accuracy, accountability, and safety or security.

Below, we’ve outlined some of the most common elements or components we’ve seen included in AI policies thus far. 

Transparency 

One of the common elements in AI policies thus far is a principle or section on transparency. Transparency is key for media companies using AI, because it builds trust between companies and their audiences. 

In practice, this can look like responsible disclosure: openly and transparently sharing information about the implementation, capabilities, and limitations of AI.

“Exactly what responsible disclosure means, I think there are going to be many different interpretations throughout the industry,” said Glenn Derene, senior director of content development at Consumer Reports. “I hope we’re bending over backwards trying to not only say when we are using generative AI, but literally helping the reader understand exactly how we’re using it. We want readers to always know who and how something was produced.”

Media companies need to be explicit in their transparency. And they should put the disclosure in a place where it is recognizable and understandable, rather than leave it vague or as an easily overlooked design element.

“It’s important to explain how you used generative AI to do something and put it where the reader can see it so that you’re not hiding anything, but you also don’t want to make it invert the subject that people came there for,” Derene said. “It all boils down to not being sneaky. We don’t want to be sneaky with our readers.”

“One of our rules is that when we use AI-generated content, it should always be disclosed,” said Jason Clampet, chief product officer at Skift. “We should never use AI tools to create something that we pass off as non-AI.”

For example, Skift recently posted an article about Lionel Messi. Then, they used their Ask Skift chatbot to write copy describing past Skift coverage of influencer marketing. Ask Skift is an AI chatbot built on top of OpenAI’s GPT-3.5 and trained on 11 years of Skift content. It answers questions related to the travel industry.

An excerpt from a sample of “Ask Skift” content.

“We presented it so it looks like it is in the chatbot itself,” Clampet said. “We didn’t make it look like a news story. And so it’s clear to users, this is from Ask Skift. The benefit for that is its additional content, but it also is teaching people how to use our AI tools, and so, in a way it’s both good content and good marketing and it doesn’t hide the fact that we’re using it for this.”
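Skift has not published the internals of Ask Skift, and a chatbot built on GPT-3.5 and grounded in more than a decade of archive content could be implemented in several ways. The sketch below is a hypothetical illustration of one common pattern, retrieval-style generation over a publisher archive with an explicit AI disclosure attached to every answer; the search_archive() helper is a placeholder invented for this example, and the calls assume the OpenAI Python client (openai 1.0 or later), not Skift’s actual code.

# Hypothetical sketch only: not Skift's implementation. Assumes a chatbot that answers
# from a publisher's own archive and labels every answer as AI-generated.
# Requires the OpenAI Python client (openai>=1.0) and OPENAI_API_KEY in the environment.

from openai import OpenAI

client = OpenAI()


def search_archive(question: str) -> list[str]:
    # Placeholder: a real system might query a search or vector index over the archive.
    return ["(relevant excerpts from past coverage would be retrieved here)"]


def answer_with_disclosure(question: str) -> str:
    excerpts = "\n\n".join(search_archive(question))
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": (
                "Answer questions about the travel industry using only the archive "
                "excerpts provided. If the excerpts do not contain the answer, say so.")},
            {"role": "user", "content": f"Archive excerpts:\n{excerpts}\n\nQuestion: {question}"},
        ],
    )
    answer = response.choices[0].message.content
    # Per the transparency principle discussed above, label the output as AI-generated.
    return f"{answer}\n\n[AI-generated answer; see original coverage for details.]"


if __name__ == "__main__":
    print(answer_with_disclosure("What has the publication reported about influencer marketing?"))

Attaching the disclosure to the returned text mirrors the presentation choice Clampet describes: the answer is framed as coming from the chatbot rather than as a news story.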

Karolian of The Boston Globe believes that, as an industry, “it is likely that we’ll eventually settle into a set of fairly universal disclosure language and/or UX signals so that users can very clearly understand at a glance when they encounter generative AI content.”

Being transparent allows audiences to make informed decisions about the content they consume. Some companies’ AI policies have even suggested they would provide audiences with tools to assess trustworthiness of content. 

The ethics of AI

The Weather Company has three principles for working with AI: that the purpose of AI is to augment human intelligence, that data and insights belong to their creator, and that technology must be transparent and explainable. 

At The Weather Company, both its AI Ethics charter and its approaches to privacy, ethics, and security by design provide guidelines and principles around which teams center tactics and strategies, explained Chukura.

“We also have internal processes across our product creation lifecycle that provide governance and support from experts within brand/business legal, privacy legal, security and compliance practices,” she said. “It has also become part of our DNA, our culture, that we leverage AI in ways that are ethical, explainable, accurate and verifiable because our consumers and clients depend on the outputs.”

Accuracy and accountability

Media brands’ AI policies also focus on core journalistic principles of accuracy and accountability.

A human is always accountable for the work, Hoch said. “It might be a tool, but I don’t think it’s a replacement for the subject matter expertise that we and our authors bring,” she said. 

“From a principles point of view, we still believe that subject matter expertise of our authors is the most important part of our content. That’s not something that we expect is going to be generated by a chatbot or generated by a tool,” Hoch said. “That’s a really important part of what our audience trusts us to do and what we trust our authors to do.”

“We’re not doing anything that doesn’t have a human in the loop, because until we can start guaranteeing a level of accuracy that is consistent with our brand, we don’t want to put out a tool that is going to give people bad information,” Derene said. “That’s damaging to us as an institution.” 

AI does make mistakes; this is one of the points in Skift’s AI policy, Clampet said.

“Rafat (Ali) and I call it kind of an unreliable freelancer, where you will just want to double check the work all the time,” Clampet explained. 

For The Weather Company, it is imperative that the data that influences its content, and the data it delivers, are trustworthy, Chukura explained. Of course, this includes the data that generative AI uses to produce any information, as well as what is ultimately presented to the audience.

“This means that the sources for the systems, specifically large language models and generative frameworks, must be trained on permissioned, licensed, and trusted data sets or through means that are auditable at scale,” she said. “As we explore the generative landscape and experiment with ways to bring the latest AI models to bear, we do so with caution, ensuring our key mission to our partners and consumers is upheld.”

For example, The Weather Company uses AI-based technologies to generate text samples, but they are trained on confined frameworks that are auditable and utilize content that has been internally sourced, or that in some cases may be provided through a licensed partner, according to Chukura.

Images

Governmental policymakers can’t move fast enough when it comes to AI-generated imagery. The European Union is leading the way with the upcoming AI Act, which would comprehensively regulate artificial intelligence. In the U.S., meanwhile, any images produced by AI image generators cannot be copyrighted.

Media companies have concerns about the copyright status of outputs of text-to-image AI models, especially when AI image generators – like Stable Diffusion, DALL-E, and Midjourney – are trained on images scraped from the web. Getty Images even went so far as to ban AI-generated content last year.

AI image and video tools aren’t to be used for news photography, according to The Globe and Mail’s guidelines on AI. And, if an image is produced by Midjourney or DALL-E, “it should be credited as ‘AI-generated image’ or ‘AI-generated illustration.’”

They’re not the only ones to include guidelines about images and video in their AI policies.

“HBR in general feels really strongly that we respect the work of artists and photographers and the human element of what they bring,” Hoch said. “There may be moments where the story calls for an AI-generated image, in which case we would disclose that to the audience. But as a general rule right now, our design teams aren’t using tools to generate AI images.”

Even before creating its AI policy, Clampet explained, Skift staff had general rules about not using AI for news images. Those rules went into the AI policy. However, he allowed that staff could use AI to illustrate something, like a report on AI sentiment, and label it as such.

Data, safety, and legal

As exciting as they are in terms of potential, AI tools raise many questions: Are there any restrictions on data collection for these companies? If AI-generated works are not covered by copyright, who owns them? How much generative-AI involvement in the creation of materials is too much before it becomes ineligible for copyright protection? 

It can be difficult to know where to start when issues like copyright law are still in flux, Derene said.

“Some of these things are really, really difficult to sort of find your way through,” he said. “You do get into questions about who’s the author of something, when you had a writer use AI to do things like create a prompt. In some respects, when is AI a tool and when is it the author?”

The U.S. Copyright Office won’t register works produced by a machine. However, if the work contains sufficient human authorship, it will. The courts are sorting out how exactly the law should be applied. And cases brought before them include questions about intellectual property infringement. Getty, for example, filed a lawsuit against Stability AI, accusing it of misusing more than 12 million Getty photos to train its generative AI.

“If a business user is aware that training data might include unlicensed works or that an AI can generate unauthorized derivative works not covered by fair use, a business could be on the hook for willful infringement,” wrote Gil Appel, Juliana Neelbauer, and David A. Schweidel for Harvard Business Review.

There are also security risks around using generative AI, from data privacy to malware and enabling misinformation.

Media companies will want to work with their IT and legal teams to identify risks and develop policy measures and other actions to mitigate them. For example, ChatGPT may have a privacy policy; however, it collects IP addresses, settings and browser types, and users’ browsing activities.

IBM’s The Weather Company takes a global approach to trustworthy AI, which cascades down to every aspect of their business, Chukura explained. “We consider these foundational aspects of AI utilization within everything from our consumer products, advertising services, data products and approach to editorial content,” she said. 

“Our process includes AI ethics, and transparency and privacy checkpoints across the features we develop. We consider the implications of all AI solutions and ensure that the applications and productization of AI successfully and safely uphold our core mission of informing decisions and helping keep humans safe.”

Opportunities abound

Another component to include in an AI policy is a section on opportunities. Figuring out an AI policy is a big task, and simply focusing on rules, regulations, or proscriptions may inadvertently set limits and leaves no room for a balanced, comprehensive framework that encourages experimentation and innovation.

“It’s good to include both opportunities and risks,” Hoch explained. “I think if you’re just focusing on don’t do this, don’t do that, it builds up a lot of fear and apprehension.” 

“I’ve gotten some feedback on this from our internal teams. The fact that we both highlighted risks and opportunities was helpful to them in having a point of view of this is something that’s going to continue to change. It’s going to continue to change our business, and we need to figure out how it’s going to work best for us,” she said.

AI can be a helpful tool, Clampet stressed. For example, it can synthesize blocks of data, suggest headlines, enhance brainstorming exercises, and speed up production.  

“Basically, we kind of think of it as a bit of a steroid,” he said. “There are really helpful things that AI can do, and for organizations to really take advantage of them as opposed to kind of being scared of them. But also at the same time, we need to recognize the things that it can’t do, like replace decent reporting, which is hard.”

Contributor guidelines

While creating rules and guidelines for staff is critical, it is also important to consider updating contributor guidelines with regard to AI use.

Harvard Business Review, for example, updated its external contributor guidelines. It was important to clearly state that, though HBR understands its contributors may want to use generative AI tools, it would like to be informed whether and how authors are using them. The guidelines stress that human authors are responsible for the accuracy and integrity of the work, Hoch explained.

The big picture

The industry is at the starting line of another marathon. Like the advent of the internet or social media, generative AI is a powerful tool that needs to be understood and used effectively and responsibly by media companies.

“I think you want to make sure that wherever you use it, it adds value for the reader,” Derene said. “You should always be thinking of how to add value for your readers, for your customers and not take away from the value of your brand and also use AI in a way that leverages your brand value,” he said. 

Big questions loom large: What happens if those with questionable – or even the best – intentions distribute AI-generated content and don’t disclose? What sort of ripple effect does that have on the rest of the industry? What happens to audience trust? At the same time, if low-quality or false content is even easier to produce and proliferate, how will this impact publishers’ engagement and revenue?

Clearly, the media industry needs to think and act responsibly about generative AI and its applications. We need to ensure that generative AI is a tool that is used to support the mission of media organizations, to augment and enhance. At the same time, the industry must strive not to be distracted or overwhelmed by the rapidity at which the tool changes. This is where policy and guidelines help.

“It’s incredibly important that as an industry, we are able to come to some level of standardization around AI policy and disclosure, so that it’s not something that a user has to think about every time they visit a new site,” Karolian said.

And, as ever with major technological shifts, there is an opportunity at the starting gate for those who get out in front.

As generative AI tools continue to change and evolve, having a policy or rules governing the use of AI gives media teams guidelines and guardrails. Ideally, this will be a policy that is both clear and flexible, and that fosters transparency, accountability, and accuracy. Setting guidelines for your staff also safeguards against risks, potential bias, and misuse of generative AI, while demonstrating a commitment to embrace emerging technology in a way that is responsible and accountable to audiences.

Editor's note: The opener image was created using Tome, an AI art generation tool.

MEDIA AI GUIDELINES: A STARTER TEMPLATE

Based upon conversations with the media leaders interviewed for this article, here’s a rough template to get you started as you develop your own AI policy document. 
