Anyone who watched the doctored video of Congresswoman Nancy Pelosi in which she appears drunk, a clip that quickly went viral a few months ago and that Facebook refused to delete or flag as bogus, can appreciate how suddenly the world is changing. That unsettling event marked a turning point for many, one that signaled the dawn of the deepfake era.
While the Pelosi video was only mildly altered, it drew attention to the rising problem of deepfake videos. For the uninitiated, deepfakes use artificial intelligence software to manipulate video and audio so convincingly that it becomes increasingly difficult to vet content and determine its authenticity. While deepfake videos and the means to create them have been around for a while, particularly in the form of counterfeit celebrity pornography and revenge porn, they've increasingly captured the public's attention.
And that's for good reason: People are worried about the damage deepfakes can do and the inability of the news media and the government to curb this disturbing tech trend. With a national election looming, tensions between nations simmering, and bad actors able to sabotage these delicate circumstances, the stakes couldn't be higher.
Seeing is not always believing
“We have seen deepfake technology improve dramatically over the past year, and it’s getting harder to identify AI-synthesized media,” said Till Daldrup, training and outreach programming coordinator for The Wall Street Journal. “We’re observing two unsettling trends. First, deepfake technology is being democratized, meaning that it’s getting easier for forgers to create convincing fakes. Second, several startups and internet users are commercializing deepfakes and offering to produce AI-synthesized fakes upon request for money.”
While he can't point to a major national news organization that has been hoodwinked by a deepfake to date, "we have already seen journalists from other news organizations become targets of disinformation campaigns, including deepfakes that are trying to humiliate, intimidate, and discredit them," said Daldrup. He cited as an example the notorious deepfake targeting Indian journalist Rana Ayyub.
Mike Grandinetti, global professor of Innovation and Entrepreneurship at Hult International Business School, said the number of deepfakes has risen exponentially over the past three years as access to new tools has grown.
“Today, powerful new free and low-cost apps like FaceApp and Adobe’s VoCo enable anyone to manipulate facial images and voices, respectively,” said Grandinetti. “Think of this as fake news on steroids. As a result, deepfakes are becoming increasingly problematic for the news media.”
Deborah Goldring, associate professor of marketing at Stetson University, suggested that it's easy for the public to be fooled by this fraudulent content. "According to a BuzzFeed survey, 75% of Americans who were familiar with a fake news headline perceived the story as accurate," she said. "And because social media makes it so easy, consumers are more likely than ever to share news with their network, which further spreads inaccurate information."
The fourth estate fights back
What can the news media do to deter the decepticons, call out the counterfeits, and prevent further Pelosi-like phonies from hogging the headlines? Plenty, say the pros.
“Increasingly, it felt like the news cycle would become dominated by a clip of video that was false in some way. Our reporters were spending more time telling stories about manipulated video and the ways it was used to attempt to sway political opinion,” said Nadine Ajaka, senior producer of video platforms for The Washington Post. “Politicians, advocacy groups, and everyday users are sharing manipulated video, and there is a sense that the public can’t trust what they see. It feels a bit like the wild west.”
Consequently, Ajaka, the Post's fact checker Glenn Kessler, and video verification editor Elyse Samuels helped create a classification system for fact-checking online video, along with a guide designed to assist consumers and journalists alike in navigating the treacherous trail paved by deepfake videos. The guide defines key terms and organizes the problem into three categories: missing context, deceptive editing, and malicious transformation.
"Fact-checking video is different from fact-checking a statement. You're not just parsing words but many factors (images and sound) across the passage of time. With this initiative, we can give journalists a shared vocabulary they can apply to the different ways video can be manipulated, so their readers can be more informed about what they're viewing," Ajaka added.
The Wall Street Journal, meanwhile, has formed a committee of 21 newsroom members across different departments that helps reporters identify fake content online. "They know the tell-tale signs of AI-synthesized media and are able to spot red flags. Each of them is on call to answer reporters' queries about whether a piece of content has been manipulated," said Daldrup.
"We want to be proactive and are raising awareness for this issue inside and outside the newsroom. We are also collaborating with universities and researchers and are constantly experimenting with new detection tools and techniques."
Other weapons in the war on deepfakes
Daldrup said there are several promising detection methods emerging, most of which have been developed by researchers at universities like UC Berkeley and SUNY; the Defense Advanced Research Projects Agency (DARPA) has also been trying to perfect machine-learning algorithms that can spot deepfakes.
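To give a flavor of how such detectors work: one early published approach from SUNY Albany researchers flagged deepfakes by their unnaturally low eye-blink rates, since early generators were trained largely on photos of open eyes. The Python sketch below is a toy illustration of that heuristic only, not any lab's actual code; the per-frame eye_openness values and thresholds are hypothetical stand-ins for the output of a facial-landmark tracker.

```python
BLINK_THRESHOLD = 0.2       # eye-openness below this counts as "eyes closed"
MIN_BLINKS_PER_MINUTE = 5   # humans blink roughly 15-20 times/min; be lenient

def count_blinks(eye_openness):
    """Count closed-then-open transitions in a per-frame eye-openness series."""
    blinks, closed = 0, False
    for value in eye_openness:
        if value < BLINK_THRESHOLD and not closed:
            closed = True        # eye just closed
        elif value >= BLINK_THRESHOLD and closed:
            closed = False
            blinks += 1          # eye reopened: one complete blink
    return blinks

def looks_suspicious(eye_openness, fps=30):
    """Flag clips whose blink rate falls well below a human baseline."""
    minutes = len(eye_openness) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(eye_openness) / minutes < MIN_BLINKS_PER_MINUTE

# Example: a 10-second clip at 30 fps in which the eyes never close.
frames = [0.35] * 300
print(looks_suspicious(frames))  # True -> worth a closer look, not proof
```

As the example's comment notes, a heuristic like this only raises a flag for human review; it is exactly the kind of signal forgers learn to defeat, which is why detection remains an arms race.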
"A potentially significant component of any meaningfully effective solution would likely include blockchain technology," said Grandinetti. "Truepic, a photo and video verification system, uses blockchain to create and store digital signatures for authentically shot photos and videos as they are being recorded, making them easier to verify later."
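To make the capture-time signing idea concrete, here is a minimal Python sketch of the underlying pattern: hash the media into a chain as it is recorded, anchor the final digest somewhere append-only, and recompute it later to detect tampering. This illustrates the general technique, not Truepic's actual implementation; the chunk data is made up.

```python
import hashlib

def chain_digest(chunks):
    """Fold media chunks into one tamper-evident digest: each step hashes
    the previous digest plus the next chunk, so editing, removing, or
    reordering any chunk changes the final value."""
    digest = b"\x00" * 32  # fixed starting value
    for chunk in chunks:
        digest = hashlib.sha256(digest + chunk).digest()
    return digest.hex()

# At record time: compute the digest and anchor it somewhere append-only
# (a blockchain, per Grandinetti's description; a notarized log also works).
original = [b"frame-data-0", b"frame-data-1", b"frame-data-2"]
anchored = chain_digest(original)

# At verification time: recompute from the file and compare.
tampered = [b"frame-data-0", b"DOCTORED!!!!", b"frame-data-2"]
print(chain_digest(original) == anchored)  # True  -> matches capture
print(chain_digest(tampered) == anchored)  # False -> altered after capture
```

The design point is that the signature must be created at the moment of capture; a hash computed after the fact can only prove a file hasn't changed since then, not that it was authentic to begin with.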
Additionally, partnering with media forensics experts at universities and other research institutions that have access to these tools can be a smart strategy for newsrooms.
However, "this is an ongoing cat-and-mouse game. Every time researchers come up with a new detection method, forgers will alter their techniques to evade being caught," Daldrup said.
That's why editors and reporters will need to work harder and rely on proven vetting methods. "Journalists must collaborate to better verify sources through third parties," recommended Grandinetti. "And they need to spread the message to news consumers to be much more on guard, with healthy skepticism, when a video strains credibility."
The right approach for a news organization depends on its size and mission, Daldrup suggested. If you're a journalist, "in general, it is good practice to think twice before you share video of unknown provenance on social. It is also helpful to monitor research on deepfakes in order to stay on top of the latest developments in the field."
Lastly, remember to "share best practices with the rest of the news industry," added Daldrup. It is essential that the entire media industry keep pace with issues that threaten the quality of news and the public's trust.