A trusted reputation is crucial for publishers. And, despite the clickbait appeal of fake news, most people value accuracy and truthful content. Research has confirmed time and again that people want to engage more with articles shared by trusted journalists and media outlets, especially on social media.
89% of Americans believe it is “very important” for a news outlet to be accurate, and 86% say it is “very important” that outlets correct their mistakes (Knight Foundation, 2018).
85% say that accuracy is a critical reason why they trust a news source (The Media Insight Project, 2016).
63% of Americans say they have stopped getting news from an outlet in response to fake news.
However, fake news and low-quality content are still prevalent, amplified, and generating tremendous engagement. Misinformation is not limited by reality and often feeds natural individual biases.
Fighting fake news
The question remains: where should publishers place their efforts in the fight against fake news? New research from Alberto Acerbi, Sacha Altay, and Hugo Mercier, Fighting misinformation or fighting for information?, explores whether publishers should fight the spread of misinformation or work to bolster trust in reliable sources. Interestingly, the researchers note that consumers are just as likely to reject a piece of accurate news reporting as they are to accept articles and sound bites of fake news.
The authors developed a model that estimates the effectiveness of increasing the acceptance of reliable news compared to decreasing the acceptance of misinformation. The model includes two main parameters: the share of misinformation compared to the share of reliable information and the tendency for individuals to accept each type of information.
Reliable information refers to news shared by sources that, most of the time, report news accurately.
Misinformation refers to news shared by sources that regularly share fake and deceptive information.
The model provides a baseline view of the informational environment and offers an approximate index of its quality. Using these broad definitions, the model’s default design sets misinformation at 5% of people’s news diets, with the remaining 95% consisting of information from reliable sources. Importantly, the model captures the main elements of an informational environment: the incidence of reliable information compared to misinformation and the tendency to accept each type of information.
The model computes a global information score. The calculation represents the share of accepted pieces of reliable information minus the share of accepted pieces of misinformation.
Simulated exposure
In the simulations, a small share of individuals were exposed to both reliable news and misinformation, while a larger share were exposed only to reliable news. The researchers then tested interventions that reduce the acceptance rate of misinformation against interventions that increase the acceptance of reliable information.
The researchers analyzed the different intervention rates. The basic simulation illustrates that, even with a 10% incidence of misinformation, improving the acceptance of reliable information by three percentage points is more effective than bringing acceptance of misinformation to zero (the sketch below makes this comparison concrete). Therefore, interventions that increase the acceptance of reliable information have a greater effect than interventions targeting misinformation.
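Here is a minimal Python sketch of the global information score described above; the 60% and 25% baseline acceptance rates are our own illustrative assumptions, not values from the paper.

```python
# Global information score: share of accepted reliable news minus the
# share of accepted misinformation (illustrative baselines only).

def information_score(p_mis, accept_reliable, accept_mis):
    """(1 - p_mis) * accept_reliable - p_mis * accept_mis."""
    return (1 - p_mis) * accept_reliable - p_mis * accept_mis

p_mis = 0.10           # 10% of the news diet is misinformation
base_reliable = 0.60   # assumed baseline acceptance of reliable news
base_mis = 0.25        # assumed baseline acceptance of misinformation

baseline = information_score(p_mis, base_reliable, base_mis)

# Intervention A: raise acceptance of reliable news by 3 percentage points.
boost_reliable = information_score(p_mis, base_reliable + 0.03, base_mis)

# Intervention B: drive acceptance of misinformation to zero.
zero_mis = information_score(p_mis, base_reliable, 0.0)

print(f"baseline:       {baseline:.3f}")        # 0.515
print(f"+3pts reliable: {boost_reliable:.3f}")  # 0.542 (+0.027)
print(f"zero misinfo:   {zero_mis:.3f}")        # 0.540 (+0.025)
```

Under these assumptions, the three-point boost to reliable news (+0.027) edges out eliminating misinformation acceptance entirely (+0.025); the comparison holds whenever baseline acceptance of misinformation is below 27% (0.9 × 0.03 ÷ 0.1).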
Acerbi, Altay, and Mercier demonstrate the importance of understanding the impact of different interventions in the informational landscape. Publishers that place their efforts on increasing the acceptance of reliable information will have a greater effect in the fight against fake news.
These findings do not dispute the many efforts to fight misinformation. However, given publishers’ limited resources, more effort should be dedicated to increasing trust in reliable sources of information than to fighting misinformation.
Trust is a crucial driver of consumer engagement, especially in news reporting. Ed Williams, CEO of Edelman UK and Ireland, notes that trust closely correlates with our sense of happiness. “The amount of trust that people are able to place in the institutions that govern or inform their lives accords closely with their sense of happiness,” he elaborates. “So, in a broad, societal sense, it matters whether or not our media is trusted.”
Unfortunately, recent studies show trust in the news media continues to decline. Edelman’s Trust Barometer 2021 shows trust in all news sources at record lows: social media (35%), owned media (41%), and traditional media (53%). Gallup research finds trust in the news media at 36%, down four percentage points from 2020.
In a new report, The Reuters Institute’s lead researchers take a deeper dive into the questions of trust in the news media. Specifically, Reuters explores how the news media can build trust with its audience. The report showcases discussions with 54 individuals from a mix of small, local, and niche online publications to large, industry-leading brands in the US, UK, Brazil, and India.
Concerns in the newsroom
The report details journalists’ frustration with their newsrooms’ inabilities to build trust with the public. Smaller media organizations also spoke about their concerns and their lack of control over the way audiences interacted with their brands.
They also view Facebook, Twitter, Google, WhatsApp, and YouTube negatively and believe these platforms cause increasing distrust in the news media.
Breadth and depth face fierce competition
Journalists believe the quality and depth of their reporting are the main reasons audiences trust their news organization. However, many news media companies look to social platforms for scale, which often means attention-grabbing headlines that are disconnected from the underlying content.
Rohan Venkat, Deputy Editor at Scroll (India), responds, “It’s something that we find quite hard, and we have to keep innovating in trying to convey that the format, the medium, is more complex than just what the headline contains.” Many journalists are less interested in chasing after reach and scale on platforms and want to build a strong relationship with their audience.
Finding your audience
Maintaining a strong connection with loyal readers is a priority for most newsrooms. However, publishers often disregard harder-to-reach and wary audiences. It doesn’t help that there are few incentives to build trust with an uninterested – and sometimes antagonistic – audience.
Many journalists feel it may be easier for publishers to change the minds of readers resembling their existing audience. However, if those most critical of the news media and its journalists are left untouched, they will continue to spread distrust in the news media.
The research participants identified strategies to help build trust with audiences. The strategies include:
Maintaining a focus on accuracy. Differentiating fact from opinion is critical in building and keeping audience trust. News publishers should use fact-checking as a key differentiator.
Using editorial initiatives to cater to audiences who are underserved, overlooked, or criticized by the press.
Ensuring transparency about reporting practices, editorial stance, and journalists’ backgrounds. Disclosing the identities of those producing the news can also help.
Engaging in partnerships with other news or civic organizations.
Offering practical advice centered around consumer products, recipes, and other information relevant to the audience’s daily life.
Journalists and news publishers are finding ways to stand out and engage audiences. Creating, experimenting with, and analyzing content in different formats, such as audio, visual, and text, can offer insight into what drives trust and distrust. Notably, publishers need to incentivize the content most likely to build trust with audiences.
Conspiracy theories have a long history in society, and social platforms offer them a fresh new breeding ground. Yannis Theocharis, Ana Cardenal, and Soyeon Jin examine how effectively different social platforms propagate fake news and conspiracy theories. Their study, Does the platform matter?, examines the relationship between social media platforms and the spread of Covid-19 conspiracy theories. The analysis focuses on Facebook Messenger, WhatsApp, YouTube, Twitter, and Facebook in a two-wave study across 17 countries.
Symmetrical or asymmetrical followership
The research relates the spread of conspiracy theories to the structure of each platform’s communication environment. In other words, it looks at the structural features of followership within each platform. The authors classify a platform’s design as either symmetrical or asymmetrical followership.
Symmetrical followership environments imply that information is shared with friends and not with strangers. The primary use of this type of platform, like Facebook, is socialization and entertainment; people predominantly follow people they already know. People experience this symmetrical way of connecting as safe and comfortable — they know their connections well. Facebook, Messenger, and WhatsApp connections are socially homogeneous.
An asymmetric follower structure runs on weaker social connections: it is a follower-based design where most exchanges between users occur publicly. Twitter is an example of an asymmetric social platform. It is more heterogeneous, mixing political views and levels of knowledge.
Interestingly, YouTube’s follower structure is primarily tied to people’s interests and not based on close social connections. Like Twitter, YouTube has an asymmetrical follower structure. Recommendations, including algorithmic ones, create new connections. YouTube’s design also encourages content generators to build audiences and promote themselves. This asymmetrical follower design makes it easy for YouTubers to share content around fringe ideas with their followers.
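One way to make this distinction concrete is to model the two followership structures as graphs: symmetrical platforms behave like undirected friendship networks, while asymmetrical platforms are directed follower networks. Below is a minimal sketch using the networkx library on toy data of our own invention, not anything from the study.

```python
import networkx as nx

# Symmetrical followership (e.g., Facebook friends): an undirected graph
# in which every connection is mutual by construction.
sym = nx.Graph()
sym.add_edges_from([("alice", "bob"), ("bob", "carol"), ("alice", "carol")])

# Asymmetrical followership (e.g., Twitter, YouTube): a directed graph in
# which following a creator does not imply being followed back.
asym = nx.DiGraph()
asym.add_edges_from([
    ("alice", "creator"), ("bob", "creator"),
    ("carol", "creator"), ("creator", "alice"),  # a single follow-back
])

# Reciprocity is the share of directed edges that are reciprocated.
# Friendship graphs are fully mutual; follower graphs can be far lower.
print(nx.reciprocity(asym))  # 0.5 here: 2 of 4 directed edges are mutual
```

In the study’s terms, the high-reciprocity graph corresponds to the socially homogeneous, “safe” environments of Facebook and WhatsApp, while the low-reciprocity graph captures the audience-building dynamics of Twitter and YouTube.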
Research hypotheses and analysis
The researchers developed four hypotheses to test:
There is a negative relationship between using Twitter for news and holding conspiracy beliefs about Covid-19.
There is a positive relationship between using Facebook for news and holding conspiracy beliefs about Covid-19.
There is a positive relationship between using YouTube for news and holding conspiracy beliefs about Covid-19.
There is a positive relationship between using messenger services (Facebook Messenger and WhatsApp) for news and holding conspiracy beliefs about Covid-19.
The researchers conducted a two-wave panel survey. Both waves measured the core independent variables (use of social media platforms and messenger services) and controls. After the outbreak of Covid, wave two measured the dependent variables — six Covid-19 statements, three of which related to conspiracy theories about the origin of Covid-19.
The combined hypothesis holds that, depending on their affordances, platforms influence conspiracy theory beliefs (CTB) differently. More precisely, Twitter will have a negative effect on CTB, while Facebook, YouTube, and messenger services (WhatsApp and Messenger) will all have positive effects on conspiracy theory beliefs.
The analysis shows the relationship between social media platforms and messenger services and conspiracy theories. The platforms’ interactive and networking features supply active environments for spreading conspiracy theories, and some social platforms offer more effective settings than others. The findings show that using Twitter for news negatively affects conspiracy theory beliefs, reducing them by 3% on the conspiracy scale. In contrast, Facebook, YouTube, Messenger, and WhatsApp increase conspiracy theory beliefs by between 3% and 5%, as the sketch below illustrates.
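To show the shape of such an analysis, here is a hedged sketch of an ordinary least squares regression of a conspiracy-belief score on platform use. The data are synthetic, seeded with effect sizes in the reported range, and the variable names and single control are our own, not the study’s.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000

# Synthetic respondents: 0/1 indicators for using each platform for news,
# plus age as a stand-in for the study's battery of controls.
df = pd.DataFrame({
    "twitter":   rng.integers(0, 2, n),
    "facebook":  rng.integers(0, 2, n),
    "youtube":   rng.integers(0, 2, n),
    "messenger": rng.integers(0, 2, n),
    "age":       rng.integers(18, 80, n),
})

# Conspiracy-belief score on a 0-1 scale, seeded with the reported
# directions: Twitter negative, the other platforms positive.
df["ctb"] = (0.50
             - 0.03 * df.twitter
             + 0.04 * df.facebook
             + 0.05 * df.youtube
             + 0.03 * df.messenger
             + rng.normal(0, 0.10, n)).clip(0, 1)

model = smf.ols("ctb ~ twitter + facebook + youtube + messenger + age",
                data=df).fit()
print(model.params.round(3))  # coefficients should recover the seeded effects
```

A faithful replication would, of course, exploit the panel structure — wave-one measures of platform use predicting wave-two beliefs — along with the study’s full set of controls.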
The takeaway
Social platforms offer different architectural features and consumer relationship designs: symmetrical and asymmetrical. Follower designs affect how consumers interact with conspiracy theories and intensify their beliefs. Understanding the spread of conspiracy theories and how it differs across social media platforms and messenger services offers insight into strategies to combat this situation.
Journalists strive to present fair and balanced reporting. If they do their job well, they become respected authorities on their subject matter. In today’s environment of fake news and misinformation, this is both extremely challenging and important. So, how can they best do their jobs and break through the clutter? How can they rise above noisy false rhetoric and rebuild consumer trust in the media?
To explore these critical issues, researchers Hong Tien Vu and Magdalena Saldaña conducted a nationally representative survey among U.S. journalists. The study examines how journalists are evolving their practices given the increased volume of misinformation.
Evolving practices:
Double-checking sources more often.
Limiting the use of anonymity and identifying the sources of information.
Including verification of information (e.g., data, raw footage) and incorporating it into their content.
Importantly, journalists today offer increased transparency into their work. They want to connect with audiences on a new level of accountability. Harvard Kennedy School professor Thomas Patterson offers similar insights in his book, How America Lost Its Mind. As Patterson states, “More harmful to our democracy is a cousin of conspiracy theories — misinformation. It also involves fanciful ideas about the actual state of the world, but it is far more widespread and a far greater threat.” Patterson believes journalists are gatekeepers of information. They have a responsibility to weed out false facts and call out false reports or unreliable sources.
The research from Vu and Saldaña demonstrates how journalists are trying to reengage with audiences to build trust by offering objective and accurate reporting. These efforts help to counterbalance the fake news and misinformation amplified on social media.
Journalists recognize fake news as a direct attack on our democracy. Their transparent practices are a helpful way to curtail misinformation. Further, reporters with a strong base of online followers feel they have a responsibility to provide accurate information in their social media feeds. They also feel it is their duty to point out fake news and misinformation on social platforms.
This study offers an important understanding of journalists’ renewed focus on accuracy in the transmission of information. It illuminates opportunities for self-disclosure and social exchanges between the reporter and the consumer — be it a reader, listener, or viewer. This open exchange is vital in helping readers recognize quality journalism and premium publishers.
Social media continues to grapple with the spread of misinformation on their platforms. And consumers know this. Regardless, they continue to use social media as a primary news source. According to the most recent Pew Research Center survey, more than half of U.S. adults (53%) report that they get their news from social media “often” or “sometimes.” The survey was taken by nearly 10,000 U.S. adults.
News resource
Facebook ranks highest among 11 social platforms as the news source consumers use regularly (36%). YouTube ranks second at 24%, and Twitter ranks third with 15% of adults regularly getting their news there. Fewer consumers say they get their news regularly on Instagram (11%), Reddit (6%), Snapchat (4%), LinkedIn (4%), TikTok (3%), WhatsApp (3%), Tumblr (1%), and Twitch (1%).
Accuracy
Interestingly, despite the fact that they often find their news on social media, consumers question the accuracy of the news they get on these platforms. Approximately six in 10 consumers (59%) say that they expect the news on social platforms to be largely inaccurate. Unfortunately, the data shows little change over the last three years. Even after two congressional hearings, misinformation about vaccines, Covid-19, and the 2020 presidential election remains abundant on social media.
Social media does little to help consumers interpret the news. In fact, less than one-third (29%) of consumers believe the information they received on social platforms helps their understanding of the news. Further, 23% believe the news on social media leaves them more confused and 47% report that it doesn’t make much of a difference.
More women than men (63% vs. 35% and 60% vs. 35%, respectively) use social media to access their news. Reddit, however, has a distinctly different demographic: among its regular news consumers, two-thirds are men (67% vs. 29% women).
Combating misinformation
Consumers use social media as an easy and accessible path to news and information. However, this Pew study clearly shows consumers are aware of misinformation on social media. Increased awareness is a good thing and an important step toward exposing and defusing misinformation.
Social platforms continue to try to combat misinformation with fact-checkers and other programs. Twitter launched a new program, “Birdwatch,” which allows Twitter users to comment and provide context on tweets that they believe are misleading or false. Unfortunately, none of these programs are winning the fight against misinformation. A recent investigation of Facebook found 430 pages with 45 million followers monetizing misinformation with Facebook tools. Clearly, more needs to be done to stop the dissemination and monetization of misinformation on social platforms.
Deepfakes, manipulated videos synthesized by deep learning, are the newest tools in the misinformation arsenal. Easily accessible via open-source applications, they offer a cheap and efficient way to create deceptive videos of public figures. How powerful are deepfakes? New research finds that misinformation consumed in video format is no more effective than misinformation in textual headlines or audio recordings: the persuasiveness of deepfakes is comparable to that of these other media formats, not greater.
Test 1
The research used two tests to measure the effectiveness of deepfake messaging among 5,750 respondents. The first test was conducted in a social media feed environment, surrounded by regular social media content.
Respondents saw or heard either a deepfake video, a fake audio recording, an SNL-skit-style exaggeration, or a text headline. Deepfake stimuli featured Senator Elizabeth Warren in several scenarios. In them, Warren:
Calls Biden a pedophile
Calls Trump a pedophile
Revives an old controversy about identifying with indigenous people
Creates an unexpected controversy about LGBTQ lifestyle
Goes back on a political position that eliminating student loan debt for anyone is fair or realistic
In all, just under half (47%) of respondents believed the deepfake video was real. However, the deepfake scored no better or worse than the false audio or text messaging.
Further, the research delved into respondent characteristics (e.g., gender, income, political party, and more) to see if any predict susceptibility to deepfakes. The results showed no significant differences among deepfakes, false text, and false audio. However, selective acceptance of information based on previous beliefs may influence an individual’s response to deepfakes.
Test 2
The second test alerted respondents to look for misinformation. Participants were asked to identify whether a video was real or fake. In all, 55% of the videos were identified correctly. Interestingly, political orientation did have an impact here. Both Republicans and Democrats underestimated the authenticity of real videos when the content went against their party or candidate. They were much more likely to call a real video fake if it made their political leader or party look bad.
Seeing is not necessarily believing these days. Based on these findings, deepfakes do not facilitate the dissemination of misinformation more than false texts or audio content. However, like all misinformation, deepfakes are dangerous to democracy and media trust as a whole. The best way to combat misinformation and deepfakes is through education. Informed digital citizens are the best defense.
In an attempt to act responsibly, social platforms now flag content that is known to be false. However, flagging disputed content has some unintended consequences: false headlines that aren’t flagged are often thought to be true. In fact, according to new research, The Implied Truth Effect, conducted by Gordon Pennycook, Adam Bear, Evan T. Collins, and David G. Rand, false headlines that fail to get tagged are viewed as more accurate. Thus, the research appropriately questions whether the policy of using warning tags to fight misinformation is effective.
False news headlines with flagging
This research includes two studies. In the first, the control group was shown both true and false news headlines without any warning labels. The test group was shown both true and false headlines; the false news headlines included warning labels. Participants were asked how accurate the headlines were and if they would consider sharing the story on social media (such as Facebook or Twitter).
The first study confirms that warning labels decrease belief in items that are flagged (the Warning Effect) but increase belief in items that are untagged (the Implied Truth Effect). In other words, false headlines that were not flagged in the test group were rated as more accurate, by at least one-third, than the same headlines in the control group.
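A toy readout shows how the two effects fall out of such data. The acceptance shares below are invented for illustration and are not the study’s results.

```python
import pandas as pd

# Hypothetical shares of participants rating each headline type "accurate".
data = pd.DataFrame({
    "group":         ["control", "treatment",    "treatment"],
    "headline_type": ["false",   "false_tagged", "false_untagged"],
    "accepted":      [0.18,      0.12,           0.24],
})

def share(group, headline_type):
    row = data[(data.group == group) & (data.headline_type == headline_type)]
    return float(row.accepted.iloc[0])

# Warning Effect: tagged false headlines are believed less than in control.
warning_effect = share("treatment", "false_tagged") - share("control", "false")

# Implied Truth Effect: untagged false headlines are believed *more* once
# other headlines carry warnings (+6 points here, i.e. one-third higher).
implied_truth = share("treatment", "false_untagged") - share("control", "false")

print(f"Warning effect:       {warning_effect:+.2f}")  # -0.06
print(f"Implied truth effect: {implied_truth:+.2f}")   # +0.06
```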
True news headlines with flagging
The second test included a control group and two test groups. Participants were presented with news headlines and asked whether they would share them on social media. They were told that 75% of the headlines had been fact-checked by Snopes.com.
The control group was shown true and false news headlines without any labeling. The first test group was shown false news headlines, half with a “FALSE” stamp and the other half without any stamps. The second test group was shown false news headlines with a “FALSE” stamp and true news headlines with a “TRUE” stamp.
The findings show that participants in the first test group were less likely to consider sharing false headlines tagged with a warning compared to false headlines in the control group. Further, participants in the second test group were more likely to consider sharing true headlines tagged with a verification compared to true headlines in the control group.
This research identifies the consequence of attaching warnings to some inaccurate headlines but not all. It’s safe to assume that a large percentage of false headlines will continue to appear on social platforms and remain untagged. However, it is important to note that it may be even more valuable to tag true headlines.
Labeling truthful and verified headlines helps consumers identify what is true and suggests that everything outside this stamp is potentially false.
While this research was experimental in its design, it’s important to take next steps to explore the impact of full stories. Future work should investigate the impact of warnings on users’ likelihood of clicking through to read the full articles, and the impact on sharing after reading the article.
At the dawn of the new decade of 2020, DCN members gathered at the Mandarin Oriental Miami January 16 and 17 to network, discuss victories and challenges as media companies evolve, and explore industry predictions.
The new decade calls for a perfect ‘20/20’ vision, said Jason Kint, CEO, Digital Content Next, as he kicked off the closed-door, off-the-record gathering. That encompasses continued focus on audience desires, pushback against the myth that all content has to be free, and the elevation of trust and transparency in an era marked by ‘fake news’.
The European Union’s recently enacted copyright law is a win for the industry, with similar discussions expected this year in the U.S., noted Kint. Federal and state investigations as well as emerging regulations are all good signals toward protecting consumer privacy, regulating data use, and addressing antitrust concerns.
We can also expect a steady rise in content investments. UBS estimates that in 2020, a combined 16 media firms will spend $100 billion to produce content. More than $35 billion will be allocated to streaming video content as new players such as Disney Plus and NBC’s Peacock emerge.
“I’m feeling really good this year about where things are headed,” said Kint.
Platforms and policy
Jim Bankoff, CEO, Vox Media, said he valued being at the DCN Summit. He described it as a place where premium publishers come together to “find ways to partner and to check our healthy, competitive impulses … and figure out ways to work together” after ceding ground to third-party big tech platforms and ad networks “that have proven time and again not to have our best interests in mind.”
Investigative journalist Carole Cadwalladr, who freelances for the Guardian and Observer, captivated the audience by recounting her experiences unearthing the activities of Cambridge Analytica and Facebook. She was nominated for a Pulitzer Prize for her work, which sparked international investigations as well as inspiring the Netflix documentary, ‘The Great Hack’.
“This was my introduction to this world of creepy disinformation, but also complete reluctance from the platforms to even acknowledge the problem, let alone deal with it,” she noted. She was instead subjected to legal pushback from Google and Facebook as well as online bullying.
She also called for media companies not to compete against each other. Instead, she encouraged those in the room to join together to “compete against lies and falsehoods. We’ve seen it in Britain and you’re next,” said Cadwalladr.
Monopolies
Scott Galloway, professor, NYU Stern School of Business, said he believes that the big tech companies on the antitrust radar should be broken up. Monopolies kill economic growth and are a “key step to tyranny,” he contended, adding that a co-opted government can’t serve as a dominating force for protection.
Galloway pointed out that efforts to regulate the behavior of big tech through fines have been largely ineffectual. To date, the fines haven’t been punitive enough to push the big tech companies to modify their behavior, he said. He also criticized the federal government for being slow to act.
Money matters
Monetization and concerns about subscription fatigue were recurring themes at the summit. Yet DCN research shows that younger audiences in particular appreciate the value of a subscription and finds that there is still consumer appetite for subscription products.
Jonah Peretti, founder and CEO, Buzzfeed, noted that over the course of a few short years, the company has begun to generate significant licensing revenue from Facebook, Google, Amazon, and Netflix.
“I don’t think Facebook or Google wants to buy news companies,” said Peretti. Of the platforms’ movement toward paying for content, he said that “They get the benefit of sharing some of the costs of the production of that content. News is a great way to direct repeat visitors and to build trust in the platform to avoid some of the problems of misinformation.”
Media shifts
Kevin Turpin II, president, National Journal, noted his longstanding publication adapted to the changing media landscape by transforming itself from a media company into a government research and consulting services company for which subscribers are willing to pay premium prices.
Jim VandeHei, co-founder and CEO, Axios; Executive Producer, AXIOS on HBO, said, “you have to deliver content in a way that I would deliver in a conversation with you over a drink, like what is new.” However, to create value: “Tell me why it matters. Give me some context. Give me the power to go deeper.”
For Complex, the path to success hasn’t been simple. Rich Antoniello, CEO and founder, Complex Networks, said, “we call ourselves a brand that happens to monetize through media.” He said his company shifted from an ad-dependent model in 2016, ahead of the curve.
One example is the wild success of its “Hot Ones” program. It features 10 questions for its celebrity guests that get progressively more personal, paired with hot sauce that gets progressively hotter. And the business model is based not on advertising but on the sales of high-margin hot sauce.
Antoniello also outlined the success of ComplexCon, the company’s flagship event, which connects cultural icons with fans who spend $100 to $700 for VIP tickets, with hundreds of thousands sold. Fans also snap up merchandise from Complex and its app-based vendors such as Nike and Adidas.
The power of fandom arose again when Howard Mittman, CEO, Bleacher Report, spoke of how his company’s app and successful franchises attract sports fans. He described how individual athletes hold more sway over fans’ habits than sports franchises do.
Nearly 10 million fans have signed up for alerts, and the app accounts for half of the company’s user engagement. Bleacher Report’s focus is not on breaking sports news but on creating engagement on its own platforms, according to Mittman.
Her story
Media continues to go through cultural shifts toward diversity, both in company staffing and in targeting readership such as women. “Women are generally not seeing themselves in media and advertising to the extent that they should be,” said Catherine Levene, president, chief digital officer, Meredith National Media Group.
“We have been the first to support #SeeHer, a national organization committed to accurate representation of women in media and advertising,” she said. She added that’s not only good for supporting women, but also for the bottom line. Women who see themselves in media and advertising are 45% more likely to recommend a product to a friend and purchase it, said Levene.
Despite the controversy it has attracted from those who question the veracity of its science, Gwyneth Paltrow’s Goop brand is growing, noted Elise Loehnen, chief content officer. The platform embraces several media forms and covers topics from relationships to health, including alternative therapies. She said that the controversy has been good for keeping the brand at the forefront of popular discussions.
“We’re tired of being talked down to,” said Loehnen. “We’re a strong female brand undisturbed by the chaos.”
Adapt or die
Rishad Tobaccowala, chief growth officer, Publicis Groupe, noted that the only way to get ahead as a legacy company is to “kill your core. You have to rethink your entire business.”
Levene from Meredith believes that the mobile world and 5G will create an even greater market for video. And, with 50% of searches conducted on the more than 200 million voice-enabled devices in U.S. homes, opportunities and challenges will arise.
Google’s action to purge third-party cookies against the backdrop of GDPR and CCPA will impact the entire digital ecosystem, Levene noted.
“Data is going to be the currency of the future. Those who have it at scale and the ability to drive a lot of insights from it are going to win,” she added.
Kindness matters
In a social media environment that is being blamed for everything from decreasing personal contact to radicalizing disaffected youth and intensifying suicide rates among girls, Tatyana Mamut, head of product, Nextdoor, made the case that her platform is creating connections on a micro-level, one neighborhood at a time, in an era when people hardly know their neighbors.
“I believe that kindness is the next big thing in tech,” she added.
Palo Alto journalism educator Esther Wojcicki made the case that helicopter parenting has impacted the workforce and its ability to embrace risk and innovation. She calls for parenting – and management – to embrace trust, respect, independence, collaboration, and kindness. She also promotes the idea that every student should take a journalism course to build media literacy skills.
The future will be fraught with change. And as Tobaccowala pointed out, “human beings know how difficult change is.” But to survive, media companies must continue to evolve.
“We have the power to shape minds and hearts, to fill the world with laughter and tears, to inform the truth,” said Kint. “Here’s to 2020 bringing the roar of the crowd as we focus on what matters most: the audiences we serve.”
An alarming number of consumers don’t trust the media. Since trust hit its all-time low in 2016, the industry has been hard at work restoring this critical factor. The media industry and social platforms now employ a wide range of approaches to address the proliferation of inaccurate and misleading stories. Some media brands have undertaken marketing and educational efforts to make the connection between brand and the quality of information more explicit. And labeling has been used as a means to help consumers quickly identify the source, and type, of information they are viewing.
This last approach — labeling — takes a classic print strategy and brings it into the digital medium. A new study from The Center for Media Engagement (CME) set out to evaluate the effectiveness of labeling stories. Unfortunately, the primary takeaway is that labeling alone does not improve consumer trust in the information before them. In fact, most of them don’t even notice labels or recall them accurately after reading an article.
Labels alone will not restore trust
However, this is not to suggest that labeling should be abandoned altogether. Upon deeper inspection, the research found that some labels work better than others. The research also suggests that, when conceived of as explicit and even educational, labels may be effective as part of an overall trust-building strategy.
Does the in-story explainer label work better than the above-story label?
Key findings from the research:
Labeling stories did not affect trust.
Nearly half of the participants did not notice whether the story was labeled.
Those who reported seeing a label were not particularly accurate in recalling the type of label.
Of the two labels, recall was better for the in-story explainer label.
What label?
Clearly, the research demonstrates that people glaze over most story labels (e.g., news, analysis, opinion, sponsored) if they notice them at all. Overall, 45% reported that they did not notice whether an article was labeled or not, and that percentage did not vary depending on whether the article actually was labeled.
More concerning was the finding that, when asked, most people believed that the article was labeled news. This is a potentially problematic default assumption given efforts to use labels to prevent the spread of disinformation and to help consumers distinguish opinion, commentary, and satire from hard news and analysis.
The research analyzed whether the effect of the story labels on the ability to recall the label varied based on participants’ backgrounds, including their age, race, ethnicity, education, income, gender, political ideology, and political partisanship. It is interesting to note that only one variable seemed to matter: age. The younger the participant, the more likely they were to recall the label correctly when the story was labeled news or opinion.
Explainer labels, explained
CME also compared the traditional above-story label to an in-story explainer label and no label at all. The in-story explainer label provided definitions of each label based in part on those proposed by the Trust Project.
Overall, the study found that in-story explainer labels increased the likelihood that people would recall the correct label compared to those who did not see a label and those who saw above-story labels. However, many people still failed to recall whether the story they read was labeled or not.
Sixty-three percent of those who saw an article without a label said they did not recall whether there was a label (25% correctly recalled that it was not labeled). Fifty-eight percent of those who saw an article with an above-story label could not recall whether the article was labeled (24% correctly recalled what the article was labeled). On a more encouraging note, only 24% of those who saw an article with an in-story explainer label failed to recall whether the article was labeled, and 66% correctly recalled the article label.
More work to be done
Unfortunately, regardless of label type, the use of labels alone did not improve consumers’ view of the information’s trustworthiness. Past research from CME suggests that a combination of strategies to signal trust – such as story labels, author biographies, and descriptions of how the story was reported – can increase trust.
Given readers’ digital consumption habits, the low recall for labels, particularly those placed above the story, is a significant finding. Taken together with the somewhat greater effectiveness of explainer labels, efforts such as describing how a story was reported suggest that transparency and consumer education will be critical in restoring trust in digital information.
Most academic, media, and political analysts forecast that disinformation is likely to play a large role in the upcoming 2020 presidential election. The NYU Stern Center for Business and Human Rights echoes this projection in its new report, Disinformation and the 2020 Election: How the Social Media Industry Should Prepare. In fact, the analysis predicts that more disinformation will be generated stateside than by foreign entities.
What to expect from disinformation
Deepfakes increase in volume. Deepfakes are easier to produce now due to advancements in deep-learning and editing systems. A deep-learning system produces a persuasive fake video by studying photographs and videos of a target person and merging them with images of an actor speaking and behaving in the same manner as the target. Once a preliminary fake is produced, a method known as generative adversarial networks (GANs) makes it more believable. The GANs process detects any inaccuracies and corrects them. After a few rounds, the new fake video is complete and ready for amplification (a minimal sketch of this adversarial loop follows this list).
Disinformation spreads to the political left. While domestic disinformation comes most often from the political right, the left is also engaging in its creation and spread on social media.
Misled Americans stage events. Americans are now being recruited by Russian organizations to stage real-life activities that spread disinformation. From deceptive IRA social media personas to anti-Muslim and pro-Muslim demonstrations, these events are promoted online to American followers to attend and draw media coverage.
Instagram, owned by Facebook, will be used more to spread disinformation. Image and video services are ideal for spreading disinformation via memes — photos combined with short, punchy text — and video clips.
WhatsApp, also owned by Facebook, will be used to amplify disinformation. WhatsApp was used to send false content to large populations in the elections in Brazil and India. It could be a very active force in the U.S. 2020 presidential election.
Increased international activity. Not only is Russia involved in the creation and spread of falsehoods but Iran and China are also suppliers of disinformation.
Digital voter suppression continues as a threat in 2020. According to the University of Wisconsin, users tried to suppress voter turnout in 2018 by creating Twitter campaigns. One post tried to give Trump opponents incorrect voting-day information. Another tried to intimidate liberal voters by saying that NRA members and Republicans were bringing their guns to the polls.
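As an illustration of the adversarial refinement described in the deepfakes item above, here is a minimal GAN training loop in PyTorch. It operates on toy vectors rather than video frames, and the network sizes and hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator produces fakes; discriminator tries to tell fakes from real data.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(32, data_dim)  # stand-in for features of real footage

for step in range(100):
    # Discriminator step: learn to accept real samples and flag fakes.
    fake = G(torch.randn(32, latent_dim)).detach()
    d_loss = (loss(D(real), torch.ones(32, 1))
              + loss(D(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: correct whatever "inaccuracies" the discriminator
    # detects, so each round of fakes is harder to distinguish from real.
    fake = G(torch.randn(32, latent_dim))
    g_loss = loss(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each pass of this loop is one round of the detect-and-correct cycle the report describes; in production deepfake tools the vectors are video frames and the networks are far larger.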
Responding to disinformation
Social media companies have put a few new measures in place since 2016 and 2018. They are now communicating more with each other, the government, and outside experts in an effort to address disinformation. Regardless, more has to be done to prepare for 2020. The NYU Stern Center for Business and Human Rights suggests the following recommendations for social platforms:
Detect and remove deepfakes. Improve efficiency in removing deepfakes before they do their damage.
Remove content that is provably false. Purge content that is definitively untrue.
Hire a content overseer. Appoint a senior official to oversee the process of guarding against disinformation.
Make changes at Instagram. Act assertively to protect users from disinformation. Instagram doesn’t remove or down-rank disinformation found in a user’s main feed. While the service does make it harder for new users to access the false content, that is not a forceful enough action to stop the problem.
Limit the reach of WhatsApp. WhatsApp now limits the reach of a message to 1,280 users (5 chat groups x 256 members in each) versus the previous maximum reach of 65,536 users (256 chat groups x 256 members in each).
Defend against for-profit disinformation. Social platforms need to pay close attention to false content distributed by corporations, consultants, and public relations firms. Many companies specializing in clickbait run successful businesses luring naïve and intrigued consumers attracted to conspiracy theories and fake items.
Support legislation on political ads and voter suppression. Push the Senate to approve the Honest Ads Act, which would extend political disclosure standards to online ads.
Improve industry-wide collaboration. Form a permanent inter-company task force devoted to fighting disinformation.
Teach social media literacy to the public. Educate users about questionable content and what to do if they come across it.
NYU cites findings from the Oxford Internet Institute and from four universities: Princeton, the University of Exeter, Washington University, and the University of Michigan. This body of research shows a decrease in disinformation between 2016 and 2018. However, with 2020 being a presidential election year, all anticipate intensified interference by both foreign and domestic actors. Continued efforts are needed to combat disinformation and protect the electoral process.
The business of publishing disinformation, inaccurate information spread purposefully and/or maliciously, is more profitable than ever according to a new study from the Global Disinformation Index (GDI). Analyzing website traffic and audience information from 20,000 domains it suspected of disinformation, GDI estimates the sites generated at least $235 million in ad revenue. GDI is a nonprofit that evaluates and rates websites’ risk of spreading disinformation.
A fertile environment
Disinformation predates the internet. However, social platforms offer a new level of amplification. Social media, coupled with a programmatic marketplace, provides the perfect environment for malevolent actors looking for reach, a target audience, and revenue.
Ad-tech companies’ opaque practices make the environment even more attractive. Some connect buyers and sellers while others collect, aggregate, package, and sell data. The result is a black box of operations. These shifts in the digital ecosystem offer a golden opportunity for marketers of fraud and disinformation.
Focused approach
Disinformation actors rely on one another to amplify their messages. Interfering with one actor could potentially make it more difficult for the others to spread their disinformation. However, despite their inter-reliance, disinformation actors each have a distinct focus:
State actors include governments as well as state-linked groups that spread inaccurate information and promote government propaganda. They are centralized actors using digital virality to amplify their message.
Private influence operators are for-hire companies (e.g. Cambridge Analytica) that run commercial marketing and public relations campaigns that aim to disinform the public. They use targeted campaigns to identify a specific psychological, behavioral or politically affiliated audience to amplify their message. Their misleading and false content sites look professional. These ad-supported domains mimic traditional journalism.
Grassroots trolls are individuals or groups that band together for a specific issue or cause. Their content and activities often focus on hate speech or try to push a false narrative. Their messaging often starts out on forums like 4chan or 8chan, moves to intermediate platforms like Reddit, and then finally breaks into mainstream media.
Pure rent-seekers are all about clickbait. They churn out sensational disinformation to drive visitors (and bots) to click on their site in order to collect revenue.
It’s not surprising that today’s digital marketplace offers an effective delivery system for disinformation actors. Unfortunately, the internet is filled with disinformation that is rapidly amplified via social media. And it is human nature to find drama attractive. Disinformation is loud content that demands our attention. It also claims advertising dollars in an efficient and expedited programmatic manner. As an industry, we need to de-incentivize disinformation actors by removing financial and amplification motivators.
Disinformation comes in all shapes and sizes. Whether it takes the form of a text-based article, a meme, a video, or a photo, it is designed to go viral across message boards, websites, and social platforms like Facebook, Twitter, and YouTube. And it’s polluting the internet. In fact, an Oxford Internet Institute study found that in the 30 days leading up to the 2018 U.S. midterm elections, a full 25% of Facebook and Twitter shares contained misleading and deceptive information claiming to be real news. Addressing concerns about domestic disinformation, Paul Barrett of the NYU Stern Center for Business and Human Rights identified the steps social platforms need to take to stop the problem in a new report, Tackling Domestic Disinformation: What the Social Media Companies Need to Do.
Disinformation epidemic
In the report, Barrett cites an MIT study that analyzed every English-language news story distributed on Twitter over an 11-year period and then verified the content of each story as either true or false. The study found that, on average, false news is 70% more likely to be retweeted than true news. Where are all these falsehoods coming from? A Knight Foundation study found that 65% of fake and conspiracy news links on Twitter could be traced back to just 10 large disinformation websites (e.g., Infowars).
First Amendment
Domestic disinformation is a constant in today’s digital experience. While many call for its removal, others believe that it’s difficult to differentiate it from ordinary political communication protected by the First Amendment. Importantly, Barrett does not suggest that the government determine what content should be removed from social media. He believes social platforms can make better choices in determining whether content is accurate, and also in how they promote and rank it.
Practices in place
Social platforms use machine learning to improve their ability to identify false stories, photographs, and videos. In addition, while Facebook previously flagged content to warn readers that it was potentially false, it now offers “Related Articles,” a feature that provides factually reliable context about misleading stories. YouTube offers a similar program: when a user searches for topics that YouTube identifies as “subject to misinformation,” it prefaces video results with a link to information from reliable third parties. Even with these efforts, disinformation remains available on these platforms for anyone to view and share. Social platforms’ current practices are not enough to reduce disinformation.
Barrett’s recommendations:
Remove false content. Content that is proven to be untrue should be removed from social media sites, not just demoted or annotated.
Clarify the principles for content removal. Offer insight and transparency into what constitutes facts and rational argument versus the manipulation of information for disinformation purposes.
Hire a senior executive who has company-wide responsibility for combating false information.
Establish a more robust appeals process. Offer an opportunity for an appeal to a person or people not involved in the initial content removal decision.
Step up efforts to purge bot networks. Increase efforts to eliminate automated accounts that imitate human behavior online.
Retool algorithms to reduce the monetization of disinformation. Doing so will diminish the incentive for, and therefore the amplification of, fake news.
Provide more data for academic research. The platforms have an ethical and social responsibility to provide the data they possess to facilitate studies and tracking of disinformation.
Increase industry-wide cooperation. Establish a data exchange and share best practices across platforms to ensure common challenges are addressed.
Support initiatives for digital media literacy. Teaching people digital literacy skills and how to be more discriminating about online content should remain a priority.
Sponsor more fact-checking and explore new approaches to authenticate news content. Continue fact-checking efforts, a crucial first step in distinguishing truth from falsehood.
Support narrow, targeted government regulation. Identify specific content regulations for online political advertising, similar to the degree of disclosure currently required of traditional broadcast media.
Barrett concludes that “neither the First Amendment nor international principles protect the lies on social media.” It’s essential for social platforms to step up their self-governance to ensure disinformation is not monetized or, worse, used to manipulate people and trigger violence. Importantly, humans must remain in control of platforms, overseeing the impact of AI in all its forms. Scrutiny and transparency are key in uniting efforts to dismantle disinformation.