How publishers fight misinformation in ads
November 1, 2023 | By Tobias Silber, CBO, GeoEdge (@tobiassilber)

Wartime and elections pose the ultimate test for publishers’ programmatic advertising safeguards. Unfortunately, these measures have faltered in Q4. Since October 10th, GeoEdge’s security research has detected gruesome ads across U.S. publisher sites, exposing flaws in publishers’ ability to accurately categorize and thwart violent ads.
In today’s sensitive international media sphere, publishers must confront disinformation by prioritizing responsible reporting and implementing resilient ad quality policies. Advertising undeniably influences media trust: 64% of consumers say that seeing bad ads undermines their trust in all media organizations, not just the site where the ad appears. By identifying and blocking offensive ads, ad-supported media can maximize ad revenue while maintaining audience trust.
The breakdown of publisher brand suitability
Since the start of the Israel-Hamas conflict in early October, programmatic channels have been inundated with violence, graphic imagery, and fringe support for terrorist activity. In an effort to engage audiences and garner support, advertisers have run ads featuring incitement to violence, weaponry, and gruesome war-related content.
These ads are intentionally designed to elicit viewers’ shock, fear, and horror, which results in significant effects on publisher engagement metrics. GeoEdge research revealed that 73% of consumers would not recommend sites with offensive ads to others, and 56% of consumers leave sites or apps due to unwanted ads. These ads take an emotional and physical toll on audiences, leading to reduced session times, increased churn rates, and a negative association with the publishers’ brand.
Upholding audience safety amid crisis
Publishers face the critical task of deciding whether specific advertisers and sensitive, hot-button issues can be allowed to run ads on their sites, all while thwarting malicious actors.
“Media organizations frequently turn to upstream partners, ad exchanges, and SSPs for ad filtering, yet the presence of explicitly violent ads on publishers’ sites underscores their failures,” stated Amnon Siev, CEO at GeoEdge. GeoEdge’s security research team revealed that tech giants, including Google Ad Manager, fail to prevent ads that promote terror and clickbait, disseminating misinformation and graphic content.
Publishers can firmly grab the reins by:
- Establishing a proactive policy for monitoring harmful messaging and provocative visuals in both ads and landing pages.
- Identifying and blocking advertisers that seek to provoke fear and shock.
- Empowering readers to flag offensive ads directly from the ad slot. This allows the ads to be quickly reviewed, and any problematic ads can be referred to the ad ops team for immediate action.
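The reader-flagging workflow described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not GeoEdge’s actual implementation: the class name, the report threshold, and the escalation rule are all assumptions made for the example.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class AdReportQueue:
    """Hypothetical sketch: collect reader reports per creative and
    refer a creative to the ad ops team once enough reports arrive.
    The threshold of 3 is an illustrative assumption."""
    escalation_threshold: int = 3
    _reports: dict = field(default_factory=lambda: defaultdict(list))
    escalated: set = field(default_factory=set)

    def flag(self, creative_id: str, reason: str) -> bool:
        """Record one reader report; return True the first time the
        creative crosses the threshold and should go to ad ops."""
        self._reports[creative_id].append(reason)
        if (creative_id not in self.escalated
                and len(self._reports[creative_id]) >= self.escalation_threshold):
            self.escalated.add(creative_id)
            return True
        return False
```

In practice the `flag` call would sit behind an endpoint wired to a “report this ad” control in the ad slot, so problematic creatives surface to the ad ops team without readers leaving the page.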
Clickbait, deepfakes, and misleading advertising
As the 2024 election season approaches, ad spend is expected to top $10 billion in order to sway American voters. However, incorporating election ads into a revenue strategy isn’t cut and dried. Two-thirds of consumers (67%) believe that the primary responsibility for keeping bad ads at bay lies with the website owner. Media executives must grapple with misinformation and divisive election ads; otherwise, they risk audience alienation, diminished brand value, and loss of quality advertisers.
To maintain audience trust, publishers must keep a close eye on four categories: candidate-focused ads, attack ads, fundraising ads, and issue-based ads. It is critical to ensure these ads align with audience values and truth in advertising.
Navigating election disinformation
Generative AI and deepfake technology have already been used to try to sway American voters. Responsible publishers must proactively detect and prevent their spread.
Publishers’ ability to maintain oversight of their programmatic inventory is essential. When serving political content, there are several approaches media executives can take to combat political disinformation:
- Red-light the entire 2024 election category, with exceptions for specific sites and advertisers.
- Automatically approve all election-related ads, then eliminate misaligned ads and advertisers.
- Give users the ability to flag potentially problematic content themselves. User reports enable ad ops teams to take swift action based on user feedback and ensure a safer, more truthful environment.
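The first two approaches above amount to a default-deny or default-allow policy over the election category. Here is a minimal sketch of the default-deny (“red light with exceptions”) mode; the class, field names, and category string are illustrative assumptions, not a real ad-server API.

```python
from dataclasses import dataclass

@dataclass
class ElectionAdPolicy:
    """Hypothetical sketch of a category-level ad policy:
    block the election category by default, admitting only
    allowlisted advertisers as exceptions."""
    block_category: bool = True
    allowlisted_advertisers: frozenset = frozenset()

    def allows(self, advertiser: str, category: str) -> bool:
        if category != "election":
            return True          # policy only governs election ads
        if not self.block_category:
            return True          # default-allow mode: rely on after-the-fact removal
        return advertiser in self.allowlisted_advertisers
```

Flipping `block_category` to `False` models the second approach, where everything serves and misaligned ads are removed after review; the trade-off is speed to revenue versus exposure to disinformation before takedown.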
The turning point
Ad-supported media now face a defining moment to reevaluate their ad quality policies and bring truth in advertising to their audiences. Those who seize the opportunity are taking a significant step towards maintaining the trust of their audiences. Those who fail to do so not only risk losing that trust, they risk losing their audiences altogether.