
Ad effectiveness measurement doesn’t scale, and it’s failing advertisers

October 23, 2019 | By Jamie Auslander, SVP Research & Analytics, true[X] (@trueX)

Market research is big business: the top 50 research firms generated an estimated $11 billion in 2018, with Nielsen and Kantar/Millward Brown at the very top of the list. But it’s even bigger than that. Add in expenditures to the other third-party providers that measure fraud, safety, viewability, and outcomes, and any national “ad measurement expenditure” figure increases by hundreds of millions of dollars, if not more.

Given the US spend on measurement, not to mention how many digital innovations have taken root over the past two decades, it is disappointing that most branding campaigns are still not directly measured for ad effectiveness and impact in real time. This is not to say that monitoring for fraud and brand safety, evaluating viewability, tracking impressions, and counting clicks aren’t valuable efforts. They are. The issue is that while all brand advertising campaigns are measured for their quantitative impression delivery, only a fraction are measured for their effectiveness at changing consumers’ hearts and minds. The result? Measurement that doesn’t scale.

Rethink the panel

Here’s a modest proposal: We need to critically reconsider our industry-wide reliance on panel-based sampling as a means of measuring brand lift. Conventional lift studies that use panels are expensive and typically do not collect enough data, so they aren’t informative enough early in the campaign flight to be highly actionable. With access to real-time technologies, we can leverage considerably better alternatives.

Let’s look at the cost and examine whether we’re really getting our money’s worth. It is unheard of to purchase a completed survey from a sample vendor for less than $2 per interview. A well-designed study needs roughly 500 control and 1,000 exposed completes, which lets researchers address different types of error while still having enough data to slice and dice. At $2 per complete, that works out to about $3,000 per study. The reality, however, is that most brand lift studies cost orders of magnitude more. And yet responses are still slow to field, and collection levels typically hover at 100 control and 100 exposed.
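To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The $2 CPI and the 500/1,000 and 100/100 sample sizes come from the paragraph above; the 30% and 35% favorability rates are hypothetical numbers chosen purely to illustrate the precision you buy at each sample size.

```python
import math

cost_per_complete = 2.00           # floor price per completed interview (USD)
n_control, n_exposed = 500, 1000   # recommended completes

total_cost = cost_per_complete * (n_control + n_exposed)
print(f"Minimum fieldwork cost: ${total_cost:,.0f}")  # -> $3,000

# Standard error of a lift estimate (difference in proportions), assuming
# hypothetical favorability of 30% in control and 35% among the exposed.
p_c, p_e = 0.30, 0.35
se = math.sqrt(p_c * (1 - p_c) / n_control + p_e * (1 - p_e) / n_exposed)
print(f"Lift at 500/1000: {(p_e - p_c):.1%} +/- {1.96 * se:.1%} (95% CI)")

# At the 100/100 collection levels most studies actually reach, the same
# five-point lift is swamped by noise.
se_small = math.sqrt(p_c * (1 - p_c) / 100 + p_e * (1 - p_e) / 100)
print(f"Lift at 100/100:  {(p_e - p_c):.1%} +/- {1.96 * se_small:.1%} (95% CI)")
```

Even at the recommended sample sizes a five-point lift is only just detectable; at 100 and 100, the confidence interval is wider than the effect itself.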

Some issues

Collection cost, yield, and timeliness aren’t the only barriers to improving sample-based effectiveness measurement. Representative sampling and response quality can be even thornier issues. Surveys have a big problem with straightliners (people who select the same response option on every question), speeders (who complete the survey too fast to be credible), and cheaters (who do not respond truthfully). You’ve probably done this yourself, and you can’t entirely be faulted: even a “good” panel-based survey takes ~8 minutes to complete, yet most sample providers don’t blink when asked to fulfill studies of 20 minutes or longer. Considering how hard it is to hold consumer attention these days, if you were in the seventh minute of a survey about the Gap, what quality of response would you be likely to provide?
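As a rough illustration, here is the kind of screening logic researchers run over completed interviews to flag straightliners and speeders. The ~8-minute benchmark comes from the paragraph above; the one-third-of-benchmark speeder cutoff, the field layout, and the sample respondents are assumptions for the sketch, not an industry standard.

```python
BENCHMARK_SECS = 480  # a "good" panel survey takes ~8 minutes to complete

def quality_flags(answers: list[int], duration_secs: float) -> list[str]:
    """Return data-quality flags for one completed survey."""
    flags = []
    if len(set(answers)) == 1:               # same option on every question
        flags.append("straightliner")
    if duration_secs < BENCHMARK_SECS / 3:   # implausibly fast completion
        flags.append("speeder")
    # Cheaters are harder to catch programmatically; they typically
    # require trap questions or attention checks built into the survey.
    return flags

# Three hypothetical respondents to a five-question, 1-5 scale survey.
respondents = [
    ([3, 3, 3, 3, 3], 95),   # straightliner (and fast enough to be a speeder)
    ([4, 2, 5, 3, 4], 40),   # speeder
    ([4, 2, 5, 3, 1], 310),  # clean
]
for answers, secs in respondents:
    print(answers, secs, quality_flags(answers, secs))
```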

Quality control concerns are bigger still when you ask the question: Who are these “panelists”? Do you personally belong to a panel? Did you click a banner ad to “get paid mad cash to take online surveys”?

The reality is that even though the “best” panels market their pools as containing millions of Americans, there is very little transparency around those pools and who actually participates in them. Given the cash-based approach to attracting panelists, it is fair to question their recruitment practices and how representative these panels are of the populations they aim to mirror. How panels engage, recruit, and incentivize respondents matters, because it is highly likely to affect data quality and the resulting inferences.

Measurement matters

When academics contribute to the scholarly literature around ad effectiveness measurement, convenience sampling from panels is rarely, if ever, portrayed as a best practice. A 2006 comScore study in the US concluded that fewer than 1% of panelists in the 10 largest panels were responsible for 34% of completed questionnaires.

Things haven’t necessarily gotten better since. Most sample providers today cannot field a study from a single panel, because no single panel yields enough respondents. Instead, they stitch together a network of suppliers to source respondents, which introduces serious sampling challenges.

Moreover, in the last few years sample suppliers have energetically embraced programmatic technology. Sourcing sample is now more efficiently connected to demand than ever, but the promise of randomness is upended by the economic incentive to find the cheapest possible respondent to answer your brand study.

Explore the alternatives

In short, there are many issues with panel-based ad effectiveness measurement that should compel us to explore better alternatives. Thanks to digital technology, these alternatives exist. While it might sound surprising, the next generation of brand lift measurement should consider returning to the tradition of random sampling. It is time to reinvest in sampling technology that enables respondents to be drawn from audiences that are truly representative of the populations exposed to brand advertising campaigns and of those who could have been exposed (control holdouts).

Imagine if large random samples could be drawn quickly from the very same sites, channels, devices, and platforms where advertising occurs. If a Toyota campaign runs during streamed episodes of The Bachelor, the surveys used to measure and optimize that campaign should also run during episodes of The Bachelor.
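One way to realize the control-holdout side of this is sketched below: assign each user deterministically to exposed or control at ad-decision time, so survey respondents are drawn from exactly the audience the campaign reaches. The hashing scheme, the 10% holdout rate, and the campaign and user identifiers are illustrative assumptions, not a description of any particular vendor’s system.

```python
import hashlib

HOLDOUT_RATE = 0.10  # fraction of the eligible audience withheld as control

def assign_group(user_id: str, campaign_id: str) -> str:
    """Deterministic, uniform assignment: same user + campaign -> same group."""
    digest = hashlib.sha256(f"{user_id}:{campaign_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1]
    return "control" if bucket < HOLDOUT_RATE else "exposed"

# Control users see a PSA or nothing; exposed users see the brand creative.
# Both groups remain eligible for the same in-context survey later on.
for uid in ["viewer-001", "viewer-002", "viewer-003"]:
    print(uid, assign_group(uid, "toyota-fall-2019"))
```

Because the assignment is a pure function of the IDs, it needs no shared state across servers, and the holdout stays stable for the life of the campaign.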

Here are some ideas to make measurement better:

  • Match survey placements to the placements where the ad campaign is flighted.
  • Serve surveys according to the same logic and targeting that steer a campaign toward its specific addressable audience.
  • Keep surveys to one or two questions by dual-purposing the very same ad and targeting tech that underlies how we serve campaigns today.

Just imagine how much better measurement would be if survey serving spoke to ad serving in real time.
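Concretely, that conversation could look something like the sketch below, where the ad-decisioning path occasionally slots a single-question survey into the very placement it would otherwise fill with a creative. The function names, the 2% survey rate, the question text, and the creative IDs are all hypothetical.

```python
import random

SURVEY_RATE = 0.02  # fraction of eligible impressions used for measurement

def decide_slot(user_group: str, placement: str) -> dict:
    """Reuse the campaign's serving logic for both ads and lift surveys."""
    if random.random() < SURVEY_RATE:
        # Same placement, same targeting: a one-question brand survey whose
        # response is tagged exposed/control by the user's holdout group.
        return {"type": "survey", "placement": placement,
                "question": "Which of these brands comes to mind first?",
                "group": user_group}
    if user_group == "exposed":
        return {"type": "ad", "placement": placement,
                "creative": "brand-spot-30s"}
    return {"type": "ad", "placement": placement, "creative": "psa-filler"}

print(decide_slot("exposed", "the-bachelor-midroll"))
```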

Yes, we can advertise better. We can also measure better.
