

InContext / An inside look at the business of digital content

What publishers need to measure for healthy header bidding

April 6, 2021 | By Alexandra Balashoyu, Head of Publisher Operations – The MediaGrid@IPONWEB

Header bidding has become an essential component of most publishers’ ad monetization strategies, enabling better inventory fill rates and higher revenue. It allows publishers to receive bids from multiple trading partners at the same time. Contrast this with the traditional ‘waterfall’ method of trading, in which inventory is passed to ad networks sequentially.

Five critical metrics

Header bidding is a success story because it improves on what came before. But that doesn’t mean it is optimized to drive the best possible results for each publisher. The following five metrics can help publishers evaluate the health of their header bidding setup, and show how this information can be used to further increase revenue.

1. Page Load Speed (the time it takes to fully display the content on a page)

Header integrations can be client-side or server-side. (Very simply, in client-side header bidding all the auction-related activity takes place in the user’s browser, while in server-side bidding it happens on a standalone ‘auction’ server.)

As a general rule, client-side increases the audience match rate. (This, in turn, increases CPMs and monetization potential.) But it also increases page latency. Server-side reduces latency, but at the expense of the match rate. When it comes to selecting which ad stack to go live with, publishers are forced to choose between prioritizing revenue and maintaining the user experience.

For most publishers, a combined approach that leverages both client and server-side setups is optimal, but it needs to be fine-tuned regularly. Ideally a publisher will have an A/B testing framework that moves client-side partners to server-side one-by-one, testing the efficacy in both locations (client versus server). By measuring for revenue, CPMs, page performance, viewability, and bidder timeouts between the test and control groups for the integration locations, the publisher can find the optimal balance to ensure maximum revenue.
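The test-versus-control comparison described above can be sketched in a few lines. This is a hypothetical illustration: the metric names and figures below are invented stand-ins for data a publisher would pull from its analytics and ad server logs.

```python
# Sketch of an A/B test readout for moving one SSP from client-side to
# server-side. All numbers and metric names are hypothetical; real
# values would come from the publisher's own reporting.

def relative_delta(test: float, control: float) -> float:
    """Percentage change of the test group versus the control group."""
    return (test - control) / control * 100

# Control: SSP called client-side. Test: the same SSP moved server-side.
control = {"revenue": 1000.0, "cpm": 2.10, "page_load_ms": 3200,
           "viewability": 0.62, "timeout_rate": 0.18}
test = {"revenue": 995.0, "cpm": 2.05, "page_load_ms": 1900,
        "viewability": 0.70, "timeout_rate": 0.04}

report = {metric: round(relative_delta(test[metric], control[metric]), 1)
          for metric in control}
print(report)  # e.g. page_load_ms drops ~40% while revenue is roughly flat
```

A readout like this (large latency and viewability gains for a negligible revenue change) is the signal that a partner can safely stay server-side.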

For example, an SSP might have a tendency to time out in a particular region when it is called directly from the browser. A successful combined approach might see the publisher permanently move this SSP from client to server side in this specific geolocation. A/B testing, which can be carried out by the publisher or a partner, will show revenue remaining the same and the page latency being reduced. Using this technique with MediaGrid partners, we’ve seen load times reduced by up to 50% and viewability increase by more than 10%.

2. Timeout Rate (how often bidders fail to return ad auction bids within the publisher time limit)

When a bidder fails to return a bid within the timeout limit specified by the publisher, the bid is said to have “timed out.” The timeout rate indicates how often a bidder fails to return a valid bid response within the required time period (i.e. while the page is waiting for it) compared to how often it achieves this. When viewed alongside the bid rate and win rate, timeout rate can help publishers understand the opportunity cost of retaining a particular bidder. Bidders with a consistently high timeout rate harm the site’s user experience and the publisher’s revenue-generating ability.
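The timeout rate itself is a simple ratio over bid logs; a minimal sketch, with hypothetical counts:

```python
# Minimal sketch of a per-bidder timeout rate. The counts are
# hypothetical; real figures would come from header bidding analytics.

def timeout_rate(timed_out: int, responded: int) -> float:
    """Share of bid requests that timed out before returning a response."""
    total = timed_out + responded
    return timed_out / total if total else 0.0

print(timeout_rate(120, 880))  # 0.12, i.e. 12% of requests timed out
```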

Historically, publishers grouped all ads on a page and sent them to demand partners in a single request, with one universal timeout. While this reduces page latency, it increases the likelihood of timeouts and may reduce both the fill rate and the user experience.

A better way to manage timeout rates is to group ads based on page position (above the fold and below the fold, for example). Then, send these in separate requests to demand partners, with different timeout windows. By tracking timeouts, a publisher can see the time frames in which partners respond. Those with shorter response times can be grouped in ad calls for above the fold inventory. Partners with longer response times can be placed in a second ad group lower down on the page.
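The grouping logic can be sketched as a simple partition on observed response times. Bidder names, latency figures, and the timeout windows below are hypothetical assumptions, not measured values.

```python
# Sketch of splitting bidders into two ad calls by observed latency.
# All bidder names and millisecond figures are hypothetical.

# Median response time (ms) observed per bidder over a reporting window.
median_response_ms = {
    "bidderA": 180, "bidderB": 450, "bidderC": 900,
    "bidderD": 220, "bidderE": 1400,
}

ATF_TIMEOUT_MS = 500   # short window for above-the-fold slots
BTF_TIMEOUT_MS = 1500  # longer window for below-the-fold slots

def assign_groups(latencies, atf_cutoff):
    """Fast bidders go in the above-the-fold call; the rest go below."""
    atf = sorted(b for b, ms in latencies.items() if ms <= atf_cutoff)
    btf = sorted(b for b, ms in latencies.items() if ms > atf_cutoff)
    return atf, btf

atf_group, btf_group = assign_groups(median_response_ms, ATF_TIMEOUT_MS)
print("ATF call (%dms timeout):" % ATF_TIMEOUT_MS, atf_group)
print("BTF call (%dms timeout):" % BTF_TIMEOUT_MS, btf_group)
```

The slower bidders are not dropped; they simply compete where their latency costs the page less.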

The slowest partner may still be a strong revenue generator for below the fold slots, even if they do not bring anything incremental to the table for above the fold inventory. Using this approach, rather than a single-request one, we’ve seen 10% higher CPMs and a 20% increase in bid rates.

3. Fill Rate (impressions served versus requests received)

Google Ad Manager (GAM) has traditionally prioritized direct sold campaigns. This means they will be served before line items that have been assigned a lower priority (even if the lower priority items have higher CPMs). These lower priority programmatic ads will get fewer opportunities to compete in auctions. This can adversely affect a publisher’s fill rate, and therefore revenue.

Publishers can achieve higher fill rates within their header bidding integrations by rethinking how they set line item priorities within GAM. This can be particularly important during periods when media buyers are looking to spend budgets (at the end of a quarter or the financial year, for example).

Traditionally, line items are set up in descending priority tiers: Sponsorship, Standard, Network, Price Priority, and House as the lowest, with header bidding scoped to run only as Price Priority line items. However, this setup is far from ideal since high CPM header bids are unable to compete with direct sold campaigns.

To correct this, publishers should start by identifying the CPM threshold where the header bidding fill rate of the line item stops growing proportionally to the price tier. (Note, this is data that can be obtained from GAM reports or the SSP’s bid density reports, which include both impressions and bids).

Using this value, publishers can change a line item’s GAM priority tier from Price Priority (which is where header bidding lines are historically placed) to Sponsorship or Standard. This will increase its priority. It will, therefore, increase opportunities to compete in auctions. Based on analysis carried out for our publishers, the CPM threshold tends to be between $15 and $20. Publishers can create higher priority line items for open exchange bids above the CPM threshold value ($20 for example) and let them compete with direct sold inventory.
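The routing rule amounts to a single threshold check. The tier names and the $20 cutoff follow the article; the function itself is a hypothetical illustration of the setup logic, not a GAM API.

```python
# Sketch of routing header bids to GAM priority tiers by CPM.
# The $20 threshold comes from the $15-$20 range cited in the article.
# This models the line-item setup decision; it is not GAM's API.

CPM_THRESHOLD = 20.0

def priority_tier(bid_cpm: float) -> str:
    """High-CPM header bids compete at Sponsorship priority;
    everything else stays in the usual Price Priority tier."""
    return "Sponsorship" if bid_cpm >= CPM_THRESHOLD else "Price Priority"

for cpm in (4.50, 18.75, 22.00, 31.40):
    print(f"${cpm:.2f} -> {priority_tier(cpm)}")
```

In practice this means maintaining parallel sets of line items per tier, with targeting keyed to the bid's CPM bucket.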

Uplift can be measured by tracking CPM and spend per line item priority type (Sponsorship, Standard, Network, etc) on a daily/weekly basis. We’ve doubled fill rates when header bidding lines greater than $20 CPM (or the respective monetary value for that publisher) are set at Sponsorship / top Standard priority.

4. VAST Impression Rate (video ad impressions versus bids)

The IAB VAST specification aims to ensure that video ads run in the way a publisher wants, regardless of which website and device they are being shown on. But the high number of technical integrations increases the likelihood of a VAST error in the time between the advertiser winning the auction and the ad being served. Our experience shows that, across the board, 13% of video supply results in errors and no revenue.

Monitoring the VAST impression rate lets publishers know whether video ads are playing. Then, it helps them to mitigate errors if they are not. Setting up a VAST waterfall chains video ads sequentially to ensure an ad is always shown (by having a fall-back VAST ad unit pre-prepared in case a failure results in the first unit not running). Publishers hesitant about investing in VAST waterfall development work may want to encourage their SSPs to support this technology, as it also positively affects the video impression rate. MediaGrid partners utilizing this technique have seen 14% increases in VAST impression rates.
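The fallback logic of a VAST waterfall can be sketched as a sequential try-next loop. The fetch function, tag names, and the use of `None` to model a VAST error are all hypothetical stand-ins for a real video player integration.

```python
# Sketch of VAST waterfall fallback: try each pre-prepared ad tag in
# order and serve the first one that returns a playable ad. A None
# result models a VAST error or no-fill; names are hypothetical.

def serve_with_fallback(ad_tags, fetch):
    """Return the first ad that plays; None if every tag errors out."""
    for tag in ad_tags:
        ad = fetch(tag)      # e.g. request and parse the VAST document
        if ad is not None:   # error or empty response -> try the next tag
            return ad
    return None

# Hypothetical usage: the primary tag errors, so the fallback serves.
responses = {"primary.xml": None, "fallback.xml": "creative-123"}
result = serve_with_fallback(["primary.xml", "fallback.xml"], responses.get)
print(result)  # creative-123
```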

5. Downstream Match Rate (match rate between DSPs and requests from the publisher)

With no direct end-user relationships (and therefore no first-party data), SSPs and DSPs rely on cookie matching to “sync” the users that are common to all trading partners. The cookie sync determines a match rate, i.e. the percentage of shared known users. This usually averages 50-80% between each downstream participant.  With higher match rates, publishers can command more advertising revenue.

Cookie syncs are often performed “downstream” in the media trading chain (publishers to SSPs, SSPs to DSPs, DSPs to brands). Unfortunately, the reduction in match rate between downstream trading partners (that are not directly connected to the publisher) is compounded at each step. This can result in a loss of revenue for the publisher.

For example, suppose the match rate a publisher has with a connected SSP is 70%. If that SSP has a 60% match rate with a DSP, the overall publisher-to-DSP match rate is only 42% (i.e. 60% of the 70% publisher-to-SSP match rate). From the DSP’s perspective, it may make sense to only bid on the matched 42% of inventory coming from the publisher, saving on hardware costs by not listening to the unmatched traffic.
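The compounding in the example above is multiplicative: each downstream hop multiplies the match rate, so losses stack. The rates are taken from the article's example.

```python
# Compounded match rate across a chain of cookie-sync hops.
# Each hop multiplies the overall rate, so losses compound.

def compounded_match_rate(*hop_rates: float) -> float:
    """Overall match rate across a downstream sync chain."""
    overall = 1.0
    for rate in hop_rates:
        overall *= rate
    return overall

pub_to_ssp = 0.70  # publisher <-> SSP match rate (article's example)
ssp_to_dsp = 0.60  # SSP <-> DSP match rate (article's example)
print(round(compounded_match_rate(pub_to_ssp, ssp_to_dsp), 2))  # 0.42
```

Adding a third hop (say, DSP to brand) would shrink the overall rate again, which is why direct publisher-to-DSP syncs recover so much value.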

Historically, publishers have only deployed syncs with their partnered SSPs, increasing user matching with those SSPs. However, this has the side effect of allowing the SSP to control matching with all other downstream partners, which may not be the best way to achieve the highest match rates. An alternative approach is to include a direct sync with the downstream DSP partners in addition to the SSPs, increasing the match rates with media buyers.

By syncing data between the publisher and every downstream trading partner, such as its top trading DSPs, the publisher can match user data directly with the DSP, improving the match rate and revenue with those key trading partners. Publishers on the MediaGrid using this approach have seen downstream match rates increase almost threefold, with revenue following a similar path.

Measurement for success

When looking to create an optimal header bidding setup, publishers should track as many health metrics as they can. Improving just one of them can increase revenue considerably. And because these metrics can make such a large impact on their business, publishers should not be shy about asking their SSP partners for input, data access, or support.

Armed with more data and solid benchmarks, publishers should create an on-going testing program that regularly and intelligently experiments with the header setup to find the optimal balance for maximum yield (which will change over time).
