You only have to open your Twitter feed or scan the latest headlines to know that it’s a perilous online climate for brands. A volatile news cycle makes it increasingly hard for marketers to find brand-safe inventory to carry their ads. Even premium publishers are running news content that some audiences may find controversial, even as public pressure mounts on brands to demonstrate social responsibility.
For publishers who traffic in newsworthy, attention-grabbing, informative content, this presents both a challenge and an opportunity. Every brand has different sensitivities, and content that could be anathema to one might be pure gold to another. However, in this complex climate, publishers need to carefully consider how to grow their revenue and maximize the value of their inventory while delivering the news their readers expect along with the brand safety marketers need. New research points to a solution.
Finding an Optimal Balance
A new study from Integral Ad Science and a major media publisher found that IAS publisher data could be used to manually optimize placements to target inventory that met advertisers’ strict brand safety standards. This tactic allowed the publisher to secure inventory suitable to a major technology advertiser and unlock the value of its young, tech-savvy audience. To make the process scalable to other advertising partners, the publisher used IAS’ publisher optimization tools to target desirable inventory without increasing the size or workload of its publisher-optimization team.
Automating delivery and prioritizing serving ads adjacent to content that matched the advertiser’s specific brand safety needs resulted in a dramatic decrease in wasted impressions. Previously, nearly 7% of all campaign impressions had been deemed too risky for this specific brand. After implementing an inventory optimization strategy guided by IAS verification data, that figure fell to just 0.2% of all campaign impressions, allowing the publisher to recover substantial revenue, meet advertiser expectations more efficiently, and increase the total value of its premium inventory.
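The scale of the recovery is easy to see with some simple arithmetic. The sketch below uses the study’s reported percentages (7% vs. 0.2% flagged as unsafe); the campaign size is a hypothetical figure for illustration only:

```python
# Hedged sketch: impressions recovered when the unsafe-flag rate drops
# from 7% to 0.2%, per the study's reported figures. Campaign size is
# a made-up example number, not from the study.
def wasted_impressions(total, unsafe_rate):
    """Impressions deemed too risky to monetize for a given brand."""
    return int(total * unsafe_rate)

campaign = 10_000_000  # hypothetical campaign size
before = wasted_impressions(campaign, 0.07)    # pre-optimization
after = wasted_impressions(campaign, 0.002)    # post-optimization
print(before, after, before - after)  # 700000 20000 680000
```

On a ten-million-impression campaign, that difference alone represents hundreds of thousands of impressions returned to monetizable inventory.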
Publisher Priorities
Although brand safety is an industry-wide problem, it has fallen heavily on the shoulders of publishers alone to solve. Increased attention to the issue, driven by the tumultuous news cycle and a tense political climate, comes at a time when many publishers are seeing their revenue squeezed in other ways, leaving little margin for error.
With the duopoly consuming an ever-larger share of total digital advertising dollars, it’s become critical for publishers to maximize the value of their inventory to ensure a healthy bottom line. More importantly, the long-term health of a media publishing business depends on securing the trust of major advertising partners and meeting their inventory quality needs consistently and efficiently.
For some publishers, meeting the challenges of brand safety has meant sanitizing content or allowing advertiser demand to override editorial judgment. This remediation has created unnecessary tension between sales and editorial teams, forcing publishers to choose between quality and revenue. Optimization tools can help publishers better understand their own inventory by mapping impression delivery to ensure that unique advertiser needs are met without compromising content quality. For publishers to succeed, a resolution that benefits both advertisers and end consumers is key.
Most publishers are well aware of the control that Google has long wielded over the digital advertising industry, ever since Google’s acquisition of the now-rebranded DoubleClick back in 2008. The damages caused by Google’s anticompetitive effects are difficult to quantify. However, one of the most concrete manifestations of Google’s monopoly lies in the rise of header bidding — which is literally a hack that the industry has devised to circumvent Google’s powerful position in auctions.
With header bidding, rival exchanges like AppNexus worked with publishers to increase competition for their ads — competition that was previously suppressed by Google. The higher yield that publishers have seen by implementing header bidding represents the quantifiable damages of Google’s hegemony.
The rise in header bidding was heralded as a win for publishers when it came to taking back some control for themselves. Unfortunately, header bidding represented merely a temporary victory. And, ironically, the latency that header bidding introduced may be the leverage Google will use to deliver its next blow against publisher independence. Unless publishers do something about it, that is.
Innovation Stifled, Control Lost
Google’s acquisition of DoubleClick set the stage for its monopoly in the realm of digital advertising. In order to maintain control over each ad impression, Google set up two moats around this business: It made ad serving a low- to no-cost proposition, and it ultimately tightly coupled the use of its ad server and Google’s AdX demand. The ability to even consider adoption of another ad-serving platform was virtually eliminated for the majority of online publishers, which could not afford to lose a steady stream of AdX revenue.
OK, so Google gave publishers a free ad-serving tool and a revenue stream to go along with it. What’s the problem?
The problem, as with all monopolies, is that Google gained an inordinate amount of control. In this case, that control is wielded over the independent publishing landscape. Google’s power stifles innovation and restricts publishers’ options for controlling their own destiny.
Google’s “Solutions” Represent More Control Over Publishers
Prior to header bidding, publishers loaded third-party JavaScript tags in a very slow waterfall after loading DoubleClick. With header bidding, third-party bids arrive beforehand and compete against other line items inside DFP. Unfortunately, this hack of implementing bidding outside of the ad server slows down publisher sites, introducing more latency on top of an already not-so-fast DFP.
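The yield difference between the two models can be illustrated with a simplified auction sketch. This is a conceptual toy with hypothetical bid values, not how any real exchange is implemented; actual auctions involve price floors, timeouts, line-item granularity, and fees:

```python
# Toy comparison of a sequential waterfall vs. a unified header-bidding
# auction (hypothetical numbers for illustration).

# Waterfall: demand sources are called one at a time against fixed price
# floors, and the first source to clear its floor wins -- even if a
# later source would have bid more.
def waterfall(sources):
    for floor, bid in sources:      # ordered by the publisher's priority
        if bid >= floor:
            return floor            # historically cleared at the floor price
    return 0.0

# Header bidding: all sources bid at once, and the highest bid wins.
def header_bidding(sources):
    return max(bid for _, bid in sources)

# (floor, actual bid) per demand source, in waterfall priority order
sources = [(2.00, 2.10), (1.50, 3.40), (1.00, 1.20)]
print(waterfall(sources))       # 2.0 -- first source clears its $2.00 floor
print(header_bidding(sources))  # 3.4 -- the $3.40 bid competes and wins
```

The gap between the two clearing prices is the suppressed competition the article describes: in the waterfall, the $3.40 bidder never even gets a look.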
Many in the industry (like Amazon) are championing server-side bidding as the antidote to header bidding’s latency. But there are problems: Server-side bidding — no matter whose solution — isn’t a comprehensive cure for header bidding’s latency. It still represents a slow two-step process: retrieving the bid and calling the ad server.
Google’s “answer to header bidding” is Exchange Bidding, which integrates the auction and ad serving so the two-step process is eliminated. This truly is a solution from a technology standpoint, but Exchange Bidding puts publishers right back under Google’s thumb.
Forward-looking publishers are skeptical of Exchange Bidding, as they should be. But they face a tough choice: Give in to Google and forfeit control. Or don’t give in and risk Google seizing control under the guise of solving the site latency problems for them. That’s because Google’s monopoly empire also includes Chrome and search. Google can and will enforce ad format and latency rules in Chrome and press its search monopoly on publishers via Accelerated Mobile Pages (AMP). Via Chrome and AMP, Google has the tools to keep tweaking its approach until it becomes just good enough and easy enough to drive capitulation among publishers. Over the long term, it’s a no-win situation.
Time to Put Up or Shut Up
Publishers must not give up the ground they’ve gained in breaking with Google through header bidding. But to keep moving forward, they must solve for user experience issues caused by the site latency they’ve introduced in a way that decreases their reliance on Google.
Publishers must seek and place value in alternative solutions. No longer can they embrace every potential cure that Google offers for their ills no matter how much easier that path may appear to be. Publishers must solve their user experience problems on their own — before Google steps in and solves them for them, again at the expense of publisher independence. There are alternatives, and there are paths forward that don’t rely on putting all of a publisher’s eggs in the Google basket.
It’s time for publishers to take control of their own destinies, once and for all.
One of the least understood portions of the web is becoming one of the most important: dark social. And as the calendar turns from Halloween to the Day of the Dead, it’s time to shine a light into the darkest corner of the web to understand how people are sharing content via messaging services, email, texting, and other platforms that don’t show up easily on analytics programs.
For a long time, referrals that showed up on Google Analytics as “direct” were a mystery. Could that many people really be typing long URLs into their browsers? No. Instead, they were cutting and pasting URLs into an email to friends, or into WhatsApp or Facebook Messenger. Those referrals became known as “dark social” because of our ignorance about their true source. And with public sharing on Facebook and other platforms down, dark social is only growing more important, accounting for an estimated 84% of all sharing, according to RadiumOne.
But there are ways to better gauge where those dark social referrals are coming from. There are also ways to reach people on messaging apps, through chatbots and targeted promotions. Even more important, reaching people on dark social requires a new way of thinking of them in their “tribes” or personal space where gauging sentiment might be more important than marketing a product directly.
The Rise of Dark Social
They are like two ships passing in the night: the rise of messaging apps and the decline of public social sharing. Just looking at Facebook user statistics gives us a stark picture:
From November 2014 to September 2017, Facebook Messenger users went from 500 million to 1.3 billion.
From December 2013 to December 2017, WhatsApp users went from 400 million to 1.5 billion.
From Q1 2017 to Q2 2018, Facebook interactions went from 29.1 billion to 12.8 billion.
Yes, 12.8 billion is still a lot of interactions, but the trend is clear. People are losing trust in social networks and flocking to private messaging services.
As Arik Hanson at Business2Community noted, even Facebook’s Instagram is thinking beyond the public feed. “Let’s look at the direction they’re going: Investing heavily in Stories and direct messaging (both “private” for the most part, when it comes to engagement). Many of the new features added recently revolve around these two areas: poll sticker, slider stickers, polling via DM. These are the areas Instagram is focused on – not the public-facing feed as much.”
How to Get a Handle on Dark Social
There are ways to bring dark social into the light. While vanilla Google Analytics lumps dark social in with direct traffic, there are ways to segment pages to see which ones are getting the most action from dark social. (99Signals has a good primer on that here.)
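The usual segmentation trick rests on a simple heuristic: a visit with no referrer that lands on a deep article URL was probably a pasted link from email or a messaging app, since few people type long URLs by hand, while referrer-less homepage hits stay classified as true direct. A minimal sketch of that logic, using hypothetical log entries:

```python
from urllib.parse import urlparse

def likely_dark_social(referrer, landing_url):
    """Heuristic: no referrer + a deep (non-homepage) landing page
    suggests a pasted link from email or a messaging app."""
    if referrer:                       # a real referrer was passed along
        return False
    path = urlparse(landing_url).path
    return path not in ("", "/")       # deep page with no referrer

hits = [
    ("", "https://example.com/"),                       # true direct
    ("", "https://example.com/2018/10/long-article"),   # likely dark social
    ("https://twitter.com/", "https://example.com/2018/10/long-article"),
]
dark = sum(likely_dark_social(ref, url) for ref, url in hits)
print(dark)  # 1
```

It is a blunt instrument — a bookmark to a deep page will be misclassified — but applied across thousands of hits it gives a workable estimate of dark social’s share.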
And there are always advanced tools like GetSocial and AddThis that can help identify which pages are getting hit the most from dark social. They also show where some of those sources are.
However, tracking down dark social the same as regular social referrals might not be the point. As Alpha Group’s David Cohn points out on MediaPost, “part of dark social’s appeal is its ‘off the grid’ nature, [so] we wouldn’t want to throw the baby out with the bathwater by looking for too much data and meaning among the noise.”
Taking Action to Reach Users
So how does a marketer or publisher get a return on investment by reaching folks on messaging apps, texting or emails? Cohn says the key is to figure out basic demographics on the platforms and the groups who congregate there. He uses the example of a foodie group in New York that could start to send more traffic to food publications and restaurants. “A marketer or publisher wouldn’t know exactly whom to target, but they would have a place,” he wrote. “And they would recognize this community as a gathering place for worthy targets.”
One example from India shows how ROI can grow when targeting dark social. When streaming service iflix wanted to increase its users, it found that its most loyal customers had one important thing in common: They liked sharing viral content and celebrity gossip on messaging apps. According to Business Today’s Sonal Khetarpal, iflix targeted messaging-app users and converted 1 million more paid subscribers in six months, while the cost of acquisition dropped from $25 to $3 per person.
While it’s true marketers are tiring of social media planning that hops from one platform to the next each season, they’re going to have to get serious about messaging apps and dark social. More marketers will have to take chatbots seriously, or at least improve their targeting of messaging-app users. Otherwise, they will miss a massive, growing opportunity as more people become disaffected with public social media spaces.
Digital advertising fraud continues to be an industry-wide problem. Estimates of dollars lost to fraud range from $6 billion to $16 billion annually. Much of the conversation has focused on preventing fraud in open ad exchanges at a national level. But digital ad fraud can affect anyone who buys and sells digital advertising, including local and national publishers and advertisers.
With local markets becoming a target, the ad fraud crisis might seem to be spiraling out of control. But there is hope. The war on ad fraud is winnable. National and local marketers have the power to steer their budgets toward legitimate websites with human audiences. But how do we get there?
Here we unravel this often confusing and complicated topic to uncover how ad fraud is affecting local markets and what publishers can do to minimize its impact.
Why local markets?
Fraudsters go where the money is. This year in the U.S., mobile ad spending is expected to surpass television advertising. Pair this influx of money with a platform where anti-malware and ad fraud detection is immature or non-existent, and mobile advertising platforms and local markets become prime targets for ad fraud.
How does local ad fraud happen?
Bad actors exploit one of the strengths of local markets: local audiences. Recent research from BIA/Kelsey estimates that out of $76 billion in local ad spending, almost $3 billion is lost to geotargeting ad fraud annually. Marketers want to reach their audiences on their mobile devices when they are in specific locations. Fraudsters take advantage of the appeal of geotargeted advertising by faking a device’s location. This means ads run on “devices” that declare GPS coordinates, but that data can be easily faked, allowing fraudsters to steal from locally targeted ad budgets.
Dr. Augustine Fou, a cybersecurity and ad fraud researcher working in partnership with BIA/Kelsey, conducted an experiment that compared mobile ad performance in programmatic campaigns targeting two markets: Houston during a hurricane, and an arbitrary market with a very targeted audience, Butte-Bozeman, Montana.
The ad campaigns specifying these two geographic areas were successfully executed by the programmatic exchanges, but the results were not logical. For example, 100 percent of the ads were loaded on Android devices, and all ad impressions were generated at 5 a.m. Houston time and 4 a.m. Bozeman time. These findings, among others, show that the app usage in these campaigns did not follow the patterns of typical human behavior and were the result of fraudulent activity.
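The anomalies described above — a single device OS across all impressions, all impressions clustered at one hour — are exactly the kind of patterns a simple aggregate check can surface. The sketch below is a hypothetical illustration of that idea, not the researchers’ actual methodology:

```python
from collections import Counter

def flag_anomalies(impressions, max_share=0.95):
    """Flag impression logs whose OS or hour-of-day distribution is
    implausibly concentrated for a real human audience."""
    flags = []
    n = len(impressions)
    for field in ("os", "hour"):
        top, count = Counter(i[field] for i in impressions).most_common(1)[0]
        if count / n >= max_share:
            flags.append(f"{field}={top} accounts for {count / n:.0%} of traffic")
    return flags

# Hypothetical log resembling the patterns described in the experiment:
# every impression on Android, every impression at 5 a.m.
log = [{"os": "Android", "hour": 5} for _ in range(100)]
print(flag_anomalies(log))
```

A genuine local audience spreads activity across devices and waking hours; a log where one value dominates both dimensions is a strong hint of bot traffic.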
This initial test was expanded to include the geotargeting of 16 DMAs through programmatic media buying. While there were variations in the levels of fraud between DMAs, the results showed similar rates of ad fraud (shown in dark red on the following chart) throughout all 16 markets.
“Local publishers have real audiences—the people who live and work in the communities and consume the content they create,” explained Fou. “Marketers need to understand the risks of ad fraud in geotargeted campaigns and consider spending their ad budgets directly with local publishers to avoid those risks.”
What can publishers do?
Legitimate local publishers work hard to create quality content that attracts real human audiences. Publishers who can prove that they have quality, human audiences will be the winners as marketers look to reward those who proactively combat fraud.
Knowledge is power. As more money is spent to reach local audiences, publishers must pay attention to how ad fraud is occurring in their markets and demonstrate that their sites are safe places for advertisers. Understanding how ad fraud occurs and explaining to marketers the issues that make them vulnerable is an important first step, but more action must be taken.
Here are a few steps publishers can take to bring us closer to minimizing ad fraud in the buying/selling process:
1. Follow Best Practices
Know your business practices for monetizing your site, sourcing traffic and choosing vendors, and educate your staff on best practices. Steps to providing transparency include implementing Ads.txt and having a third-party audit.
2. Use Accredited Vendors
Implement tech to prohibit ads from being served to known bots.
3. Actively Manage Media Monetization
Dedicate staff to review traffic patterns and sources.
4. Understand Traffic Sources
Revisit and scrutinize current traffic sourcing practices.
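As a concrete example of the transparency step above, an ads.txt file is simply a plain-text list at the site root (e.g., example.com/ads.txt) declaring which sellers are authorized to sell the publisher’s inventory. The entries below are illustrative placeholders, not real publisher IDs:

```
# example.com/ads.txt -- illustrative placeholder entries only
# format: <ad system domain>, <publisher account ID>, <DIRECT|RESELLER>, <cert authority ID (optional)>
google.com, pub-0000000000000000, DIRECT, f08c47fec0942fa0
appnexus.com, 1234, RESELLER
```

Buyers can crawl this file and refuse inventory offered by any seller not listed, which cuts off a common avenue for domain spoofing.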
If you want to learn more about what you can do to help, download the AAM white paper Three Truths to Confront the Digital Ad Fraud Crisis. By looking deeper into ad fraud and how transactions take place within the ecosystem, it’s clear that ad fraud – on both national and local levels – is conquerable.
Mozilla’s user research, and research conducted by universities and independent experts, consistently shows that most people who use the web do not approve of many of the ways in which their personal data is collected and shared for targeted advertising. In fact, people consider some practices to be invasive, creepy, and even scary. User tracking can also create real harm, including enabling divisive political advertising and affecting health coverage decisions. To help make the web more trustworthy for our users, Firefox is developing, testing, and deploying new privacy features to protect them.
As a leader of Firefox’s product management team, I am often asked how Mozilla decides on which privacy features we will build and launch in Firefox. Since transparency is fundamental to our way of doing business, I’d like to tell you about some key aspects of our process, using our recent Enhanced Tracking Protection functionality as an example.
Mozilla is a mission-driven organization whose flagship product, Firefox, is meant to espouse the principles of our manifesto: “A Pledge for a Healthy Internet.” Firefox is our expression of what it means to have someone on your side when you’re online. We believe in standing up for consumers’ rights while pushing the web forward as a platform that is open and accessible to all. As such, there are a number of careful considerations we must weigh as part of our product development process in order to decide which features or functionality make it into the product, particularly as it relates to user privacy.
A focus on people and the health of the web
Foremost, we focus on people. They motivate us. They are the reason that Mozilla exists and how we have leverage in the industry to shape the future of the web. Through a variety of methods (surveys, in-product studies, A/B testing, qualitative user interviews, formative research) we try to better understand the unmet needs of the people who use Firefox.
Another consideration we weigh is how changes we make in Firefox will affect the health of the web, longer term. Are we shifting incentives for websites in a positive or negative direction? What will the impact of these shifts be on people who rely on the internet in the short term? In the long run? In many ways, before deciding to include a privacy feature in Firefox, we need to apply basic game theory to play out the potential outcomes and changes ecosystem participants are likely to make in response, including developers, publishers and advertisers. The reality is that the answer isn’t always clear-cut.
Balancing the pros and cons
Recently we announced a change to our anti-tracking approach in Firefox in response to what we saw as shifting market conditions and an increase in user demand for more privacy protections. As an example of that demand, look no further than our Firefox Public Data Report and the rise in users manually enabling our original Tracking Protection feature to be Always On (by default, Tracking Protection is only enabled in Private Browsing):
Always On Tracking Protection shows the percentage of Firefox Desktop clients with Tracking Protection enabled for all browsing sessions. (Note: the setting was made available for users to change with the release of Firefox 57.)
The optimal outcomes are clear. People should not be tracked across websites by default and they shouldn’t be subjected to abusive practices or detrimental impacts to their online experience in the name of tracking. However, the challenge with many privacy features is that there are often trade-offs between stronger protections and negative impacts to user experience. Historically this trade-off has been handled by giving users privacy options that they can optionally enable. We know from our research that people want these protections but they don’t understand the threats or protection options enough to turn them on.
We have run multiple studies to better understand these trade-offs as they relate to tracking. In particular, since we introduced the original Tracking Protection in Firefox’s Private Browsing mode in 2015, many people have wondered why we don’t just enable the feature in all modes. The reality is that Firefox’s original Tracking Protection functionality can cause websites to break, which confuses users. Here is a quick sample of the website breakage bugs that have been filed:
Bugs filed related to broken website functionality due to our original Tracking Protection.
In addition, because the feature blocks everything, including ads, from any domain that is also used for tracking, it can have a significant negative impact on small websites and content creators who depend on third-party advertising tools/networks. Because small site owners cannot change how these third-party tools operate in order to adhere to Disconnect’s policy to be removed from the tracker list, the revenue impact may hurt content creation and accessibility in the medium to long term, which is not our intent.
Finding the right tradeoffs
The outcome of these studies caused us to seek new solutions that could be applied by default outside of Private Browsing without detrimental impacts to user experience and without blocking most ads. This is exactly what Enhanced Tracking Protection, being introduced in Firefox 63, is meant to help with. With this feature, you can block cookies and storage access from third-party trackers.
The feature more surgically targets the problem of cross-site tracking without the breakage and wide-scale ad blocking which occurred with our initial Tracking Protection implementation. It does this by preventing known trackers from setting third-party cookies — the primary method of tracking across sites. Certainly, consumers will still be able to decide to block all known trackers under Firefox Options/Preferences if they so choose (note that this may prevent some websites from loading properly, as described above).
Sometimes plans change. So we test
As part of the announcements that ultimately led to Enhanced Tracking Protection, we described how we planned to block trackers that cause long page load times. We’re continuing to hone the approach and experience before deciding to roll this performance feature out to Firefox users. Why? There are a number of reasons: The initial feature design was similar in nature to the original Tracking Protection functionality (including ad blocking), but blocking only occurred after a few seconds of page load. In our testing, there was a high degree of variability as to when various third-party domains would be blocked (even within the same site). This could be confusing for users, since they would see blocking happen inconsistently.
A secondary motivation to block trackers and ads on slow page loads was to encourage websites to speed up how quickly content loads. With the tested design, a number of factors such as the network speed played a part in determining whether or not blocking of trackers and ads would occur on a given site. However, because factors like network speed aren’t in the control of the website, it would pose a challenge for many sites, even if they did their best at speeding up content load. We felt this provided the wrong incentive. It could cause sites to prioritize loading ads over content to avoid the ads from being blocked, a worse outcome from a user perspective. As a result, we are exploring some alternative options targeted at the same outcome, much faster page loads.
Open, transparent roadmaps
We work in the open. We do it because our community is important to us. We do it because open dialog is important to us. We do it because deciding on the future of the web we all have come to rely upon should be a transparent process — one that inherently invites participation. You can expect that we will continue to operate in this manner, being upfront and public about our intent and testing efforts. We encourage you to test your own site with our new features, and let us know about any problems by clicking “Report a Problem” in the Content Blocking section of the Control Center.
I hope this glimpse into our decision-making process around Enhanced Tracking Protection reaffirms that Mozilla stands for a healthy web — one that upholds the right to privacy as fundamental.
About the Author
Peter Dolanjski is the product lead for Firefox on Windows, Mac, and Linux. Along with a team of like-minded internet advocates and a vibrant open source community, he fights for the fundamental right to online privacy through innovative browser features and meaningful privacy defaults.
The Upfronts in New York were always about making big advertising deals before the TV season kicked off. Then came the NewFronts, which focused on digital offerings and innovative formats. Now comes the NewFronts West, an extension of the NewFronts that took place in Los Angeles this month, and which have evolved even further from the original Upfronts. Gone was the talk of making big ad deals. In its place was discussion of new platforms, new formats, and new ways to reach younger audiences with influencers, podcasts and, of course, online video.
In short, the personality and networking surrounding this event fits in perfectly with the ethos of Silicon Valley and Hollywood, the major West Coast hubs that have insinuated themselves into the ad market like never before.
Here are five big takeaways from the inaugural edition of the NewFronts West:
1. Big Ad Deals Take a Back Seat to Relationships and Emerging Models
Closing major ad sales was hardly the focus at this event. One reason was that it took place outside of the traditional media buying season. Also, because there was a mix of digital-only and legacy media publishers, it offered a chance for all parties to stand out with new types of offerings.
In fact, it was the kind of space where people new to the game could come in and introduce themselves. And it was a chance to think less about traditional models of advertising and more about emerging prototypes that are just beginning to gain traction, like branded content, custom sponsorships and opportunities for publishers within Instagram’s IGTV. These avenues are especially interesting to advertising execs as ad-free platforms take hold and audiences become more and more fragmented.
Even the New York Times, one of the more well-known heavy hitters at the event, saw the NewFronts West less as a space to close deals and more as a brand marketing opportunity to talk to people about how the Times works with advertisers. In this way, the event was much more future-focused. The hope, it seemed, was that building relationships now would reap benefits down the line.
2. Podcasts Are Not Slowing Down
Podcasts have exploded over the past few years, with nearly every publisher producing them or at least considering it. But after BuzzFeed and Slate’s Panoply cut back on offerings, it was easy to think that perhaps the hype had gone too far. Judging by the NewFronts West, those cutbacks might only be a hiccup.
The Los Angeles Times, Gallery Media Group and Ellen Digital all announced new podcasts. The L.A. Times announced new podcasts about a drag racer, “Big Willie,” and a reporter’s search for a hospital patient called “Room 20,” following in the footsteps of its popular “Dirty John” podcast co-produced with Wondery.
Podcasts have always been favored for the intimacy that voice offers, and a compelling host is arguably the best brand influencer for an audience. With 92% of consumers swayed by product recommendations, according to a FameBit presentation, podcasts give hosts a more personal way to promote products.
3. Vice Media’s Brand Safety Splash
While there have been many efforts to promote brand safety for advertisers, Vice Media made a big splash at the NewFronts West by countering the widespread use of keyword blocking with a more contextual approach from Oracle Data Cloud.
Vice executives released the findings of an 18-month study about keyword blacklists, which are meant to flag potentially objectionable content for advertisers. It turns out they place LGBTQIA-related keywords high on the list, including “gay,” “transgender” and “bisexual.” The study also found that the keywords “Asian,” “Muslim,” and “interracial” appear at the top of these blacklists.
For a media brand that prides itself on diverse and inclusive programming (as its executives said), Vice’s announcement made a big statement. Vice then said it would be testing Oracle Data Cloud’s contextual brand safety solution, which offers deeper analysis and scores content based on its storytelling context. It also called for other publishers to follow suit. Now we’ll have to see if Oracle Data Cloud lives up to its reputation.
4. Advertisers Will Have to Work Harder to Reach Audiences
Part of why the West Coast’s approach to the NewFronts is so attractive is that advertisers have realized they have to work much harder to secure the attention of their audiences. Thus, they’re more willing to try new platforms and experiment with techniques that are still nascent, like Alexa skills, brand-sponsored podcasts or e-commerce-enabled videos. Platforms like Netflix and HBO have made commercial-free viewing experiences de rigueur, and readers on the internet can also easily block ads they don’t want to see.
What to do? The answer is to create advertising that consumers can’t ignore. Advertising executives anticipate that the traditional model of advertising will evaporate sooner rather than later, so they’re trying to stay ahead of the curve for their own survival and efficiency. That’s a no-brainer. Having influencers on IGTV or other social platforms pitch products for them has become one way to do that.
5. Snapchat Pushes More Scripted Shows
As Facebook has struggled with battles against misinformation (and lost data), Snapchat continues to move forward with its own programming. At NewFronts West, the L.A.-based social app announced a host of new serialized, scripted shows for the launch of what it’s calling Snapchat Originals. It feels a little bit like Netflix’s foray into originals, except the projects will be shot in a vertical format and feature six-second unskippable ads. It’s definitely a big endeavor, and one to watch – even if it means that other tech giants may want to, ahem, copy this strategy as well (looking at you, IGTV).
The bottom line coming out of this year’s NewFronts West is that it’s much more important to be strategic about new advertising models and partnerships than it is to partner with name-brand entities and existing prototypes. While tried-and-true advertising formats may sound good in theory and work for the time being, they may not be viable in the future. This event gave publishers a chance to zero in on those emerging ideas. If an event like the NewFronts West can continue to present itself as the more future-forward alternative to the East Coast’s spring NewFronts — and reiterate the fact that not all agencies and ad shops are in New York — it has the chance to develop an even larger following for future shows.
More marketing companies than ever before have in-house agencies to build brand strategies and brand creative (traditional and digital). In fact, according to a new ANA report, The Continued Rise of the In-House Agency, 78% of client-side marketers report having an in-house agency, a sharp increase from 58% in 2013 and 42% in 2008. The growth is recent: 44% of respondents report their in-house agency was established within the past five years. Eight in 10 marketers (79%) are highly satisfied with their in-house agencies, and a full 20% are completely satisfied.
In-house agencies provide many benefits, with “cost efficiencies” and “having better knowledge of brands” ranking top. “Institutional knowledge” and “dedicated staff” rank second highest as benefits.
In-house agencies provide a range of services, including content marketing, creative strategy, data/marketing analytics, media strategy, programmatic and social media (both creative and media). It’s not surprising that 90% of respondents report their in-house agency’s workload is increasing compared to a year ago, and at least two-thirds say it is increasing “a lot.”
Key trends include:
36% of companies say they are bringing media strategy and planning functions in-house, compared to 22% in 2013.
30% of respondents have in-house programmatic capabilities.
79% of respondents have in-house video production capabilities.
The biggest challenges for in-house agencies are managing growth, workflow, scaling and resources. Importantly, in-house agencies must remember to step outside their own walls and seek an outside perspective on the brand.
While in-house agencies were once the exception, they are now the norm. The survey identifies the benefits of in-house agencies in four key areas:
Strategy: confidentiality, better knowledge of brands and institutional knowledge.
Creative/traditional media: creative expertise, less talent turnover and dedicated staff.
Creative/digital media: speed, nimbleness, integration is easier and creative expertise.
Media Planning/buying: cost savings/efficiencies, full ownership of marketing data and greater control.
Despite the growth of in-house agencies, marketers still work with external agencies. In fact, nine in 10 respondents report working with an outside agency. Nevertheless, marketers continue to invest and develop their expertise because in-house agencies allow them to build infrastructures that help weed out inefficiencies.
YouTube is currently the most popular platform for video content. So, it’s no surprise that when Instagram released IGTV, its stand-alone app for long-form vertical video, everyone’s first reaction was to compare the two. And YouTube pretty much always came out on top.
However, in the past year, YouTube has been under fire for brand safety issues. In fact, at one point 250 brands stopped advertising on YouTube at the same time in order to protect their assets and brand image. To date, many advertisers still believe that YouTube has done a poor job of preventing brand safety issues for its advertisers, even though it has gotten better at responding to them.
This led us to host a panel discussion on October 4th to ponder a key question: With YouTube in its most vulnerable state, and given its ongoing brand safety concerns, is it IGTV’s time to steal advertisers away?
The discussion, entitled “Instagram TV vs. YouTube: Who Will Win the War?”, was moderated by Kerry Flynn of Digiday. The panel consisted of five advertising experts: Kaydee Bridges, VP of Digital & Social Media Strategy at Goldman Sachs; Elijah Harris, VP and Head of Social Media, US at Reprise Digital; Noah Mallin, Managing Partner at Wavemaker North America; Brittany Richter, VP and Head of Social Media, US at iProspect; and myself.
The discussion offered three key takeaways:
1. YouTube still wins the popular vote.
The majority of panelists agreed that brands interested in video content should be on YouTube rather than Instagram’s IGTV. They pointed out YouTube’s benefits: it’s cheaper, long-form content performs better there, and it’s better for sharing. Brittany Richter, iProspect’s Head of U.S. Social Media, offered a different point of view, arguing that Instagram may work for some brands: “If a brand is struggling with their messaging, or just getting started, Instagram’s IGTV is best because they can experiment with different brand messages and see how consumers respond,” she commented.
2. Instagram’s IGTV is transforming and expanding in promising ways.
For Elijah Harris of Reprise Digital, the early IGTV experience was underwhelming. Its focus was to share long-form content from those he followed on Instagram – something he didn’t enjoy. However, Harris added that the platform currently offers more opportunities worth exploring, raising his confidence in its potential for the future.
3. Instagram’s IGTV isn’t quite ready to compete with the big boys.
Elijah Harris suggested that, if it wants to compete with YouTube, IGTV needs meta tags and better discoverability. To go up against Snapchat, he added, it needs curated content and multi-channel network (MCN) participation.
Noah Mallin of Wavemaker believes that, to compete with Snapchat, IGTV needs “breakout content,” even suggesting that it might fill the void left by the demise of Vine. (“I’m still mourning it,” he confessed). Mallin added that YouTube is “stuck in desktop mode” and must quickly adapt for mobile users.
The consensus was clear. IGTV wants to be the most popular platform for video content. However, it still has a lot of growing up to do. Perhaps Instagram will take note of our panel discussion, and we’ll see a more mature IGTV before we know it.