
Personalization without people: What happens when no one can track consumers?

November 14, 2017 | By Aram Zucker-Scharff, Director, Ad Engineering – The Washington Post @Chronotope

The alignment of new laws, reader advocacy, and technology has opened up a challenge to user tracking tools. While some express concern that an end to unbridled tracking will hinder the digital ecosystem, this is an enormous opportunity for publishers to take the lead in building the next generation of personalization technology. However, this evolution in personalization will need to be built on a foundation of editorial metadata, which will drive everything from video playlists to targeted advertising.

A new door opens

A new type of personalization that eschews user-based targeting is coming. In part this will be driven by the fact that many analytics, ad tech, and personalization-tech companies will be deeply affected by the EU’s General Data Protection Regulation (GDPR). An AdAge headline once proclaimed that the GDPR will “rip global digital ecosystem apart.” While that may be a bit alarmist, the GDPR will force companies operating within the EU, and the third-party tools they use, to adhere to a strict opt-in for all tracking. It will also levy severe penalties on companies within EU jurisdiction that fail to do so.

The concerns motivating the law are hardly unique to Europe, yet they have not prompted similar legislation in the U.S. Still, despite results and sentiment that suggest it should be otherwise, targeting remains central to many marketing strategies. Brands have found user-targeted programmatic campaigns less effective than they expected. Consumer groups have formed to protest how user tracking plays out in practice (such as ads appearing with no regard for the content they run against). Individuals on social networks find retargeting approaching a kind of uncanny valley, a point at which its very accuracy becomes deeply discomforting. And we’re even seeing the start of a conversation about user targeting in Congress.

A victory for publishers

Publishers have a mission to treat their readers and viewers ethically. The good news is that smart publishers can (and do) evaluate the user-targeting tools they run on that basis. The personalization of ads and news has become a significant trend, one that many are still chasing. However, the fundamental technology underlying it is challenged by the GDPR. Even if publishers never conduct any business outside of the U.S., the vendors who power personalization tools do. We operate on reams of data but must face a future in which comparatively little is available.

These oncoming shifts in the marketplace shouldn’t frighten publishers. They are likely to hurt the thousands of middleware third-party ad tech companies that have failed to deliver on user targeting for years now while skimming profits. A push to decrease both publisher and advertiser reliance on user targeting is an opportunity.

Metadata to the rescue

Publishers need to take a look at the new generation of tools that can provide the data needed for on-page personalization without ever tracking a user. Metadata standards are improving and adding detail. Our current tools consider article relationships mostly in terms of keywords and categories, but new ways of telling the story about a story could bring about a revolution in personalization.

Almost by accident, social media has pushed thousands of sites to adopt Open Graph, an RDF-based standard built to provide detailed site data. Search engines have long supported and rewarded structured data such as hCard. Improvements to Schema.org, along with growing support for JavaScript Object Notation for Linked Data (JSON-LD) across a variety of platforms, have made it an increasingly promising standard. Unfortunately, Schema.org is complex and lacks good documentation and in-action examples, which has made it challenging to adopt, and publishers have lacked a clear reward for using it. That is changing, however, with Google’s announcement of support for structured data for fact checking.
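
To make that concrete, here is a minimal sketch of what a publisher’s JSON-LD output might look like, using Schema.org’s NewsArticle type and expressed as a small TypeScript helper purely for illustration. The property names come from Schema.org; the helper, its inputs, and the sample values are assumptions made for this example rather than a prescribed implementation.

```typescript
// Minimal sketch: assembling Schema.org NewsArticle metadata as JSON-LD.
// Property names follow Schema.org; the helper and sample values are
// illustrative assumptions only.

interface ArticleInput {
  headline: string;
  authorName: string;
  datePublished: string; // ISO 8601, e.g. "2017-11-14"
  keywords: string[];
}

function buildNewsArticleJsonLd(article: ArticleInput): string {
  const jsonLd = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    headline: article.headline,
    author: { "@type": "Person", name: article.authorName },
    datePublished: article.datePublished,
    keywords: article.keywords.join(", "),
  };
  // Serialized into a <script type="application/ld+json"> tag at render time,
  // where both search engines and on-page tools can read it.
  return JSON.stringify(jsonLd, null, 2);
}

console.log(buildNewsArticleJsonLd({
  headline: "Example headline",
  authorName: "Example Reporter",
  datePublished: "2017-11-14",
  keywords: ["personalization", "metadata"],
}));
```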

Regardless of how successful the fact checking markup project becomes, it demonstrates that page-to-page relational metadata is joining other complex metadata systems as part of the future of publishing. With privacy concerns on the rise, it behooves publishers to start considering these systems as part of the future of personalization.

A structured future

Beyond keywords and tags, there is an embarrassment of new options for metadata that can create a unique experience on each webpage, one more tailored to the moment the reader encounters an article than following them around with cookies ever was. While a reader might have been shopping for shoes yesterday, what they read today may put them in a very different mindset. And the reader of today is a more useful target for personalization than the reader of yesterday.

What can we build on using enhanced metadata? Geographic coordinates could drive a set of recommendations even more relevant than attempting to geotarget the user. Article authorship has worked well for media companies where the byline promises a particular voice. We can build playlist systems that find their next videos through more than title keywords, looking at producer credits, length and related affiliate offers. Content types or referenced URLs in the body of an article can allow personalization tools to recommend other articles that share a particular format, or ads that sell the type of product referenced.
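
As a rough sketch of how such signals could combine, the following ranks candidate articles against the one currently on the page using only the articles’ own metadata: shared keywords, a matching byline, and geographic proximity. The field names, weights, and distance threshold are illustrative assumptions, not a production ranking model.

```typescript
// Rough sketch: score candidate articles against the article on the page,
// using only page-level metadata. Weights and fields are invented examples.

interface ArticleMeta {
  id: string;
  keywords: string[];
  authorName: string;
  latitude?: number;
  longitude?: number;
}

function sharedKeywords(a: ArticleMeta, b: ArticleMeta): number {
  const set = new Set(a.keywords.map(k => k.toLowerCase()));
  return b.keywords.filter(k => set.has(k.toLowerCase())).length;
}

// Approximate distance in kilometers between two coordinates (haversine).
function distanceKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(h));
}

function scoreCandidate(current: ArticleMeta, candidate: ArticleMeta): number {
  let score = sharedKeywords(current, candidate); // topical overlap
  if (current.authorName === candidate.authorName) score += 2; // same byline, same voice
  if (
    current.latitude !== undefined && current.longitude !== undefined &&
    candidate.latitude !== undefined && candidate.longitude !== undefined &&
    distanceKm(current.latitude, current.longitude, candidate.latitude, candidate.longitude) < 50
  ) {
    score += 3; // a story about the same place
  }
  return score;
}

function recommend(current: ArticleMeta, candidates: ArticleMeta[], n = 3): ArticleMeta[] {
  return candidates
    .filter(c => c.id !== current.id)
    .sort((a, b) => scoreCandidate(current, b) - scoreCandidate(current, a))
    .slice(0, n);
}
```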

Planning beyond keywords

Taking advantage of these opportunities will require different ways of thinking about what everyone creates and how it breaks down. It won’t just be up to an SEO expert to drop tags on a page. News organizations will find that optimizing for search, social, or ads requires taking advantage of all the opportunities that complex metadata provides and operating within a larger plan for how metadata should be handled. The editorial and business sides will need to work together to consider the whole of an outlet’s output, prioritize approaches, and build out tools that automate and suggest metadata structures.
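
A small piece of that tooling could be as simple as checking each piece of content against the house metadata plan and flagging the gaps for editors or for an automated suggestion step. Here is a minimal sketch, assuming an invented list of required fields:

```typescript
// Minimal sketch of a metadata "plan" check: given a house list of required
// fields, flag what is missing so editors (or suggestion tools) can fill gaps.
// The required-field list and sample article are invented for illustration.

type ArticleRecord = Record<string, unknown>;

const REQUIRED_FIELDS = ["headline", "author", "keywords", "contentLocation", "videoCredits"];

function missingMetadata(article: ArticleRecord): string[] {
  return REQUIRED_FIELDS.filter(field => {
    const value = article[field];
    return value === undefined || value === null ||
      (Array.isArray(value) && value.length === 0) ||
      (typeof value === "string" && value.trim() === "");
  });
}

// Example: an article with no coordinates and no producer credits gets those
// fields surfaced for the desk to complete.
console.log(missingMetadata({ headline: "Example", author: "A. Reporter", keywords: ["metro"] }));
// -> ["contentLocation", "videoCredits"]
```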

Owners of this process will need to consider personalization along a variety of factors that describe form, format, key ideas and digital objects. They’ll have to build out a framework for how articles connect to each other, one that describes small universes of content. A site that takes full advantage of metadata structures can promise a richer experience for readers, viewers and listeners than any provided through cookie-based tracking: an experience based on in-the-moment intent.

Our current generation of overly targeted ads and recommendations doesn’t just fail to perform; it’s creepy and overpriced. Our audiences deserve more, and our ethics require that we provide it. We have the technology and the industry pressure to deploy successful alternatives. Understanding, expanding, and adapting the use of detailed metadata across the web will build better media companies and a better, open, and well-connected internet.


Aram Zucker-Scharff is the Director for Ad Engineering in The Washington Post’s Research, Experimentation and Development group. He is also the lead developer for the open-source tool PressForward and a consultant on content strategy and newsroom workflows. He was one of Folio Magazine’s 15 under 30 in the magazine media industry. He previously worked as Salon.com’s full-stack developer. His work has been covered multiple times by journalism.co.uk, and he has appeared in The Atlantic, Digiday, Poynter, and Columbia Journalism Review. He has also worked as a journalist, a community manager and a journalism educator.
