Recently, Facebook CEO Mark Zuckerberg published a new company manifesto called “A Privacy-Focused Vision for Social Networking.” In it, Zuckerberg acknowledged Facebook’s poor privacy track record: “frankly we don’t currently have a strong reputation for building privacy protective services.” Who knew he had such a gift for understatement?
Zuckerberg outlined a new approach for the company, in which Facebook will build services that allow people to communicate privately. Apart from acknowledging the extreme examples of “child exploitation, terrorism, and extortion,” Zuckerberg doesn’t say how he will address the very real concern that he may be enabling the more problematic aspects of Facebook’s effects on society to flourish without scrutiny. Nor does he say that he is changing Facebook’s basic model of hoovering up data about its users and monetizing it through marketing.
However, Zuckerberg is clearly responding to growing concerns about how the digital world impacts our privacy. I share these concerns. I also have a suggestion for how they should be addressed.
The Problem With Privacy Policies
Have you read the privacy policies of the services you use? Do you know what data you have agreed to share? If you are like most people, the answer to these questions is "no." According to Deloitte, 91% of us don't read the terms of service. Most of us lack the knowledge, or are unwilling to spend the time, needed to navigate privacy policies and privacy settings. In 2008, one study estimated it would take a typical internet user 244 hours a year to do so! And it is surely worse today.
The Cost of Free
We sort of get that when we use a “free” service, the real “cost” of that service involves giving a company access to data about ourselves. But we only sort of get it because we don’t really know what we are giving up, and therefore we are not in a position to assess whether it’s a good deal or not. Spoiler alert: it’s not.
For starters, the value exchange between consumers and service providers is completely opaque, with only one side of this transaction really understanding what is going on. Without knowing the terms of a deal, how can you be sure it’s fair?
As my colleague Pooja Midha says, with a transparent value exchange, everybody can win. Before the rise of the internet, the value exchange for all advertising was relatively clear: the ads were the cost (or part of the cost) of access to the content. If you wanted the content, you "paid" for it by giving your time and attention to the ads. Sure, we could turn the pages of the magazine or grab a sandwich at a commercial break, but that behavior was expected and factored into the price. Both consumers and marketers had a pretty clear idea of what they were getting and what they were giving up.
The Data Dilemma
With the internet, that all changed. Free services like those offered by Facebook and Google didn't offer content; they offered functionality. And they offered it in exchange for consumers' time and attention paid to advertising. Or, at least, so it seemed. But the unique, bi-directional nature of internet-based services allowed a second source of value to flow to the service provider – data. And data is where the real money is.
A recent survey asked consumers whether they would be willing to share their data for a price, and if so, at what price. A majority (57%) said their data was worth a minimum of $10, while 43% valued it at less than $10. The higher a respondent's income, the more they wanted for their data.
This stands in stark contrast to the true value of their data. Facebook's average annual revenue per user in the US and Canada was reported to be nearly $112 in 2018. Think about that. US consumers think their data is worth around $10, and just one company is generating $112 a year from it.
Transactions in which consumers face huge information imbalances are frequent targets for government regulation. It is no surprise to see the GDPR in Europe, the passage of the California Consumer Privacy Act, and the California Governor's recent call for a "data dividend" to be paid to consumers. While renewed interest in privacy and more transparent value exchange is great, it doesn't address the core issue. Let me explain.
Today, consumers have to decide whether or not they are willing to abide by the myriad privacy policies of all the different companies they engage with on the internet. In my view, that is exactly backwards. It puts the burden on the party with the least information, and in the worst position to understand the details of what it is getting into.
Flipping that burden, so that consumers set their own rules for their data once and the companies they engage with must honor those rules, would be a huge step forward. It would empower consumers, make the digital world easier to navigate, and make the costs of doing so more transparent.
In a "my data, my rules" world, the consumer is empowered and the value exchange is clear, with information and understanding similar on both sides. Marketers can be confident that the data they are using is permissioned, reducing their exposure to brand-damaging charges of "surveillance capitalism." Technology companies, likewise, can engage with consumers more safely, knowing that the data they are collecting is permissioned, which reduces the risk of ugly articles in the New York Times and political fallout.
Personal Data Protection in Practice
There are two ways this change can happen. The first is government regulation. Think of it as the "do not call" list on steroids. It may well be that this is the only way to flip current practice.
The second is for a major technology company to adopt "my data, my rules" voluntarily. With one large player having established this as standard operating procedure, it would be hard for other companies not to follow. What consumer wants to use a service that refuses, as a matter of policy, to follow their preferences?
It is time for a change. Let’s stop hiding behind opacity and complexity. Instead let’s all live by a simple principle on the internet. My data, my rules.