Treat Facebook More Like Cigarettes, a Bit Like Violent Movies, and Not So Much Like Drugs

Facebook’s data sharing fiasco with Cambridge Analytica has dominated recent news. There is little doubt that Facebook deserves blame for bad practices and bad execution.

CEO Mark Zuckerberg has said as much during, and leading up to, his congressional testimony. I do not contest that. What is problematic, however, is the barrage of charges thrown at Facebook and the remedies and policy changes suggested, explicitly or by implication, as a consequence.

Photo: Facebook CEO Mark Zuckerberg testifies before a House congressional committee.

A particularly representative call comes in a recent op-ed in The New York Times, Zeynep Tufekci’s April 9 piece “We Already Know How to Protect Ourselves From Facebook.”

Tufekci argues that in past breaches, however minor, Facebook has merely offered words of apology and gone back to a profit-maximizing approach that does not respect its users’ data privacy. Indeed, she adds, Facebook “then works like mad to scuttle any legislation that might have a favorable impact on the core problem: how our data is harvested, used and profited from.” Tufekci claims that Facebook does so because it is a “reckless company” for which user data is the fuel of its revenue engine.

No matter how evil you think Facebook is or isn’t, the question is what should be done to protect users and reduce the societal cost of private data being thrown around, whether in the public sphere or secretly to companies that will use it in detrimental ways.

In posing the question “What would a genuine legislative remedy look like?” Tufekci goes on to offer solutions that mainly involve placing restrictions on what data users share on Facebook and how they share it. She wants personal data to be shared only under explicit “opt-in” consent, people to be able to access what data is known about them (including through “all forms of inference”—a very tall order if you understand the power of computational inference), and data to expire after being used for limited purposes and time. She also asks for curbs on the use of aggregate, not just individual, data.

There are several problems with the suggested remedies. The first is that they are not so much new as a repackaging of “informed consent,” a term that has dominated data privacy discussions for decades. The European General Data Protection Regulation (GDPR) notes that the concept of informed consent for data is inspired by its healthcare counterpart: merely checking a box is not enough; the user must be properly informed of what data is being collected, what it will be used for, and whom it will be shared with. Doing so in multiple pages of fine print would not, for example, qualify as informed consent.
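
For a sense of what that standard implies in practice, here is an illustrative sketch of consent as a data structure: per-purpose, affirmative, and time-limited. The field names and the mayProcess helper are hypothetical assumptions for illustration, not drawn from the GDPR text or any real compliance API.

```typescript
// Illustrative sketch: "informed consent" as a record the firm must hold.
// One explicit record per purpose, with enough context (what, why, with whom,
// for how long) to show the user was actually informed. A pre-checked box or
// a clause buried in fine print produces no record at all.

interface ConsentRecord {
  dataCategories: string[];   // what is being collected, e.g. ["email", "contacts"]
  purpose: string;            // a single, specific use, not a blanket grant
  recipients: string[];       // who the data will be shared with
  grantedAt: Date | null;     // null means no consent: silence is not opt-in
  expiresAt: Date;            // consent is time-limited, not perpetual
}

// Processing is allowed only under a live, matching, affirmative grant.
function mayProcess(records: ConsentRecord[], category: string, purpose: string): boolean {
  const now = new Date();
  return records.some(r =>
    r.grantedAt !== null &&
    r.expiresAt > now &&
    r.purpose === purpose &&
    r.dataCategories.includes(category)
  );
}

// Example: email may be used for account service, but not for ad targeting.
const records: ConsentRecord[] = [{
  dataCategories: ["email"],
  purpose: "account service",
  recipients: ["the firm itself"],
  grantedAt: new Date(),
  expiresAt: new Date(Date.now() + 90 * 24 * 3600 * 1000), // 90 days from now
}];
console.log(mayProcess(records, "email", "account service")); // true
console.log(mayProcess(records, "email", "ad targeting"));    // false
```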

With this background, the only notable thing about Tufekci’s suggestions is the call for curbs on the exploitation of aggregate data, which makes the useful point that certain kinds of individual data sharing create a negative externality for other people.

But there’s a bigger problem with the approach of placing restrictions: it is the wrong remedy. It doesn’t work. You won’t get better outcomes with personal data by treating it like drugs. Think about how little the U.S. has been able to curb the use of harmful illicit drugs by banning their sale, use and transport. The National Institute on Drug Abuse provides data on the rise of illicit drug use (see chart below).

Chart: National Institute on Drug Abuse

The evidence is even more glaring when you look at drug-related deaths.

Chart: Opioid Data Analysis by the CDC

Moreover, placing severe curbs on data relationships between firms and users is like saying that, because some movies inspire people to commit violent acts like robbing banks, movies should not depict violence or heists. Think of it: no “Ocean’s Eleven,” “Heat,” or even “A Fish Called Wanda.”

What if some firms employ user data in beneficial ways while also being great stewards of that data? What if their users want the most painless ways of sharing data with them, rather than being bothered with explicit notices and approval contracts every time? I love it when Google Now tells me that it is time to leave for my appointment, taking into account that I will bike there—without me telling it so. At the same time, I control what private information I put out on the internet, such as my date of birth.

I’m not saying that informed consent is a bad idea. Yes, it is the firm’s job to help users become aware of the boundaries around use of their data.

But let the market (firms and their users) figure out what these boundaries are. We don’t ban violent movies. Instead, we rely on a movie rating system that labels movies by their levels of violence, nudity, sex and so on. We don’t even necessarily rely on a single rating system. People are free to believe and rely on the MPAA’s rating system, IMDb’s, or someone else’s.

The analogy would then be that firms choose whatever data practices they want. Rating agencies would assign a label or set of labels describing those practices. And users’ web browsers would be smart enough to manage the flow of data in ways that respect the users’ privacy preferences. None of this, in fact, is new; it has been operational for many years (see the W3C’s Platform for Privacy Preferences, P3P, and the data privacy settings in your browser). But adoption by internet users has been severely lacking.
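
To make that concrete, here is a minimal sketch of what such browser-side enforcement could look like, assuming a simplified JSON-style policy label rather than P3P’s actual XML vocabulary. The field names and the three-way verdict are illustrative, not any shipping browser API.

```typescript
// Minimal sketch of a P3P-style check: a site publishes a machine-readable
// label of its data practices, and the browser compares it against the user's
// stored preferences before letting data flow. All names here are illustrative.

type Purpose = "service" | "analytics" | "advertising" | "resale";

interface PrivacyLabel {
  dataCollected: string[];        // e.g. ["email", "location"]
  purposes: Purpose[];            // what the data will be used for
  sharedWithThirdParties: boolean;
  retentionDays: number;          // how long the data is kept
}

interface UserPreferences {
  blockedPurposes: Purpose[];     // never allow these uses
  allowThirdPartySharing: boolean;
  maxRetentionDays: number;
}

type Verdict = "allow" | "warn" | "block";

function evaluate(label: PrivacyLabel, prefs: UserPreferences): Verdict {
  // Hard violations: a blocked purpose, or third-party sharing when disallowed.
  if (label.purposes.some(p => prefs.blockedPurposes.includes(p))) return "block";
  if (label.sharedWithThirdParties && !prefs.allowThirdPartySharing) return "block";
  // Soft violation: data kept longer than the user is comfortable with.
  if (label.retentionDays > prefs.maxRetentionDays) return "warn";
  return "allow";
}

// Example: a site that shares data with advertisers gets blocked outright.
const verdict = evaluate(
  {
    dataCollected: ["email", "location"],
    purposes: ["service", "advertising"],
    sharedWithThirdParties: true,
    retentionDays: 365,
  },
  {
    blockedPurposes: ["advertising", "resale"],
    allowThirdPartySharing: false,
    maxRetentionDays: 90,
  }
);
console.log(verdict); // "block"
```

The machinery itself is simple; what has been missing is sites publishing honest labels, raters vetting them, and users caring enough to set their preferences.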

What all this boils down to is that, while I agree with Tufekci that there is a real problem, it lies with people’s own data practices rather than the firms’. (Those are often bad, too, but only because we allow them to be.) With that, I view the suggestions offered by Tufekci and others as being in the spirit of “let’s protect people from themselves.”

Well, that is fine.

But as a society and free-market democracy, we choose not to protect people from themselves. We know that cigarettes, fatty foods and sodas are bad for health and increase societal health costs. But we don’t ban them. We like the idea of reducing consumption and of controlling it in certain public areas (in the spirit of dealing with that negative externality). But we don’t ban cigarette production, sales or smoking. Instead, we require cigarettes to be sold under a mandatory statutory warning and we curb their advertising. And it works, as data from the Centers for Disease Control and Prevention shows (see chart below).

Vexing problems sound simple with the right analogy. But Facebook (and online data sharing in general) is less like drugs and a bit like violence in movies (or your other favorite societal “bad”).

Bans don’t work. But we do use an information-driven rating system to help people decide what and how to consume.

The better analogy for Facebook is cigarette smoking.

We should use education and warnings to limit the damage. That is the core of a reasonable and workable solution: more warnings, education and self-control. Society and individuals would not be worse off from cutting social media time in half.

Instead of Facebook, go for a walk, watch a play, read a book or learn to program. Your personal data will be a lot safer.

Editor’s note: This blog originally appeared on Medium.