The real reason why Facebook and Google won’t change

Mark Zuckerberg ushered in the new year pledging to address the many woes that now plague his company by “making sure people have control of their information,” and “ensuring our services improve people’s well-being.”

As much as we may want to believe him, Zuckerberg’s sudden turn toward accountability is impossible to take seriously. The problems Zuckerberg cited, including “election interference” and “hate speech and misinformation,” are by-products of the features of social networks, not bugs. How do we explain Facebook’s years of ignoring these developments? Some headlines have blamed the internet. Others criticize Facebook’s management. A powerful November exposé in The New York Times describes Facebook’s executives as having “stumbled,” first ignoring warning signs of meddling during the run-up to the 2016 U.S. presidential election and then trying to conceal them. Other analysts conclude that the problem is Facebook’s size, arguing that it should be broken up into smaller companies. Unfortunately, none of these explanations brings us any closer to grasping the real issue.

Facebook is an exemplary company—if you are a fan of “surveillance capitalism,” my term for businesses that create a new kind of marketplace out of our private human experiences. They hoover up all the behavioral data they can glean from our every move (literally, in terms of tracking our phones’ locations) and transform it with machine intelligence into predictions, as they learn to anticipate and even steer our future behavior. These predictions are traded in novel futures markets aimed at a new class of business customers.

Surveillance capitalism was invented by Google more than a decade ago when it discovered that the “data exhaust” clogging its servers could be combined with analytics to produce predictions of user behavior. At that time, the key action of interest was whether a user might click on an ad. The young company’s ability to convert its data surplus into click-through prognostications became the basis for an unusually lucrative sales process known as ad targeting. In 2008, when Facebook faced a financial crisis, Zuckerberg hired Google executive Sheryl Sandberg to port over this scheme. (Facebook and Google did not respond to a request for comment.)

Google’s and Facebook’s stunning success has inspired companies in insurance, retail, healthcare, finance, entertainment, education, transportation, and more to chase eye-popping surveillance business profit margins. Surveillance capitalists depend on the continuous expansion of their raw material (behavioral data) to drive revenue growth. This extraction imperative explains why Google expanded from search to email to mapping to trying to build entire cities. It’s why Amazon invested millions to develop the Echo and Alexa. It’s why there’s a proliferation of products that begin with the word “smart,” virtually all of which are simply interfaces for the unobstructed flow of previously unavailable behavioral data, harvested from your kitchen to your bedroom.

Each of the issues that Zuckerberg now says he wants to fix has been a longtime feature of the Facebook experience. There are no fewer than 300 significant quantitative research studies on the relationship between social media use and mental health (most of them produced since 2013). Researchers now agree that social media introduces an unparalleled intensity and pervasiveness of “social comparison” processes, especially for young users who are almost constantly online. The results: amplified feelings of insecurity, envy, depression, social isolation, and self-objectification. One major study, published in the American Journal of Epidemiology, concluded: “Facebook use does not promote well-being. . . . Individual users might do well to curtail their use of social media and focus instead on real-world relationships.”

Indeed, Facebook has avidly sought to master social-comparison dynamics to manipulate human behavior. A 2012 article based on a collaboration between Facebook data scientist Adam Kramer and academic researchers—“A 61-Million-Person Experiment in Social Influence and Political Mobilization”—published in the journal Nature detailed how the company planted voting-related cues in the News Feeds of 61 million Facebook users to leverage social-comparison processes and influence voting behavior in the run-up to the 2010 midterms. The team concluded that its efforts successfully triggered a “social contagion” that influenced real-world behavior, with 340,000 additional votes cast as a result.

Even as that study’s publication unleashed a fierce public debate, the same Facebook data scientist was already collaborating with other academic researchers on a new study, “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.” This time, 689,003 people were exposed to positive and negative emotional cues in their News Feeds. The research team celebrated its success in manipulating users, concluding in its 2014 study: “Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.”

When it comes to elections, others have learned to exploit these harmful methods for their own political ends. The 2016 disinformation efforts around the U.S. and U.K. campaigns were the latest manifestation of a well-known problem that had disfigured elections and discourse in countries as diverse as Indonesia, Colombia, Germany, Myanmar, Uganda, Finland, and Ukraine.

These histories illustrate Facebook’s radical indifference, my term for the formal relationship between surveillance capitalists and their users. Facebook doesn’t care about disinformation, or mental health, or any of the other issues on Zuckerberg’s list of resolutions. Users are not customers, nor are they “the product.” They are merely free sources of raw material. Zuckerberg, Sandberg, and the company’s other top executives are not radically indifferent because they’re evil but because they’re surveillance capitalists, bound by unprecedented economic imperatives to extract behavioral data in order to predict our futures for others’ gain. Facebook does not care because it cannot care, so long as surveillance capitalism is allowed to flourish.

Facebook and other surveillance capitalists don’t want to harm you, but they gladly extract data from your pain. They don’t care if you’re happy, though they’re determined to fabricate the lucrative predictions that spring from your joy. It doesn’t matter what you do, as long as you do it in ways that they can transmute into profit. Once you understand this, it’s not hard to see that every Facebook action that triggers outrage is simply a predictable consequence of this economic perversion’s basic mechanisms.

Occasionally we catch an unobstructed view of radical indifference. In an internal Facebook memo from 2016, leaked last spring, Andrew Bosworth, one of Zuckerberg’s closest advisers, explained that “connection” needs to be understood as an economic imperative, whether it enhances users’ lives or threatens them. “We connect people,” he wrote. “Maybe someone finds love . . . Maybe someone dies in a terrorist attack coordinated on our tools. The ugly truth is that . . . anything that allows us to connect more people more often is de facto good.”

When we consider what is to be done about surveillance capitalism, we tend to rely on earlier efforts to deal with capitalism run amok. Yet this economic model has taken root and flourished during the past two decades despite existing paradigms for privacy law and antitrust. Precisely because it is unprecedented, we need novel remedies. Yes, Facebook should be regulated, starting with the enforcement of the 2011 FTC consent decree intended to oversee its privacy practices. But the threats of surveillance capitalism will not end there.

So let’s call Zuckerberg’s resolutions precisely what they are: features of an unprecedented and rogue capitalism. Then let’s develop the laws, regulatory framework, and new forms of collective action that will interrupt and outlaw these behavioral extraction and modification operations. The internet and the wider realm of digital technologies can be harnessed differently. This work begins with us.

Shoshana Zuboff is the author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power and a professor emerita at Harvard Business School. This is an original essay for Fast Company.