In 1476, about two decades after the publication of the Gutenberg Bible, a merchant named William Caxton built Britain’s first printing press, in a building near Westminster Abbey. The following year, he used it to publish a book, one of the first ever mass-printed in English, called “The Dictes and Sayings of the Philosophers.” (The title was redundant: “dictes” and sayings are the same thing.) The book was a translation of a French anthology, which was a translation of a Latin anthology, which was a translation of a Spanish anthology, which was a translation of an Arabic anthology that had been transcribed from oral tradition in eleventh-century Egypt.
“The Dictes” was what classicists call a doxography—a chapter-by-chapter list of ancient thinkers and what they said, or what they were said to have said. The chapter on Socrates, for example, included a brief summary of his life and death, a few descriptive details (“When he spake he wagged his litil fynger”), and a recitation of his various opinions, including his opinion that philosophy should only be transmitted orally, not through books.
Many of the dictes were mystical aphorisms (“Thought is the myrrour of man, wherein he may beholde his beaute and his filth”), or alarmist diet tips (“Wyne is ennemye to the saule . . . and is like setting fyre to fyre”), or paeans to a deity who was made to sound blandly, anachronistically Christian. Pythagoras, it was reported, instructed his followers “to serve God.” Omitted was the fact that Pythagoras was a pagan who believed in reincarnation and occult numerology. Still, at least Pythagoras was a real person. Some of the other philosophers in “The Dictes,” such as Zalquinus and Tac, probably never existed at all.
As it turns out, the whole volume was shot through with what we would now call fake news. Caxton did not introduce these errors; they were there all along. According to the “Encyclopedia of Arabic Literature,” the Egyptian anthology on which all subsequent translations were based was “highly influential as a source of both information and style,” despite the fact that it was “almost entirely inaccurate, and the sayings themselves highly dubious.”
The standard story about mass printing is a story of linear, teleological progress. It goes like this: Before Johannes Gutenberg invented the printing press, books were precious objects, handwritten by scribes and available primarily in Latin. Common people, most of whom couldn’t afford books and wouldn’t have been able to read them anyway, were left vulnerable to exploitation by powerful gatekeepers—landed élites, oligarchs of church and state—who could use their monopoly on knowledge to repress the masses. After Gutenberg, books became widely available, setting off a cascade of salutary movements and innovations, including but not limited to the Reformation, the Enlightenment, the steam engine, journalism, modern literature, modern medicine, and modern democracy.
This story isn’t entirely wrong, but it leaves out a lot. For one thing, Gutenberg wasn’t the first to use movable type—a Chinese artisan named Bi Sheng had developed his own process, using clay and paper ash, three and a half centuries before Gutenberg was born. For another, information wants to be free, but so does misinformation. The printing press empowered reformers; it also empowered hucksters, war profiteers, terrorists, and bigots. Nor did the printing press eliminate the problem of gatekeepers. It merely shifted the problem. The old gatekeepers were princes and priests. The new ones were entrepreneurs like Gutenberg and Caxton, or anyone who had enough money to gain access to their powerful technology.
From the beginning, Caxton was ambivalent about his status as a gatekeeper. He seemed uneasy even acknowledging his power. In an epilogue, Caxton wrote that, after receiving an English translation of the French version of “The Dictes,” he read the manuscript and “found nothing discordant therein”—well, except for one thing. “In the dictes and sayings of Socrates,” he wrote, the translator “hath left out certain and divers conclusions touching women.” In previous versions, the chapter on Socrates had included a sudden digression into petty misogyny. (“He saw a woman sick, of whom he said that the evil resteth and dwelleth with the evil.” And “he saw a young maid that learned to write, of whom he said that men multiplied evil upon evil.”) In the English translation, the digression was gone.
Should Caxton overrule the translator and restore the original text? Or should he let the censorship stand, implying that, even if such insults were acceptable in ancient Athens or medieval Cairo, they were now beyond the pale? After many sentences of ornate hand-wringing, he tried to have it both ways. He translated the misogynistic passage into English and reproduced it in full. But, instead of placing it in its original context, in the Socrates chapter, he put it in the middle of his epilogue, as if to quarantine it.
As soon as he made his decision, he attempted to rationalize it. In the rest of the epilogue, he seemed to imply that he wasn’t a gatekeeper after all—that, although he was clearly a publisher, his printing press should be treated more like a platform. He was merely serving his customers, he suggested: they deserved to hear all perspectives and make up their own minds. Besides, anyone who was offended should blame Socrates, not Caxton. Better yet, a reader who disliked the passage could “with a pen race it out, or else rend the leaf out of the book.”
In the twentieth century, as early packet-switching networks evolved into the Internet, a generation of futurists and TED talkers emerged, explaining the new system to the laity in a spirit of wide-eyed techno-utopianism. They compared it to a superhighway, to a marketplace of ideas, to a printing press. Anyone who was spending a lot of time on the Internet knew that many parts of it felt more like a dingy flea market, or like a parking lot outside a bar the moment before a fight breaks out. The techno-utopians must have been aware of those parts, too, but they didn’t mention them very often.
In 1998, James Dewar, a policy analyst at the RAND Corporation, wrote a paper called “The Information Age and the Printing Press: Looking Backward to See Ahead.” He had a rosy view of the past, and he extrapolated this into hopeful speculation about the future. “The printing press has been implicated in the Reformation, the Renaissance and the Scientific Revolution,” he wrote. “Similarly profound changes may already be underway in the information age.” Near the end of the paper, he acknowledged that “we are already seeing some of the dark side of networked computers.” He listed a few examples, in bullet-point form: “New and interesting ways of breaking into computer systems”; “Chain letters (that are both illegal and bandwidth intensive)”; “ ‘Trollers’ are posting to newsgroups.” Yet this brief qualification, which appeared in a section called “Afterthoughts,” seemed perfunctory at best. It had no effect on Dewar’s sweeping, optimistic conclusion: that it was “more important to explore the upside of the technology than to protect against the downside,” and, thus, “the Internet should remain unregulated.”
In the ensuing decade, a few nerdy young men created a handful of fast-growing social networks—Myspace, Twitter, Reddit, Facebook. They didn’t pretend to know exactly how social media would be used, and they gave even less thought to how it might be misused. Despite Caxton’s self-justifications, subsequent generations of printers had grown to understand themselves as gatekeepers, and publishing had become an industry defined as much by what it didn’t publish as by what it did. In the new industry of social media, the default setting was reversed. Founders vowed to keep their platforms “content-neutral.” The assumption was that almost all voices, even odious ones, deserved the chance to be amplified.
Steve Huffman, the co-founder and C.E.O. of Reddit, told me that in high school he learned “about Gutenberg, Martin Luther—the democratization of knowledge and power. It was deeply ingrained in me that freedom of expression is important.” Naturally, he moderated the speech on Reddit as minimally as possible. “In the early days, we considered ourselves the anti-gatekeepers—the liberators,” Huffman told me. “Everyone’s feeling, including mine at the time, was, Trust your users. Let them post what they want to post. If it’s bad, it’ll get down-voted.”
For centuries, the meaning of free speech had been refined and reinterpreted in universities, in legislatures, in the courts, in the press. In the early days of Silicon Valley, however, weighty decisions about free speech were more likely to be made in the course of an afternoon, in a cramped conference room full of complimentary snacks, by a small team of harried computer engineers. Often, they had no long-term plan other than hacking together a “minimum viable product,” “shipping” their code as quickly as possible, and then “iterating”—startup euphemisms for what was, essentially, trial and error. “I remember thinking, People in government, on the Supreme Court, are way smarter than me,” Huffman said. “So, if something’s not illegal to say under U.S. law, why should I make it illegal to say on Reddit?”
In 2012, shortly before Facebook went public, one of its S.E.C. filings included an open letter signed by Mark Zuckerberg, the company’s founder and C.E.O. Since Facebook’s launch, in 2004, Zuckerberg had portrayed himself as a Robin Hood figure, snatching power from the gatekeepers and redistributing it to the people. In the letter, he claimed that, around Facebook’s open-plan office, “we often talk about inventions like the printing press and the television—by simply making communication more efficient, they led to a complete transformation of many important parts of society. . . . They encouraged progress. They changed the way society was organized. They brought us closer together.” This wasn’t entirely wrong, but it left out a lot. Still, in fairness to Zuckerberg, he was merely echoing what has long been the dominant narrative about the history of technology—the triumphalist one.
Until recently, Zuckerberg insisted that Facebook was a platform, not a publisher. If some disgruntled teen-ager wanted to quote Socrates’ vituperative opinions about women—or if, for that matter, a teen-ager wanted to share his own vituperative opinions—then who was Zuckerberg to stand in the way? In 2016, a few hours after a private audience with the Pope, Zuckerberg hosted a public question-and-answer event in Rome. He was asked whether he saw himself as an editor. “No,” he said, tittering uncomfortably. “We build tools. We do not produce content.” In some settings, he tried to absolve himself of decision-making power. In others, he acknowledged his power but framed his actions as inherently noble, implying that the freedom to share opinions online was akin to a human right. Sometimes he deployed several dodges, one after another, in the tradition of William Caxton: information wants to be free; besides, people who take offense should blame the author, not the messenger; anyway, the ultimate responsibility lies with each individual reader.
Many early social-media entrepreneurs went to college to study computer science or business, absorbing a respect for free-speech principles through cultural osmosis. Others didn’t finish college at all. One of the few who has read widely in the humanities is Chris Hughes, who was Mark Zuckerberg’s roommate at Harvard before becoming one of Facebook’s first employees. “There was a strong sense back then—certainly you heard it from Mark and the people around him—that wiring the world was good in and of itself,” Hughes said recently. “There was a widespread belief in the inevitable forward march of history. I don’t know that that came from books, or from anywhere in particular—I think it was just understood.” Most people in Silicon Valley wanted to “change the world.” They didn’t bother specifying that they wanted to change it for the better—that part was implied, and, besides, it was supposed to happen more or less automatically. “I remember a ton of conversations in which the introduction of our tools was compared to the advent of the hammer, or the light bulb,” Hughes went on. “We could have compared it to a weapon, too, I suppose, but nobody did.”
“The Printing Press as an Agent of Change,” published in 1979 by the historian Elizabeth Eisenstein, is the seminal account, more than seven hundred pages long, of how mass printing, in Francis Bacon’s phrase, “altered the face and state of the world.” Eisenstein is a thorough scholar, and she is dutiful about lodging the necessary caveats. She acknowledges that many early printers were driven, at least in part, by the profit motive, and that much of what they printed was disinformation or propaganda. Still, even when noting such drawbacks, she tends to couch them in a narrative of redemption. She argues, for example, that many “fraudulent esoteric writings” were, ultimately, “paving the way for a purification of Christian sources later on. Here as elsewhere there is a need to distinguish between initial and delayed effects.” She makes similar claims at other points in the book, downplaying initial effects in favor of taking the long view, even though the initial effects of the printing press included heightened ethnic tensions, the spread of medical misinformation, and about a century’s worth of European religious wars. In other words, even when early printing technology ought to be described as a weapon, Eisenstein treats it more like a light bulb.
“The advent of printing,” Eisenstein writes, “provided ‘the stroke of magic’ by which an obscure theologian in Wittenberg managed to shake Saint Peter’s throne.” The theologian, of course, was Martin Luther. Eisenstein recounts the viral dissemination of Luther’s Ninety-five Theses in some detail. Nowhere, however, does she mention one of Luther’s later works, a treatise called “On the Jews and Their Lies.” “We are at fault in not slaying them,” Luther writes. “I shall give you my sincere advice: first, to set fire to their synagogues or schools. . . . Second, I advise that their houses also be razed and destroyed.” It goes on and on, with an avidity that was shocking even by the standards of the time. Luther’s swan song, published in the year of his death, was a pamphlet called “Warning Against the Jews.”
Luther was not content with verbal abuse, Paul Johnson writes, in “A History of the Jews.” “He got Jews expelled from Saxony in 1537, and in the 1540s he drove them from many German towns.” Johnson adds that Luther’s followers “sacked the Berlin synagogue in 1572 and the following year finally got their way” when the Jews were banned from the entire region. If mass printing was “the stroke of magic” that helped Luther catalyze the Reformation, then it was also the megaphone that enabled anti-Semites to shout “Fire!” in the crowded theatre of Western Europe. According to Johnson, “On the Jews and Their Lies” was “the first work of modern anti-Semitism, and a giant step forward on the road to the Holocaust.”
William Caxton introduced more than a thousand words into the English language, including “concussion,” “voyager,” and “servitude.” Another word that didn’t exist at the time is “Islamophobia,” which can now be used, anachronistically, to describe Caxton’s geopolitical proclivities. In fact, such a description would be an understatement. In 1481, Caxton published “Godeffroy of Boloyne; or, The Siege and Conqueste of Jerusalem.” The book, he explained in an epilogue, had been “translated & reduced out of French into English by me, symple persone Wylliam Caxton, to the end that every Christian man may be better encouraged to enterprise warre for the defense of Christendom.” (He actually wrote “ffrensshe” and “tenterprise,” but some of his spellings have been standardized here for the sake of legibility.) Godeffroy was an eleventh-century crusader who led Christian armies through Nicaea and Constantinople, and into Jerusalem, slaughtering as many Muslims as he could find. In 1481, the Ottoman Empire was expanding toward Jerusalem. Caxton hoped that, by recounting romantic tales of the First Crusade, he could inspire his contemporaries “with strong hand to expelle the Saracens and Turks out of the same, that our Lord might be there served & worshipped of his chosen Christian people.” So much for content neutrality.
This is all a matter of recondite academic debate, until it isn’t. Let’s say it’s 2004 or 2005, and you’re about to start a social-media company. Bandwidth is cheap. Venture capital abounds. Legislators don’t understand your business model well enough to regulate it, and the public isn’t really paying attention. You can do pretty much whatever you want. So what do you do? If you believe wholeheartedly in the inevitable march of progress—if you have no doubt that any communication tool you bestow upon the masses will be used as a light bulb, not as a weapon—then there will be no countervailing force checking your reckless optimism, not to mention your rapacity. If, however, you take the downside risk more seriously—if it crosses your mind that your nifty new light bulb could, say, cause a few liberal democracies to lurch toward tyranny, exacerbate an already acute climate crisis, and heighten nuclear tensions—then you might proceed with a bit more caution.
In 2003, a fifteen-year-old named Christopher Poole created an image board called 4chan, which was built on the principles of anonymity and unrestrained free speech. It grew into a repository for some of the worst that the Internet had to offer: florid racism, violent pornography, screeds from the cohort of misogynists now known as “incels.” (Poole was later hired by Google.) When Poole started banning some of the most egregious 4chan posts—images that verged on child pornography, for example—many users saw this as a violation of their free-speech rights. One such user was Fredrick Brennan. In 2013, high on mushrooms, Brennan decided to create 8chan. He conceived it as an alternative to 4chan, one with an even stauncher commitment to anything-goes content moderation.
Pretty quickly, 8chan became like 4chan, only worse. (Recently, on Vice News, Brennan called 8chan “the butt hole of the Internet”; in an interview with me, he called it “horrifying and depressing.”) This year, three acts of white-supremacist terrorism—armed attacks on a synagogue in Poway, California; on two mosques in Christchurch, New Zealand; and on a Walmart in El Paso, Texas—have been committed by young men who said that they’d been radicalized on 8chan. “You build this thing with good intentions, believing that you’re about to change the world, and then you watch as the body count ticks up to—what is it now, seventy-one?” Brennan said. “It’s like a nightmare.”
Brennan, who lives in the Philippines, left 8chan in 2016. He now builds software for typeface designers. The site’s current owner, Jim Watkins, an American who also lives in the Philippines, has done essentially nothing to rein in the chaos. “Jim seems to think it’s all fun and games,” Brennan told me. In August, Watkins was subpoenaed by the House Committee on Homeland Security, and this month he flew to Washington to give closed-door testimony. In his prepared remarks, he wrote, “My company has no intention of deleting constitutionally protected hate speech.” Brennan told me, “If I could, I’d delete 8chan in a second. It’s way beyond the point of no return.”
In recent years, Steve Huffman, of Reddit, has modified his early approach to content moderation. “It’s one thing to go, ‘We should never ban anything,’ ” he said. “It’s another thing to watch a community use your platform for something that’s obviously harmful and think, O.K., can we actually justify doing nothing about this?”
In 2016, Reddit’s administrators banned a subreddit devoted to the Pizzagate conspiracy theory; in 2018, they banned a subreddit devoted to the QAnon conspiracy theory; in June, they censured The_Donald, a popular pro-Trump subreddit, after several of its members called for armed uprisings in progressive cities such as Portland, Oregon. Huffman told me, “I still think the spread of information through technology has been overwhelmingly positive. But I no longer believe, like I did fourteen years ago, that the negative side effects will disappear by themselves. What we’ve learned, sometimes the hard way, is that it takes a ton of work.”
After more than a decade, the most powerful social-media entrepreneurs, now businessmen in their thirties, finally seem to understand that their imagined techno-utopia is not going to materialize. This realization may be a sign of maturity; it may be a calculated response to internal pressure from investors or a strategy to stave off regulation; or it may be a simple defense mechanism, a reaction to being shamed. Within just a few years, the general public’s attitude toward social media has swerved from widespread veneration to viral fury. This may be one of the few silver linings of the 2016 election: had Hillary Clinton been elected, as most people expected, it’s unlikely that social-media founders would now have as much reason to reckon with what they’ve wrought.
In November, 2018, Mark Zuckerberg posted a note on his Facebook profile. “Many of us got into technology because we believe it can be a democratizing force for putting power in people’s hands,” he wrote. “I believe the world is better when more people have a voice to share their experiences, and when traditional gatekeepers like governments and media companies don’t control what ideas can be expressed.” He announced that he would set up “an independent body” to hear appeals from users who felt that their speech had been unfairly suppressed, or that they’d been insufficiently protected from harassment. The idea seemed to be that Zuckerberg—who once rejected the idea that fake news affected the 2016 election, and who has referred to Holocaust denialism as merely something “that different people get wrong”—should not be entrusted with sole gatekeeping authority over one of the most influential institutions on earth.
The independent board, now nicknamed “the Supreme Court of Facebook,” is expected to begin its work next year. Since it was announced, its brief has expanded: in addition to resolving conflicts, the board’s decisions will help Facebook develop a coherent approach to content moderation in general. Last week, Facebook released an eight-page charter outlining the board’s intentions, a detailed flowchart illustrating how it will make decisions, and a new message from Zuckerberg reaffirming his commitment (“The board’s decision will be binding, even if I or anyone at Facebook disagrees with it”). Kate Klonick, an Internet-law scholar and a professor at St. John’s University, has been allowed to observe more than a hundred hours of internal meetings at Facebook—meetings about what this new body should do, who should be on it, what powers it should have. She described the process as “manically thoughtful.” “It’s sort of like setting up Article III courts entirely from scratch,” she told me, referring to the part of the U.S. Constitution that enumerates the powers of the judicial branch. “Except, if you’re Facebook, you don’t have an Article III, because you don’t have a constitution. Which raises the question: Well, O.K., should we write a constitution? If so, what should be in it?” In the end, she continued, “it’s hard to know whether this will be adequate or not, but I can promise there has never been such an enormous concerted voluntary effort by a private company to jettison part of its power over a public right in human history.”
Zuckerberg hasn’t abandoned his techno-utopianism—his claim that the post-Facebook world “is better,” as he put it in his note in November, is arguable, at best—but his self-assurance has clearly been punctured. “The past two years have shown that without sufficient safeguards, people will misuse these tools to interfere in elections, spread misinformation, and incite violence,” he continued. “One of the most painful lessons I’ve learned is that when you connect two billion people, you will see all the beauty and ugliness of humanity.” As of last week, the note had received forty-one thousand likes, four thousand loves, eight hundred and fifty-six surprised emojis, two hundred and fifty-two laughing emojis, a hundred and fifty-eight angry emojis, eighty-one crying emojis, and more than seven thousand comments.
“Keep up the good work!” a stock trader in Michigan wrote. “Ignore the media, keep improving.”
“Please address the issue of fake ads,” a man in Benin wrote.
“Any concept, however sophisticated will somehow purposely be misused and abused,” a German woman wrote.
“You suck suckberg,” a British woman wrote.
“Let me guess,” a woman in Washington State wrote, “you guys never really thought of how explosive free speech really was did ya??” ♦