“Intelligence supposes goodwill,” Simone de Beauvoir wrote in the middle of the twentieth century. In the decades since, as we have entered a new era of technology risen from our minds yet not always consonant with our values, this question of goodwill has faded dangerously from the set of considerations around artificial intelligence and the alarming cult of increasingly advanced algorithms, shiny with technical triumph but dull with moral insensibility.
In De Beauvoir’s day, long before the birth of the Internet and the golden age of algorithms, the visionary mathematician, philosopher, and cybernetics pioneer Norbert Wiener (November 26, 1894–March 18, 1964) addressed these questions with astounding prescience in his 1954 book The Human Use of Human Beings, the ideas in which influenced the digital pioneers who shaped our present technological reality and have recently been rediscovered by a new generation of thinkers eager to reinstate the neglected moral dimension into the conversation about artificial intelligence and the future of technology.
A decade after The Human Use of Human Beings, Wiener expanded upon these ideas in a series of lectures at Yale and a philosophy seminar at Royaumont Abbey near Paris, which he reworked into the short, prophetic book God & Golem, Inc. (public library). Published by MIT Press in the final year of his life, it won him a posthumous National Book Award the following year, in the newly established category of Science, Philosophy, and Religion.
With an eye to a future in which artificial intelligences begin making human intellectual and moral decisions — a notion light-years ahead of its time in 1964 — Wiener writes:
It is relatively easy to promote good and to fight evil when good and evil are arranged against each other in two clear lines, and when those on the other side are our unquestioned enemies and those on our side our trusted allies. What, however, if we must ask, each time and in every situation, where is the friend and where is the enemy? What, moreover, when we have to put the decision in the hands of an inexorable magic or an inexorable machine of which we must ask the right questions in advance, without fully understanding the operations of the process by which they will be answered?
To ask the right questions, Wiener implies, requires not only a literacy in the language of the asking, both technological and ethical, but also an understanding of the myriad nuances that shade such considerations — subtleties challenging enough for human judgment in many situations and just about impossible to encode in a set of operative rules to be applied indiscriminately, across a variety of contexts, by pre-programmed machines. Half a century later, as variations on the trolley problem cast these questions into sharp relief in considering the technology behind everything from self-driving cars to elder-care AIs, Wiener’s words reverberate with wisdom both disquieting and consolatory. In a passage of sobering lucidity, which today’s overconfident makers of technologically potent yet morally impoverished algorithms would be well advised to heed, Wiener echoes Rachel Carson’s advice to the next generations and writes:
The future offers very little hope for those who expect that our new mechanical slaves will offer us a world in which we may rest from thinking. Help us they may, but at the cost of supreme demands upon our honesty and our intelligence. The world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.
Complement with Nick Cave on music, feeling, and transcendence in the age of artificial intelligence — the most insightful and sensitive contemporary perspective on the paradoxes of AI I’ve encountered — then revisit the prescient, foundational questions of science and ethics Mary Shelley raised in Frankenstein more than a century before Wiener.