In the waning years of the 1800s, during the late Victorian era, some came to believe that human knowledge would soon be perfected. This notion was not entirely unjustified. Science had advanced in great leaps in just a few decades, medicine was finally being researched and practiced in line with evidence, industry had dramatically expanded productive capacity, mechanized transportation had opened the world. Humanity was on the move.
This progress and the confidence that attended it were perhaps most evident in physics; essential concepts in electromagnetism, thermodynamics, and mechanics had transformed the field. In a few short decades, the macroscopic workings of the physical world had been mapped with incredible precision, moving science from a quasi-mystical pursuit of vague eternal truths to a ruthless machine for making accurate predictions about everything that could be observed. This rapid progress had not gone unnoticed by those within the field. Though there were certainly many who felt otherwise, some giants within physics could see the completion of their project in the near distance. In an 1899 lecture the famed physicist Albert A. Michelson was confident enough to say that “the more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote.” Twenty years earlier, his German contemporary Johann Philipp Gustav von Jolly had told a young Max Planck “in this field, almost everything is already discovered, and all that remains is to fill a few unimportant holes.” However cautious their peers may have been, this basic notion – that physics had more or less been solved by the late Victorian era – reflected a real current of opinion.
In biology, too, the development of evolution as a scientific theory was hailed as something like the completion of a major part of the human project; the notion of Darwinian ideas as the skeleton key to the science of life became widespread once evolution was generally accepted as true. Of course there was always more work to be done, but this was another broad scientific concept that felt less like an iterative step in a long march and more like reaching the summit of a mountain. Even better, while Charles Darwin’s On the Origin of Species had not commented on human social issues, his theories were quickly drafted into that effort. “Many Victorians recognised in evolutionary thinking a vision of the world that seemed to fit their own social experience,” the scholar of the Victorian era Carolyn Burdett has written. “Darwin’s theory of biological evolution was a powerful way to describe Britain’s competitive capitalist economy in which some people became enormously wealthy and others struggled amidst the direst poverty.” Thus advances in science increasingly served as a moral backstop for the sprawling inequality of Victorian society.
By the end of the Victorian era the philosopher and polymath Bertrand Russell was busily mapping the most basic elements of mathematics and tying them to formal logic. The possibility of digital computing had been glimpsed, with early proposals that electrical switching circuits could be used to express basic logical operations. If some fields were described as essentially solved, and others as in their infancy, both suggested that humanity had taken some sort of epochal step forward, that we were leaving past ways of life behind.
This sense that the human race had advanced to the precipice of a new world was not merely the purview of scientists. Poetry signaled in the direction of a new era as well, though as is common with poets, they were more ambivalent about it. On the optimistic side, in “The Old Astronomer to His Pupil,” Sarah Williams imagined a meeting with the 16th-century astronomer Tycho Brahe, writing “He may know the law of all things, yet be ignorant of how/We are working to completion, working on from then to now.” She writes confidently that her soul will in time “rise in perfect light,” much like the progress of science. Alfred, Lord Tennyson, the consummate Victorian poet, was more gloomy. He had predicted the utopian outlook in his early work “Locksley Hall,” published in the first five years of Queen Victoria’s reign. The poem finds a lovelorn man searching for a cure for his sadness and imagining the next stage of human evolution. He feels resigned to inhabit a world on the brink of epochal change. He describes this evolution in saying “the individual withers, and the world is more and more.” This gloomy line is certainly pessimistic, but as the editors of Poetry Magazine once wrote, it reflects “that expanding, stifling ‘world’ [which] saw innumerable advances in the natural and social sciences.” The individual was withering because the imposition of progress had grown more and more inescapable.
The late Victorian period also saw the British Empire nearing its zenith, and that empire thought of itself as coterminous with civilization; for the people empowered to define progress, back then, progress was naturally white, and specifically British. The subjugation of so much of the world, the fact that the sun never set on the British Empire, was perceived to be a civilizing tendency that would bring the backwards countries of the world within the penumbra of all of this human advancement. The community of nations itself seemed to be bent towards the Victorian insistence on its own triumph, never mind that only one nation was doing the bending. Queen Victoria’s diamond jubilee in 1897 provoked long lists of the accomplishments of the empire, including the steady path toward democracy that would leave her successors as little more than figureheads.
And then the 20th century happened.
Those familiar with the history of science will note the irony of Michelson’s confident statement. It was the Michelson-Morley experiment, undertaken with his colleague Edward Morley, that famously failed to find any trace of the luminiferous aether, the substance through which light waves supposedly traveled. That null result revealed a hole in the heart of physics; among other things, the puzzle of how light could travel without a medium fed into the case for its dual identity as a particle and a wave, a status once unthinkable. And it was the mysteries posed by that experiment that led Albert Einstein to special relativity, which when published in 1905 undermined those fundamental laws and facts that Michelson had so recently seen as unshakable. The Newtonian physics that had governed our understanding of mechanics still produced accurate results in the macro world, but our understanding of the underlying geometry of the universe in which those laws operated was changed forever. It turned out that what some had thought to be the culmination of physics was the last gasp of a terminally ill paradigm.
This was, ultimately, a happy ending, a story of scientific progress. But I’m afraid the broader history of what followed Victorian triumphalism is not so cheery. In the first half of the 20th century, humanity endured two world wars, both of which leveraged technological advances to produce unthinkable amounts of bloodshed; the efforts of the Nazis to exterminate Jews and others they deemed undesirable, justified through a grim parody of Darwin’s theories; a pandemic of unprecedented scale, which spread so far and fast in part because of a world that grew smaller by the day; the Great Depression, made possible in part through new developments in financial machinery and new frontiers of greed; and the divvying up of the world into two antagonistic nuclear-armed blocs, each founded on moral philosophies that their adherents saw as impregnable and each very willing to slaughter innocents for a modicum of influence.
In every case, the world of ideas that had so recently seemed to promise a utopian age had instead been complicit in unthinkable suffering. Nazism, famously, arose in a country that some in the late Victorian era would have said was the world’s most advanced, and Adolf Hitler’s regime was buttressed not just by eugenics and a fraudulent history of the Jews but by poems and symphonies, the art of the völkisch movement as well as the science of the V-2 rocket. Intellectual progress had led to death in ways both intentional and not; you might find it hard to blame the advance of progress for the Spanish flu, but then the rise of transcontinental railroads and fossil-fueled steamships was key to its spread.
Amidst all this chaos, the world of the mind splintered too. The modern period (as in the period of Modernism, not the contemporary era) is famously associated with a collapse of meaning and the demise of truths once thought to be certain. In the visual arts, painters and sculptors fled the direct depiction of visual reality from which the Impressionists and Expressionists who preceded them had only gingerly leaned away; the result was artists like Mark Rothko and Jackson Pollock, notorious for leaving viewers befuddled. In literature, writers like Virginia Woolf and James Joyce cheerfully broke the rules internal to their own novels whenever it pleased them or, at an extreme as in Finnegans Wake, never bothered to establish them in the first place. Philosophers like Ludwig Wittgenstein dutifully eviscerated the attempts of the previous generation, like those of Russell, to force the world into a structure that was convenient for human uses; in mathematics, Kurt Gödel did the same. The nostrums of common sense were dying in droves, and nobody was saying that the world had been figured out anymore.
Meanwhile, in physics, quantum mechanics would become famous for its dogged refusal to conform to our intuitive understandings of how the universe worked, derived from the experience of living in the macroscopic world. Advanced physics had long been hard to comprehend, but now its high priests were saying openly that it defied understanding. “Those who are not shocked when they first come across quantum theory cannot possibly have understood it,” said quantum mechanics titan Niels Bohr. He was quoted as such in a book by Werner Heisenberg, whose uncertainty principle had established profound limitations on what was knowable at the most elementary foundations of matter. Worse still, Einstein’s 1915 theory of general relativity proved to offer remarkable predictive power when it came to gravity – and could not be reconciled with quantum mechanics, which was also proving to produce unparalleled accuracy in its own predictions about the subatomic world. They remain unreconciled to this day. Michelson and von Jolly had misapprehended their moment as a completion of the project of physics, a few decades before their descendants would identify a fissure within it that no scientist has been able to suture.
And out in the broader world, all of us would come to live in the shadow of the Holocaust.
I’m telling you all of this to establish a principle: don’t trust people who believe that they have arrived at the end of time, or at its culmination, or that they exist outside of it. Never believe anyone who thinks that they are looking at the world from outside of the world, at history from outside of history.
It would be an exaggeration to suggest that the scientists and engineers behind the seminal 1956 Dartmouth conference on artificial intelligence believed that they could quickly create a virtual mind with human-like characteristics, but not as much of an exaggeration as you’d think. The drafters of the paper that called for convening the conference did believe that a “two-month, ten-man study” could result in a “significant advance” in domains like problem-solving, abstract reasoning, language use, and self-improvement. Diving deeper into that document and researching the conference, you’ll be struck by the spirit of optimism and the confidence that these were ultimately mundane problems, if not easy ones. They really believed that AI was achievable in an era in which many computing tasks were still carried out by machines that used paper punch cards to store data.
To give you a sense of the state of computing at the time, that year saw the release of the influential Bendix G-15. One of the first personal computers, the Bendix used tape cartridges that could contain about 10 kilobytes of memory, or 1.5625 × 10⁻⁷ of the storage of the smallest-capacity iPhone on the market today. The mighty Manchester Atlas supercomputer would not become commercially available for six more years. On its release, in 1962, the first Atlas was said to house half of the computing power in the entire United Kingdom; its users enjoyed the equivalent of 96 kilobytes of RAM, or less than some smart lamps you can buy at IKEA.
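For the curious, that fraction assumes the smallest-capacity iPhone sold at the time of writing holds 64 GB, a figure the sentence implies rather than states. Treating a kilobyte as 10³ bytes and a gigabyte as 10⁹ bytes:

$$\frac{10\ \text{KB}}{64\ \text{GB}} = \frac{10 \times 10^{3}\ \text{bytes}}{64 \times 10^{9}\ \text{bytes}} = 1.5625 \times 10^{-7}$$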
Needless to say, the problems the Dartmouth group set out to crack ultimately required more computing power than was available in that era, to say nothing of time, money, and manpower. In the seven decades since that conference, the history of artificial intelligence has largely been one of false hopes and progress that seemed tantalizingly close but always skittered just out of the grasp of the computer scientists who pursued it. The old joke, which updated itself as the years dragged on, was that AI had been 10 years away for 20 years, then 30, then 40….
Ah, but now. I hardly need to introduce anyone to the notion that we’re witnessing an AI renaissance. The past year or so has seen the unveiling of a great number of powerful systems that have captured the public’s imagination – OpenAI’s GPT-3 and GPT-4, and their ChatGPT offshoot; automated image generators like DALL-E and Midjourney; advanced web search engines like Microsoft’s new Bing; and sundry other systems whose ultimate uses are still unclear, such as Google’s Bard system. These remarkable feats of engineering have delighted millions of users and prompted a tidal wave of commentary that rivals the election of Donald Trump in sheer mass of opinion. I wouldn’t know where to begin to summarize this reaction, other than to say that as in the Victorian era almost everyone seems sure that something epochal has happened. Suffice it to say that Google CEO Sundar Pichai’s pronouncement that advances in artificial intelligence will prove more profound than the discovery of fire or electricity was not at all exceptional in the current atmosphere. Ross Douthat of the New York Times summarized the new thinking in saying that “the A.I. revolution represents a fundamental break with Enlightenment science,” which of course implies a fundamental break with the reality Enlightenment science describes. It appears that, in a certain sense, AI enthusiasts want their projections for AI to be both science and fiction.
In the background of this hype, there have been quiet murmurs that perhaps the moment is not so world-altering as most seem to think. A few have suggested that, maybe, the foundations of heaven remain unshaken. AI skepticism is about as old as the pursuit of AI. Critics like Douglas Hofstadter, Noam Chomsky, and Peter Kassan have stridently insisted for years that the approaches that most computer scientists were taking in the pursuit of AI were fundamentally flawed. These critics have tended to focus on the gap between what we know of beings that think (notably humans) and of how that thinking occurs, on the one hand, and the processes that underlie modern AI-like systems on the other.
One major issue is that most or all of the major AI models developed today are based on the same essential approach, machine learning and “neural networks,” which are not much like our own minds, the products of evolution. From what we know, these are machine learning systems that harvest impossible amounts of information to iteratively self-develop internal models, which they then use to generate responses that are statistically likely to satisfy a given prompt. I say “from what we know” because the actual algorithms and processes that make these systems work are tightly guarded industry secrets. (OpenAI, it turns out, is not especially open.) But the best information suggests that they’re developed by mining unfathomably vast datasets, assessing that data through sets of parameters that are also bigger than I can imagine, and then algorithmically developing responses. They are not repositories of information; they are self-iterating response-generators that learn, in their own way, from repositories of information.
Crucially, the major competitors are (again, as far as we know) unsupervised models – they don’t require a human being to label the data they take in, which makes them far more flexible and potentially more powerful than older systems. But what is returned, fundamentally, is not the product of a deliberate process of stepwise reasoning like a human might utilize but the residue of trial and error, self-correction, and predictive response. This has consequences.
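To make that distinction concrete, here is a deliberately tiny sketch, in Python, of what “statistically likely to satisfy a prompt” means in the simplest possible case. It is only an illustration of the flavor of the thing, a toy bigram table of my own invention; it bears no resemblance to the scale or the actual, closely guarded architecture of the commercial systems.

```python
import random
from collections import defaultdict, Counter

# A toy corpus standing in for the unfathomably large datasets the real systems consume.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which token tends to follow which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    """Emit tokens by repeatedly sampling a statistically likely successor."""
    word, output = start, [start]
    for _ in range(length):
        successors = following.get(word)
        if not successors:
            break
        # Sample in proportion to how often each successor appeared after `word`.
        word = random.choices(list(successors), weights=successors.values())[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

The commercial systems replace that little lookup table with learned models of unimaginable size, but the basic posture is the same: the output is a prediction of what plausibly comes next, not the conclusion of an argument.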
If you use Google’s MusicLM to generate music based on the prompt “upbeat techno,” you will indeed get music that sounds like upbeat techno. But what the system returns to you does not just sound like techno in the human understanding of a genre but sounds like all of techno – through some unfathomably complex process, it’s producing something like the aggregate or average of extant techno music, or at least an immense sample of it. This naturally satisfies most definitions of techno music. The trouble, among other things, is that no human being could ever listen to as much music as was likely fed into major music-producing AI systems, calling into question how alike this process is to human songwriting. Nor is it clear if something really new could ever be produced in this way. After all, it is hard to see where genuine novelty would come from in a process built to reproduce what already exists.
The very fact that these models derive their outputs from huge datasets suggests that those outputs will always be derivative, middle-of-the-road, an average of averages. Personally, I find that conversation with ChatGPT is a remarkably polished and effective simulation of talking to the most boring person I’ve ever met. How could it be otherwise? When your models are basing their facsimiles of human creative production on more data than any human individual has ever processed in the history of the world, you’re ensuring that what’s returned feels generic. If I asked an aspiring filmmaker who their biggest influences were and they answered “every filmmaker who has ever lived,” I wouldn’t assume they were a budding auteur. I would assume that their work was lifeless and drab and unworthy of my time.
Part of the lurking issue here is the possibility that these systems, as capable as they are, might prove immensely powerful up to a certain point, and then suddenly hit a hard stop, a limit on what this kind of technology can do. The AI giant Peter Norvig, who used to serve as a research director for Google, suggested in a popular AI textbook that progress in this field can often be asymptotic – a given project might proceed busily in the right direction but ultimately prove unable to close the gap towards true success. These systems have been made more useful and impressive by throwing more data and more parameters at them. Whether generational leaps can be made without an accompanying leap in cognitive science remains to be seen.
Core to complaints about the claim that these systems constitute human-like artificial intelligence is the fact that human minds operate on far smaller amounts of information. The human mind is not “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question,” as Chomsky, Ian Roberts, and Jeffrey Watumull argued earlier this year. The mind is rule-bound, and those rules are present before we are old enough to have assembled a great amount of data. Indeed, this observation, “the poverty of the stimulus” – that the information a young child has been exposed to cannot explain that child’s linguistic capabilities – is one of the foundational tenets of modern linguistics. A two-year-old can walk down a street with far greater understanding of the immediate environment than a self-driving Tesla, without billions of dollars spent, teams of engineers, and reams of training data.
In Nicaragua, in the 1980s, a few hundred deaf children in government schools developed Nicaraguan Sign Language. Against the will of the adults who supervised them, they created a new language, despite the fact that they were all linguistically deprived, most came from poor backgrounds, and some had developmental and cognitive disabilities. A human grammar is an impossibly complex system, to the point that one could argue that we’ve never fully mapped any. And yet these children spontaneously generated a functioning human grammar. That is the power of the human brain, and it’s that power that AI advocates routinely dismiss – that they have to dismiss, are bent on dismissing. To acknowledge that power would make them seem less godlike, which appears to me to be the point of all of this.
The broader question is whether anything but an organic brain can think like an organic brain does. Our continuing ignorance regarding even basic questions of cognition hampers this debate. Sometimes this ignorance is leveraged against strong AI claims, but sometimes in favor; we can’t really be sure that machine learning systems don’t think the same way as human minds because we don’t know how human minds think. But it’s worth noting why cognitive science has struggled for so many centuries to comprehend how thinking works: because thinking arose from almost 4 billion years of evolution. The iterative processes of natural selection have had 80 percent of the history of this planet to develop a system that can comprehend everything found in the world, including itself. There are 100 trillion synaptic connections in a human brain. Is it really that hard to believe that we might not have duplicated its capabilities in 70 years of trying, in an entirely different material form?
“The attendees at the 1956 Dartmouth conference shared a common defining belief, namely that the act of thinking is not something unique either to humans or indeed even biological beings,” Jørgen Veisdal of the Norwegian University of Science and Technology has written. “Rather, they believed that computation is a formally deducible phenomenon which can be understood in a scientific way and that the best nonhuman instrument for doing so is the digital computer.” Thus the most essential and axiomatic belief in artificial intelligence, and potentially the most wrongheaded, was baked into the field from its very inception.
I will happily say: these new tools are remarkable achievements. When matched to the right task, they have the potential to be immensely useful, transformative. As many have said, there is the possibility that they could render many jobs obsolete, and perhaps lead to the creation of new ones. They’re also fun to play with. That they’re powerful technologies is not in question. What is worth questioning is why all of that praise is not sufficient, why the response to this new moment in AI has proven to be so overheated. These tools are triumphs of engineering; they are ordinary human tools, but potentially very effective ones. Why do so many find that unsatisfying? Why do they demand more?
They’re also, of course, triumphs of commerce. As I suggested above, Pichai’s grasping for the most oversaturated comparison he could find, to demonstrate the gravity of the present moment, was not unusual at all; the internet is now wallpapered with similar rhetoric. I understand why Pichai would engage in it, given that he and his company have direct financial incentive to exaggerate what new machine learning tools can do. I also understand why Eliezer Yudkowsky, the stamping, fuming philosopher king of the “rationalist” movement, would say that “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.” Yudkowsky has leveraged impossibly overheated rhetoric about the consequences of artificial intelligence into internet celebrity, a career as a professional Cassandra for an extinction-level event that he can keep imagining further and further into the future. AI, for him, is an identity, and thus his AI politics are identity politics in exactly the conventional sense. There are others like him. And with them at least I can say that money is on the line.
But I’m not sure why so many people who aren’t similarly invested are such strident defenders of AI maximalism. In the last couple of years, a new kind of internet-enabled megafan has invaded online spaces, similar to your Taylor Swift stans or Elon Musk fanboys in their fanatical devotion. The AI fanboys are both triumphalist and resentful, certain that these systems are everything they’ve been made out to be and more and eager to shout down those who suggest otherwise. It would be a mistake, though, to think that they’re celebrating ChatGPT or Midjourney or similar as such. They are, instead, celebrating the possibility of deliverance from their lives.
This is a newsletter about politics and culture. My readers and commenters are habituated to controversy and are able to keep level heads in debates about abortion, the war in Ukraine, LGBTQ rights, and other hot-button issues. Yet I’ve been consistently surprised by how many of them become viscerally unhappy when I question the meaning of recent developments in machine learning, large language models, and artificial intelligence. When I express skepticism about the consequences of this technology, a voluble slice of my readership does not just disagree. They’re wounded. There’s always a little wave of subscription cancellations. This is the condition that captivates me, not the technology as such.
Talk of AI has developed in two superficially-opposed but deeply complementary directions: utopianism and apocalypticism. AI will speed us to a world without hunger, want, and loneliness; AI will take control of the machines and (for some reason) order them to massacre its creators. Here I can trot out the old cliché that love and hate are not opposites but kissing cousins, that the true opposite of each is indifference. So too with AI debates: the war is not between those predicting deliverance and those predicting doom, but between both of those and the rest of us who would like to see developments in predictive text and image generation as interesting and powerful but ultimately ordinary technologies. Not ordinary as in unimportant or incapable of prompting serious economic change. But ordinary as in remaining within the category of human tool, like the smartphone, like the washing machine, like the broom. Not a technology that transcends technology and declares definitively that now is over.
That, I am convinced, lies at the heart of the AI debate – the tacit but intense desire to escape now. What both those predicting utopia and those predicting apocalypse are absolutely certain of is that the arrival of these systems, what they take to be the dawn of the AI era, means now is over. They are, above and beyond all things, millenarians. In common with all millenarians they yearn for a future in which some vast force sweeps away the ordinary and frees them from the dreadful accumulation of minutes that constitutes human life. The particular valence of whether AI will bring paradise or extermination is ultimately irrelevant; each is a species of escapism, a grasping around for a parachute. Thus the most interesting questions in the study of AI in the 21st century are not matters of technology or cognitive science or economics, but of eschatology.
There’s a certain species of doomer environmentalist who talks about their dread of what climate change will do to us in such a way that you cannot possibly miss the intensity of their longing for that exact outcome. I believe that many who are passionate about artificial intelligence are so invested because they believe that AI will rescue them from the disappointment and mundanity of human life. That it will free them from the ordinary. One way or the other.
American life is filled with millenarianism, if you look. You can find it in all manner of places in our politics. Climate change is the most obvious; those doomers I spoke of, for example, tend to evince resentment when it’s pointed out that we’re gradually making progress towards our climate goals, albeit not nearly fast enough. To suggest that climate change might not be stopped but rather attenuated sufficiently that capitalism and jobs and the rent will continue to exist is to rob people of a certain plausible story about how they might leave those things behind. There are climate hawks and climate activists and climate alarmists, and then there are climate millenarians. Like the adherents of Aum Shinrikyo, the Japanese cult that unleashed sarin gas on the Tokyo subway, the climate millenarian believes the world will end in fire; they’re just usually circumspect enough not to call that fire purifying.
Of course, Covid has played into the same desires. The Covid maximalist who still stays at home in 2023, life encased in shrink wrap, is to some extent a person who just never wanted to go back into the world anyway, someone who hopes to never have to emerge blinking into the light of normalcy.
There are many different ways to hide from the world. Recently a particular, particularly bleak application of the new AI has attracted a lot of attention. Replika, an AI chatbot app available on smartphones, combines “a sophisticated neural network machine learning model and scripted dialogue content” to present users with a simulation of an interpersonal relationship. Replika has become the app of choice for a certain kind of lonely person, I’m willing to guess mostly lonely dudes. (In the Google Play store, the app is listed as Replika: My AI Friend.) A brief tour of forums dedicated to Replika demonstrates a passionate, exquisitely sensitive subculture of people who firmly believe that apparitions expressed in code that live in servers and their cellphones are their romantic partners. In March of this year, the creators of Replika stunned and angered their users by disabling the ability to have explicit sexual interactions with their “AI friends.” I found this all to be a little on the nose, a little too perfect of a demonstration of how people are going to respond to these new technologies – by crawling deeper into their digital caves.
Replika and the people who love it might simply seem a sad curio, but I think they in fact point to the fundamental problem with this entire enterprise: it seeks to avoid the human. Replika is a good lens in that way. In recent years we have been forced as a society to grapple with the loneliness and alienation of low-status men, those who may be well-educated and high earners but who are not perceived to have sexual or romantic value. From bronies to the alt-right to incels, our vocabulary and understanding have evolved, but the last decade or so has made it impossible for us to ignore the endlessly growing snowball of resentment and self-hatred that emanates from sad lonely young men online. Several acts of mass violence have seen to that. There’s a group of people who have found each other in the darker corners of the internet, and while the vast majority of them are likely decent enough and just trying to work through understandable pain at feeling rejected, a small number of them seem dedicated to getting revenge on society for their plight. In that they follow in the footsteps of their martyr, Elliot Rodger, who in hindsight was a potent symbol of the 21st century.
So the question, for all of those who think that AI is here and nothing will ever be the same, is this: how will AI solve this problem? It’s not a minor or irrelevant challenge. The incel issue is so sticky because there are no plausible policy fixes, despite dreams of government-assigned girlfriends, and there are no plausible policy fixes because loneliness lives deep in the bestial part of the human animal. Your desperation to connect with other people, in all the ways there are to connect, is part of human evolution’s endowment to you, and I’m afraid you don’t get to refuse that gift. And now modern economic and sociological conditions have weakened the hand of superficially undesirable men relative to women, leading to vast gender disparities in relationship rates among young people, while modern technology has made those men constantly aware of other people’s sexual and romantic fulfillment. For centuries rigid social structures compelled women to partner with men they might not have freely chosen. The erosion of those norms represents genuine social progress and has also provoked a slowly developing crisis.
Against that, the AI future of our dreams can offer, well, Replika. Or ChatGPT, or any number of other programs that will talk to you in ways that their training sets suggest are likely to satisfy your needs. The more voluble partisans of this new era casually talk about the death of loneliness, suggesting that AI companions will simply erase one of the most persistent and sorrowful elements of human life with a little sprinkling of code. I don’t think it’s going to work. I think, instead, that as has often been the case in history, extending to the lower classes a pale imitation of their desires will ultimately only make them more passionately covet the real thing. These Potemkin girlfriends might capture the attention of lonely men, but they will simultaneously stoke their resentment that higher-status men get what they can’t have. The closer these systems come to simulating real human affection, the more obvious the differences will be, an uncanny valley of human endearment. In laboratory experiments baby monkeys will cling to mother figures made of wire and cloth. Human babies can’t.
Besides, lonely men don’t just want to text their girlfriends, they want to fuck them. And even the most optimistic proponents of new technologies don’t pretend that robotics and cybernetics have advanced sufficiently to create a facsimile of human life that can walk and talk and kiss. Such a thing is not coming for a long time. I’m sure there are men out there with both realistic sex dolls and Replika, but I doubt very much that the combination leaves them happy.
The point is not that the inability of artificial intelligence to solve the world’s romantic problems disqualifies the technology. The point is that the current surge of interest in AI is part of a decades-long misalignment between problems and solutions. I focus on the incel problem because it illustrates how false the promise of an AI revolution is, and why. The bitter irony of the digital era has been that technologies that bring us communicatively closer have increased rather than decreased feelings of alienation and social breakdown. It’s hard to imagine how AI does anything other than deepen that condition. The culture of our technology companies, as well as their public relations, has conditioned us to see Silicon Valley as a solution factory for fundamental human problems, rather than an important but inherently limited industry. But there are no technological solutions to social problems.
Again, this conversation is made necessary by the absurdly overheated rhetoric of the moment, a moment when the Federal Trade Commission sees fit to advise AI companies to tone down their marketing. This hysteria deepens longstanding problems. For a quarter century we’ve been promised a technological solution to our educational problems, and for a quarter century ed tech has failed. But these failures never lead to the obvious conclusion; the boundaries are always pushed forward, the insistence that the technology just needs to be further developed. What these claims fail to understand is that the problems in human-computer interactions in education lie with the humans, not the computers, and thus can never be solved on the technology side. Indeed, this misunderstanding is the central folly of our age, the failure to understand that on the other side of every screen is a human being capable of frailty and folly a computer could never understand.
Silicon Valley types like to say that “bits are easy, atoms are hard,” meaning that it’s much easier to solve a problem of code than it is a problem in the material world. Well: no matter what happens from here on out, no matter how great the technological advances, we will live in a world of atoms, not of bits. Our problems are made of atoms and so are the frail forms we shuffle around in. You trudge to your workplace (made of atoms) in boredom and sadness every day because your apartment (made of atoms) costs money and your stomach (made of atoms) wants food (made of atoms). When your boyfriend decided to break up with you for another woman he did so because he craved the atoms that made up her body, and when you cried the tears were made of atoms. And the cancer cells that kill moms and dogs are made up of atoms.
But AI might cure cancer! Who knows? Who knows. Extravagant claims like that unspool around us in an environment of zero intellectual accountability, but sure, that would be great. And then people would die of other causes, taken by heart attacks or car accidents or mass shootings, too many holes in the dyke for AI to fill. The world would be better without cancer, if that were a remotely adult thing to hope for right now. But even then it would be a world in which each and every one of us is bound to die, based on a contract we never willingly signed, enforced on us by the dictates of biology.
Is it unfair to point out that artificial intelligence won’t end death, that I’m holding this technology to an impossible standard? I hardly have to tell you that this standard isn’t mine at all. A charming, modest headline: “AI Can Now Make You Immortal – But Should It?” The most florid of the contemporary fantasies of deliverance through technology imagine that we will soon upload our consciousness to computers and thus live forever, outfoxing our oldest adversary, death. Never mind that we don’t know what consciousness is or how it functions, never mind that the idea of a mind existing independent of its brain is a collapse back into the folk religion of mind-body dualism, never mind that the facsimiles we might upload would have no way to know (or “know”) if they’re actually anything like the real thing. We spend our lives in fear of death. Tech companies looking forward to their IPOs are telling us that we can avoid it. Who are we to argue?
I’m an atheist for several reasons, but the most important is that religion is just too convenient. Our lives feel random and devoid of purpose; God is here to bestow that purpose. We miss those who have died terribly; we will meet them again in the hereafter. We dread the inevitability of nonexistence; religion, always eager to please, reassures us that we will have eternal life. The church of AI, gathering converts by the day, makes just the same kind of promises. I decline.
You might think that this all breaks to the benefit of the AI doomsayers, that whatever does not buttress the case for AI paradise advances the cause of AI Armageddon. But, again – in the most important sense, these are the same. The apocalypse, like the rapture, frees you from the obligation to finish writing those work emails. Yudkowsky talks about the coming dark days with explicit warning but implicit longing; these stories the AI doomers tell are fanciful, romantic. The terminally boring “zombie apocalypse” cliché that pop culture forces on us demonstrates that many people would like nothing more than to experience the death of the world. We have an entire wing of our cultural industry devoted to the end of days. (I’m partial to some of it myself.) Many of us imagine ourselves to be the ones who’ll make it, to be the sole survivors, and those of us who don’t perhaps prefer to imagine the hard stop to everything, the quiet of destruction, Umberto Eco’s “silent desert where diversity is never seen.”
Everything that AI doomers say that artificial intelligence will do is something that human beings could attempt to do now. They say AI will launch the nukes, but the nukes have been sitting in siloes for decades, and no human has penetrated the walls of circuitry and humanity that guard them. They say AI will craft deadly viruses, despite the fact that gain-of-function research involves many processes that have never been automated, and that these viruses will devastate humanity, despite the fact that the immense Covid-19 pandemic has not killed even a single percentage point of the human population. They say that AI will take control of the robot army we will supposedly build, apparently and senselessly with no failsafes at all, despite the fact that even the most advanced robots extant will frequently be foiled by minor objects in their path and we can’t even build reliable self-driving cars. They say that we will see a rise of the machines, like in Stephen King’s Maximum Overdrive, so that perhaps you will one day be killed by an aggressive juicer, despite the fact that these are children’s stories, told for children.
The asteroid that felled the dinosaurs did not finish off life on Earth. The number of steps necessary for nuclear Armageddon is orders of magnitude smaller than the number necessary for the AI apocalypse, and yet somehow its day has not yet come. Bacteria have had untold millennia to finish off our species, but the natural selection that drives their evolution has driven ours too, and we thrive. I assure you, the human animal will survive the systems it theorized and tinkered with and dreamed on and developed, and which we today have absolute and unyielding dominion over by any rational metric imaginable.
The story of Blake Lemoine, the Google engineer who claimed that a chatbot his company had devised was sentient and was summarily fired for doing so, attracted a lot of hype and a good deal of worry. What many lost in the debate was the fact that Lemoine played no role in the development of the chatbot; he was merely a user, a highly impressionable one. In that, he was no different from all the rest of us, trying to sift through true and false in a public discussion rendered nearly incomprehensible by hype. And, like many of us, he appeared willing to risk it all on a fantasy, so desperate was he to believe that he had stepped outside of the world and with it outside of himself.
I am telling you: you will always live in a world where disappointment and boredom are the default state of adult life. I am telling you: you will fear death until it inevitably comes for you. I am telling you: you will have to take out the trash yourself tomorrow and next week and next month and next year, and if after that some type of trash removal robot becomes ubiquitous in American homes, then the time it saves you will in turn be applied to some other kind of quotidian task you hate. Yes, science and technology move busily along. Life gets better. Technology advances. Things change. But we live in a perpetual now, and there is no rescue from our pleasant but enervating lives, and no one is coming to save you, organic or silicon.
I believe that progress is good, which is convenient, since progress is inevitable. But capital-P Progress, the kind that people want to see as their deliverance from all of their unhappiness, the kind that some in the Victorian era thought they had achieved within their racist empire, ruled by a decaying family of aristocrats…. That is the spirit of the relentless, trampling advance that we’re embracing in our yearning for the age of artificial intelligence. It’s an inhuman, uncaring force, both clumsy and destructive, overthought and underdetermined, tied to a child’s ideology, motivated by stock prices and our fear of death. I think we’ve seen it before. And it leads to Auschwitz.
I basically agree with everything you said in this article. IF AI solves many of our societal problems, it's unlikely to do so within the lifetime of anyone currently living; IF AI kills us all, likewise. I completely agree that a lot of the hype expressed by heads of industry and commentators in the AI space has been irresponsible and unmoored from objectivity.
Having said all of that, a point of constructive criticism.
There's a good reason that Bulverism (https://en.wikipedia.org/wiki/Bulverism) is considered a logical fallacy. Are some people AI utopians because AI utopianism serves a psychological need, a desperate craving to escape the mundane drudgery of ordinary life? Maybe! But if we could read their minds and confirm beyond a shadow of a doubt that such a psychological urge played no role in how they arrived at their beliefs - that wouldn't move the needle on "the case for AI utopianism" one iota. Either it's true, or it's false - WHY people believe it bears no relationship to whether it's true or false.* The psychological need that espousing a particular belief fulfils, and the truth or falsity of said belief, are orthogonal, wholly uncorrelated.
I'm not saying that your assertion (that people believe in AI utopianism/apocalypticism in part because it scratches a psychological itch for them) is false. I'm saying that, even if it's true (and it *probably is* for many), it's IRRELEVANT, because a sufficiently motivated debater can ALWAYS come up with a just-so story to explain why the only reason their opponent believes X is because X fulfils some psychological need, perhaps a need that the opponent isn't even consciously aware of. E.g.:
Atheist: "The only reason you're religious is because death and the inherent meaningless of life terrifies you!"
Christian: "The only reason you're an atheist is because you want the freedom to act immorally, without fear of punishment in the afterlife!"
Evolutionist: "The only reason you're a creationist is because the idea that humans are made of matter and are not divine makes you uncomfortable!"
Creationist: "The only reason you're an evolutionist is because you can't tolerate the idea that the human body was designed to perform certain actions and not others!"**
If anyone with five minutes of introspection can come up with a just-so story as to why belief X fulfils some subconscious psychological need for the person espousing it (even for beliefs which are obviously, unambiguously, factually true), it's a fully general argument which can be deployed in any context by anyone with any axe to grind, and is hence meaningless. I think your arguments would be a lot more persuasive if you stayed focused on the object-level question of "Given the current state of the evidence, is AI utopianism/apocalypticism justified?" rather than writing so many words about the meta-level question of "What psychological need does the belief in AI utopianism/apocalypticism fulfil for those who espouse it?" Joe's (sub)conscious motivation for believing X has NOTHING TO DO with whether or not X is true.
*One data point: you yourself have expressed exasperation with self-described Marxists who have never read a word of Marx. It seems fair to say that their motivations for believing in "Marxism" as they understand it must be different from yours. The fact that such people exist hasn't caused you to conclude that Marxism is wrong, nor should it. Or as Daniel Dennett would say, "There's nothing I like less than bad arguments for a view I hold dear." Or indeed Orwell: "As with the Christian religion, the worst advertisement for Socialism is its adherents."
**I'm hoping I passed an intellectual Turing test and it isn't obvious which side of either debate I personally fall on.
This was a fantastic essay. I agree that it is a human failing underlying AI doomerism that we are impatient with the quotidian, and some of us wish that something exciting would happen--if not paradise, then let it be the apocalypse.
But I think two other, more positive human traits are involved too. One is our yen to be in the know, to be in on the secret, to delve deep and learn. So many AI doomers seem to view those of us who aren’t worried about those juicers or paperclip makers or what have you as naive and ignorant. And yet we’re the ones who notice that robots can be defeated by a wrinkle in the carpet, self-driving cars by a traffic cone, and advanced computer processors by a power surge. We regular people may not have the abstruse AI knowledge of Yudkowsky, Ezra Klein, and other AI doomers, but we can observe the actual world and draw our own much more sanguine conclusions.
A second positive human trait the AI doomers share is a protective impulse. I am old enough to have grown up when everyone (except, it seems, me) was afraid of nuclear war. I remember hushed conversations on the playground, when a boy (it was always a boy--and always a very kind boy) would tell us that Minnesota would be the first place the Russians would bomb, because Honeywell was headquartered in Minnesota. These well-intentioned boys would share ideas about what we could do to protect ourselves from the coming catastrophe.
AI doomers remind me so much of these kind, well-intentioned, misguided boys on the playground. They pick up what they think is hidden knowledge (e.g. Minnesota’s alleged vulnerability), they worry about it, and they share their worries with the rest of us blithely oblivious people, not just because they find our world too dull, but because their imaginations are running amok, and they want to quell their anxieties by sharing them.