Note: I wrote this before news broke about Sam Altman’s ouster from OpenAI.
The industry to which I kind of, sort of belong is an odd beast. Everybody’s looking for a story, but the incentives and professional conditions often keep them from seeing one that’s staring them in the face. What gets covered, and what doesn’t, has always confused me in this way. There appears to be so much low-hanging fruit that no one bothers to pick.
I have my own little list of pet stories I keep begging other people to write, and they never get written. (I am, myself, confined to this lovely little gilded cage.) Colleges and universities use GPA adjustments to try to account for grade inflation and inconsistent policies between high schools; this likely has immense consequences for applicants, and surely there’s a large audience for an investigation. In many urban locales, efforts to stop development are not led by the rich white NIMBYs that YIMBYs love to hate but by working-class activists of color who fight in the name of opposing gentrification; there absolutely is a story there. Charter school lottery policies are absurdly inconsistent and opaque, not just from state to state but often within states, and yet they are the basis of both the supposed fairness of the distribution of students and of a lot of research; there absolutely is a story there. And is there no one with prestige and an audience who can examine this utterly bizarre scenario in which David Foster Wallace and Infinite Jest are still fetishized as hate objects by a certain kind of anxious smart kid who doesn’t want to appear to be a different kind of anxious smart kid, 15 years after Wallace’s death? I’m sure there are more if I could think of them.
But what really blows my noodle is how rare AI skepticism still is in the media. One year ago, ChatGPT was opened to the public. The onslaught of overheated and careless rhetoric about our imminent ascent to a new plane of existence (or our imminent extermination) began then and has not slowed since. Sensationalizing is inherent to the financial interests of professional media, after all. (That complaint is old enough to have been made by Charles Dickens, among many other journalists.) And so I’m not at all surprised that there have been so many stories about how nothing will ever be the same, even as we’re all still just living busy little ordinary lives like we always have. What does surprise me is that there hasn’t really been a counterweight to all of that, writers looking at all the froth and seeing that there’s an unfilled need for some skepticism and restraint. It remains the case that the best bet about the future lies in something like the statement “these new ‘AI’ technologies aren’t really artificial intelligence and are unlikely ever to be, but they could have some interesting and moderately significant consequences.” But I see very, very little of that. The Boston Globe has run a few skeptical pieces, including some by me, and every once in a while I see a strong argument that we’re getting way ahead of ourselves. In general, though, there’s a remarkable dearth of restraint.
The New Yorker recently unveiled its new “AI Issue.” There is nary an overall skeptical piece to be found. It just genuinely seems not to occur to the bigwigs at fancy media that there is a place for skepticism about the ultimate impact of this technology, even if (especially if!) they’re sure it will change everything. Here’s some copy from the email the magazine sent announcing the issue, written by editorial director Henry Finder.
A technology becomes an age—the automobile age, the Internet age—when it’s so pervasive that you can’t imagine life without it. Culturally speaking, there’s a before and after. For people of a certain generation, the Internet was once a rumor about bulletin boards and “Usenet”; now they arrive at a countryside bed and breakfast and ask for the Wi-Fi password with their room key. The new age arrives on no specific day; it creeps up slowly, and then pounces suddenly. And so, it seems, with A.I. For some years, it had been a silent partner in the most ordinary aspects of life, from smartphone pics to Netflix recommendations. But once it learned to converse—via ChatGPT, Bard, and the like—millions of people were startled into elation and alarm. Pygmalion had parted her lips.
Golly, Henry! Maybe you should Finder someone who can articulate the very real possibility that this is all going to look embarrassing a few years from now? Even if you aren’t, like me, one of those who questions how much actual impact the internet has had on our society in structural terms, it’s hard to understand why so few people feel compelled to play defense. Or maybe I do understand. Right now, AI hype gets clicks and attention, and since there’s not going to be any one definitive moment when all of this hype gets derailed, but rather a long, slow, embarrassed petering out, no one will ever be forced to confront their predictions that don’t come true.
Of course the New Yorker is not remotely alone in its attachment to ridiculously overheated rhetoric about AI. Perhaps my favorite is Elizabeth Weil’s laughable pick-me notion that Sam Altman of OpenAI is “the Oppenheimer of our age.” Right now, the combined nuclear arsenal of the world is capable of killing a significant percentage of our species, irradiating vast swaths of the earth for generations, and plunging the planet into nuclear winter. All of foreign policy and military strategy is filtered through the prism of nuclear weapons; without Russia’s immense nuclear capability, Vladimir Putin never even attempts to invade Ukraine, and with it, more and more people who formerly draped themselves in yellow and blue are quietly urging Zelensky to cut a deal. That’s how nukes tilt the playing field. The insights developed during the Manhattan Project contributed to an energy technology that should have revolutionized the world and still represents our best hope against climate change, if we only have the wisdom to use it. And Sam Altman is the same as Oppenheimer because… ChatGPT gives 8th graders the ability to generate dreadfully uninspired and error-filled text instead of producing it themselves? What? What? What?
That speaks to the most important point. The question that overwhelms me, and which our journalist class seems totally uninterested in, is simply what AI can do now. Not what AI will do or should do or is projected to do, not an extrapolation or prediction, but a demonstration of something impressive that AI can do today. For it to be impressive, it has to do something that human beings can’t do themselves. I find ChatGPT and the various image generators fun but consistently underwhelming. For one thing, when you see some of their output on social media and it looks impressive, it’s a textbook case of survivorship bias. (They’re not posting all the other outputs that are garbled and useless.) But even were that not the case, you couldn’t point to ChatGPT or MidJourney or the like and call it a truly meaningful advance, because there is nothing they can produce that human beings have not or could not produce themselves. The text ChatGPT produces is not special. The images Dall-E produces are not special. They’re only considered special because a machine made them, which is of obviously limited social consequence. I’m aware that, for example, programmers are finding these tools very useful for faster and more efficient coding. And that’s cool! Could be quite meaningful. But that’s not revolution, it’s refinement. And that’s what we’ve had for the past 60 years or so: various refinements after a hundred years of genuinely radical technological advancement and attendant social change.
Every time I ask people what AI can do now rather than in some indefinite future, it goes something like this: someone will say “AI is curing cancer!,” I’ll ask for evidence, they’ll send a link to a breathless story in Wired or Gizmodo or whatever, I’ll chase down the paper or press release they’re referring to, and it turns out that someone is exploring how AI might someday be used in oncology diagnostics in such a way that some cancers might be caught earlier, maybe. Which could be good, definitely, but is also not happening now, and is not revolutionary change, especially given that we’ve learned in the past few decades that earlier detection does not necessarily increase the odds of survival. Even people who appear to be very well-informed about these issues tend to talk about “runaway AI” and “the singularity” with immense imprecision and a complete lack of appropriate skepticism. It leaves someone like me with nowhere to go; when you can just assert that a radically life-altering event is coming in the future, one which depends upon an immense number of shaky assumptions and which assumes that certain “emergent” leaps necessarily will happen because someone has imagined that they might, well, who can hold you accountable to any tangible reality? And I just don’t agree with the framing of a lot of this stuff. I think that eventually self-driving cars will be the norm, and that will be consequential for society. I look forward to it. But will it be as consequential as the switch from horse-powered transportation to the internal combustion engine, which happened barely more than 100 years ago? Not even close.
There are plenty of people in this industry who are smart critical thinkers and are perfectly capable of appropriate skepticism. Journalism and media lack for neither talent nor integrity. This is just one of those structural problems the news has; what’s the upside for a writer or publication that goes really hard on skepticism about this stuff? I have said that this current, ongoing wave of AI yearning fundamentally stems from the deeply human, deeply understandable desire to escape the mundane. We in the 21st century have the fortune of living with the fruits of an immense leap forward in human living standards; we have the misfortune of living decades after that leap forward sputtered out, embedded in a technological culture that has made an immense number of refinements in recent years but which saw its real era of glory run from 1860ish to 1960ish. Yes, of course, I’d rather have 2020s cancer medicine than 1980s cancer medicine, and I recognize that advances in computing and information sciences (the only true leap forward in my lifetime) are impressive on their own terms. But I am more and more convinced that we’re all living in the Big Normal, a long period of more or less static technological conditions that ensure that basic social functioning will remain the same. And for many people, the Big Normal is terribly disappointing. Even if they’re not particularly unhappy people, they’re desperate to be freed from their ordinary lives, and they’ll take that change in either utopian or apocalyptic form.
Current conditions also resemble those that make so many 30-something journos afraid to criticize TikTok, for example, despite a lot of evidence that it’s corrosive to our young people. People really care about how they come across, especially professional opinion-havers and thought-makers. I’m no exception, obviously. A lot of writers don’t want to criticize TikTok, or the fruits of our helicopter parenting, or anything else associated with youth culture because they don’t want to look old. They’re terrified of getting hit with an “OK Boomer” or an “Old Man Yells at Cloud” jpeg, so they avoid ever appearing to question the youngs. This has genuinely serious consequences for our national conversation on youth culture. I think something similar is happening with AI: people are just so afraid of looking foolish in the long run, of risking being the man who writes a long piece about how pandemics are a thing of the past in late 2019. But of course skepticism and restraint in this domain are valuable even if my own dubiousness turns out to be wrong. And I don’t think I’m wrong.
“Pygmalion had parted her lips.” Christ.
Well, there is this: https://everythingisbiology.substack.com/p/chatgpt-lobster-gizzards-and-intelligence
There was a time when journalists popped bubbles, discomfited the comfortable, and spoke truth to power. Those days appear to be gone; journalism now just supports whatever is popular.