What's Left for Tech?
perhaps AI hype is so intense because there are no other promising horizons
I gave a talk to a class at Northeastern University earlier this month, concerning technology, journalism, and the cultural professions. The students were bright and inquisitive, though they also reflected the current dynamic in higher ed overall - three quarters of the students who showed up were women, and the men who were there almost all sat moodily in the back and didn’t engage at all while their female peers took notes and asked questions. I know there’s a lot of criticism of the “crisis for boys” narrative, but it’s often hard not to believe in it.
At one point, I was giving my little spiel about how we’re actually living in a period of serious technological stagnation - that despite our vague assumption that we’re entitled to constant remarkable scientific progress, humanity has been living with real and valuable but decidedly small-scale technological growth for the past 50 or 60 or 70 years, after a hundred or so years of incredible growth from 1860ish to 1960ish, give or take a decade or two on either side. You’ve heard this from me before, and as before I will recommend Robert J. Gordon’s The Rise & Fall of American Growth for an exhaustive academic (and primarily economic) argument to this effect. Gordon persuasively demonstrates that from the mid-19th to mid-20th century, humanity leveraged several unique advancements that had remarkably outsized consequences for how we live and changed our basic existence in a way that never happened before and hasn’t since. Principal among these advances were the process of refining fossil fuels and using them to power all manner of devices and vehicles, the ability to harness electricity and use it to safely provide energy to homes (which practically speaking required the first development), and a revolution in medicine that came from the confluence of long-overdue acceptance of germ theory and basic hygienic principles, the discovery and refinement of antibiotics, and the modernization of vaccines.
Of course definitional issues are paramount here, and we can always debate what constitutes major or revolutionary change. Certainly the improvements in medical care in the past half-century feel very important to me as someone living now, and one saved life has immense emotional and practical importance for many people. What’s more, advances in communication sciences and computer technology genuinely have been revolutionary; going from the Apple II to the iPhone in 30 years is remarkable. The complication that Gordon and other internet-skeptical researchers like Ha-Joon Chang have introduced is to question just how meaningful those digital technologies have been for a) economic growth and b) the daily experience of human life. It can be hard for people who stare at their phones all day to consider the possibility that digital technology just isn’t that important. But ask yourself: if you were forced to live either without your iPhone or without indoor plumbing, could you really choose the latter? I think a few weeks of pooping in the backyard and having no running water to wash your hands or take a shower would probably change your tune. And as impressive as some recent developments in medicine have been, there’s no question that in simple terms of reducing preventable deaths, the advances seen from 1900 to 1950 dwarf those seen since. To a remarkable extent, continued improvements in worldwide mortality in the past 75 years have been a matter of spreading existing treatments and practices to the developing world, rather than the result of new science.
ANYWAY. You’re probably bored of this line from me by now. But I was talking about this to these college kids, none of whom were alive in a world without widespread internet usage. We were talking about how companies market the future, particularly to people of their age group. I was making fun of the new iPhone and Apple’s marketing fixation on the fact that it’s TITANIUM. A few of the students pushed back; their old iPhones kept developing cracks in their casings, which TITANIUM would presumably fix. And, you know, if it works, that’s progress. (Only time and wear and tear will tell; the number of top-of-the-line phones I’ve gone through with fragile power ports leaves me rather cynical about such things.) Still, I tried to get the students to put that in context with the sense of promise and excitement of the recent past. I’m of a generation that was able to access the primitive internet in childhood but otherwise experienced the transition from the pre-internet world to now. I suspect this is all rather underwhelming for us. When you got your first smartphone, and you thought about what the future would hold, were your first thoughts about more durable casing? I doubt it. I know mine weren’t.
Why is Apple going so hard on TITANIUM? Well, where else does smartphone development have to go? In the early days there was this boundless optimism about what these things might someday do. The cameras, obviously, were a big point of emphasis, and they have developed to a remarkable degree, with even midrange phones now featuring very high-resolution sensors, often with multiple lenses. The addition of the ability to take video that was anything like high-quality, which became widespread a couple years into the smartphone era, was a big advantage. (There’s also all manner of “smart” filtering and adjustments now, which are of more subjective value.) The question is, who in 2023 ever says to themselves “smartphone cameras just aren’t good enough”? I’m sure the cameras will continue to get refined, forever. And maybe that marginal value will mean something, anything at all, in five or ten or twenty years. Maybe it won’t. But no one even pretends that it’s going to be a really big deal. Screens are going to get even more high-resolution, I guess, but again - is there a single person in the world who buys the latest flagship Samsung or iPhone and says, “Christ, I need a higher resolution screen”? They’ll get a little brighter. They’ll get a little more vivid. But so what? So what. Phones have gotten smaller and they’ve gotten bigger. Some gimmicks like built-in projectors were attempted and failed. Some advances like wireless charging have become mainstays. And the value of some things, like foldable screens, remains to be seen. But even the biggest partisans for that technology won’t try to convince you that it’s life-altering.
The processors will get faster. They’ll add more RAM. They’ll generally have more power. But for what? To run what? To do what? To run the games that we were once told would replace our PlayStation and Xbox games, but didn’t? It’s hard to imagine that a phone could feel snappier and more responsive than my current one. (Who exactly is compiling video on a smartphone?) More power will filter down to cheaper phones, over time, which is nice but similarly uninspiring. Of course, the most important development with “internals” will be for greater efficiency, and battery life is ultimately the most important element of any portable device. (No juice, no value.) But ultimately what’s required for real revolutionary change in battery life is some sort of leap in battery technology itself, and many have argued that battery improvements have proven to be the hardest to secure. Besides, we’ll need more battery to power even more useless pixels in our already uselessly high-resolution screens.
Smartphone development has been a good object lesson in the reality that cool ideas aren’t always practical or worthwhile. There have been several attempts to truly make one’s smartphone the “everything device,” such as with Samsung’s DeX technology, which lets you connect your phone to peripherals and use it like a desktop. There was, in those breathless early days, a lot of talk about how people simply wouldn’t own laptops anymore, how your phone would do everything. But it turns out that, for one thing, the keyboard remains an input device of unparalleled convenience and versatility. I have attempted to write things of substance on my phone out of necessity, several times, and then I go back to a (preferably mechanical) keyboard and it feels like heaven. We developed this technology for typewriters and terminals and desktops, it Just Works, and there’s no reason to try and “disrupt” it. And that’s a generalizable condition: everything devices are always going to be operating at a handicap compared to purpose-built tools that are better suited to specific applications. There was a certain logic to thinking, well, I’m carrying around a perfectly capable processor in my pocket, why shouldn’t that be my one processor for everything? But it turns out that big thinking of that type can’t ultimately compete with the ease and usability of laptops which, not coincidentally, have become remarkably cheap.
Instead of one device to rule them all, we developed a norm of syncing across devices and cloud storage, which works well. (I always thought it was pretty funny, and very cynical, how Apple went from calling the iPhone an everything device to later marketing the iPad and Apple Watch.) In other words, we developed a software solution rather than a hardware one. That was the other side of this breathless, hopeful period. In the first decade of widespread smartphone adoption, people were really into apps. There were so many apps! There were entire websites devoted to sharing cool apps. A select few have proven to be useful, long-term. Many, we learned, were just a given company’s mobile website with essentially no unique functionality. And the vast majority proved to be useless crap no one would remember a year after their unveiling. I will always give it up to Google Maps and portable GPS technology; that’s genuinely life-altering, probably the best argument for smartphones as a transformative technology. But let me ask you, honestly: do you still go out looking for apps, with the assumption that you’re going to find something that really changes your life in a significant way? All of the low-hanging fruit has been picked. What do you want to do with your phone that you can’t do right now? You can buy things with your phone; I’ve never really seen the advantage of that compared to carrying a small and light plastic card around, but if that’s your thing, cool. What’s next? What do you want your smartphone to do that it can’t do now? I know I’m a pessimist but I genuinely can’t think of any actually plausible new capabilities that smartphones might have in three years that they don’t have now.
Of course, some people are big VR partisans. I’m deeply skeptical. The brutal failure of Meta’s new “metaverse” is just the latest example of a decades-long consumer resistance to the technology. People were talking about an imminent VR future when I was in middle school, after all. You can now run a serviceable VR experience with a PlayStation and a headset for less than a thousand dollars. So why hasn’t there been broad adoption? I suspect that most people are deeply uncomfortable with a) not being able to perceive their actual physical surroundings and b) wearing something on their face. Problems with nausea and headaches have proven to be persistent even as developers swear they’re being ironed out. You can say that this one is going to become suddenly and vastly more popular with technological refinement, but I’ll believe it when I see it. Then again, maybe I just don’t want VR to become popular, given the potential ugly social consequences. If you thought we had an incel problem now….
The elephant in the room, obviously, is AI. And I don’t want to reprosecute my skepticism; you’ve read it from me several times before. For the record I’ve never said that developments in LLMs and “neural networks” have no potential consequences for our society. It’s just that I think what’s actually remotely plausible within our lifetimes is mostly refinement rather than revolution: useful tools to automate repetitive tasks for human beings, reducing workload on programmers and eliminating some very specific types of work such as analyzing legal documents. There will be some changes to our labor markets, but then again every time technology has been predicted to cause widespread job destruction in the past, those predictions have proven to be untrue. (The trouble is that the specific people whose jobs have been disrupted often face serious personal hardship, even as the overall employment numbers don’t change, but this is a separate issue.) It’s not artificial intelligence. It thinks nothing like a human thinks. There is no reason whatsoever to believe that it has evolved sentience or consciousness. There is nothing at present that these systems can do that human beings simply can’t. But they can potentially do some things in the world of bits faster and cheaper than human beings, and that might have some meaningful consequences. Still, there is no reasonable, responsible claim to be made that these systems are imminent threats to conventional human life as currently lived, whether for good or for bad. IMO. The world of atoms remains undisturbed.
I know I’m never going to convince most people that AI is not coming to rescue them from boredom and disappointment and let them live forever and bring back their beloved childhood dog Rusty and allow them to get kinky on the Holodeck. So let’s just set that aside for now. Let’s mutually agree to consider immediate plausible human technological progress outside of AI or “AI.” What’s coming? What’s plausible? The most consequential developments will be our efforts to address climate change, and we have the potential to radically change how we generate electricity, although electrifying heating and transportation are going to be harder than many seem to think, while solar and wind power have greater ecological costs than people want to admit. But, yes, that’s potentially very very meaningful. From the standpoint of futurism, though, switching to renewables (and, hopefully, nuclear) is a little disappointing - success in that effort arrives when the actual change to most people’s lives is minimal. We’re trying to make driving and owning an electric car as similar as possible to driving and owning an internal combustion engine car, in order to make the change as unthreatening as possible. The same with switching to renewables. It’s another example of how technological growth will still leave us with continuity rather than with meaningful change.
Maybe CRISPR will be a big deal, if we set aside the many ethical complications with using it. Maybe! Maybe not. Time will tell?
Someone was talking to me about SpaceX and all of this rocket science and such that’s happening outside of the auspices of NASA. We had a little talk about the privatization of space, and Elon Musk, and the culture war. But what I kept thinking was, privatizing space… to do what? A manned Mars mission might happen in my lifetime, which is cool. But a Mars colony is a distant dream, one which requires developments that we can barely imagine now. Interstellar travel is pure fantasy. (Yes yes, we’ll cryofreeze people for the immensely long journey, by the way do we have the slightest idea how to “cryofreeze” animal tissue and bring it back to life? Oh, we don’t? Dang.) Maybe it’s cool that SpaceX might build its own analog of the International Space Station, maybe it’s malign. But either way, a world where that happens is indistinguishable from a world where it doesn’t, for the average person. This is why I say we live in the Big Normal, the Big Boring, the Forever Now. We are tragic people: we were born just too late to experience the greatest flowering of human development the world has ever seen. We do, however, enjoy the rather hefty consolation prize that we get to live with the affordances of that period, such as not dying of smallpox. We should bear that immense gift in mind. Because just as in the micro of our own lives, in the macro perspective on the human project, I think we all need to learn to appreciate what we have now, in the world as it exists, at the time in which we actually live. Frankly, I don’t think we have any other choice. Welcome to the Forever Now.