The AI Doomer Scenario Relies on a Key Event That We Have No Reason to Believe Will or Can Actually Happen
call it "runaway AI" or "the singularity" or whatever, it's not a scientific concept
Comments today are going to stick to the topic, and not be about the last couple of posts, on threat of a month-long ban for whoever can’t comply and, eventually, of me just shutting down the comments entirely if the problem persists. So please don’t ruin it for everyone else and keep your comments on-topic.
My search for skepticism in mainstream media coverage of artificial intelligence continues. It seems like such a wide-open lane, an obvious topic to exploit in an industry where every subject has already been saturated with commentary. I have been pointed in the direction of many smart people who write about exactly these topics, but they’re just about universally writers like me, lone voices at independent blogs or newsletters, not people working for the biggest and highest-readership publications. (There are certainly exceptions, and this Ted Chiang piece is a good example, but note that it was published before people got really overheated about this topic.) I’ll say it again: some ambitious young journo should be aggressively pursuing the AI-skeptic beat. There’s a career to be made there.
The claims you hear, after all, are not just that AI is important or that AI will change the economy and the world. The AI doom scenario, whereby a superintelligent AI literally takes over the world and either enslaves or eliminates humanity, is given credulous attention in the stuffiest and most restrained of fancy publications. (The AI utopia scenario is essentially the same prediction, but it gets fewer clicks, so you read about it much less often.) But this scenario requires an entirely unproven leap - the AI “singularity” or runaway AI, the popular notion that once AI reaches a certain level of capability, it will become self-improving and then quickly develop God-like intelligence, at which point it will escape its human-constructed confines and decide to save/destroy the world. Absolutely every part of that is speculative at best, but speculation is necessary to get to such scenarios. Almost no one thinks the large language models or “neural nets” currently in use are self-improving in the way the singularity requires, could achieve individual agency, or could develop both the intention to break out of their current computing silos and the superintelligence necessary to take over the world once they do. So there has to be some huge, sudden development in AI that a) makes self-improvement possible, b) leads to superintelligence, c) inculcates intention in these systems, and d) inspires this superintelligence to want to destroy humanity. And it’s truly wild that our communal conversation on this topic simply accepts all of that as inevitable.
Call it what you will, singularity or runaway AI or whatever - it’s a speculative, theoretical idea, not a well-established scientific concept. I cannot stress enough that there is nothing resembling adequate scientific evidence to predict with any level of confidence that such an event will ever happen, let alone happen soon. While it’s been popularized to death in recent discussions about artificial intelligence - inspired by neat tools that can pull off some useful tricks but which have not had revolutionary impacts by any definition - the runaway AI concept lacks concrete evidence and is based more on extrapolation and hypothetical scenarios than on empirical observation. No computer scientist has worked out equations that prove this will inevitably happen, and there is a constant slippage in this sphere between science fact and science fiction.
Here are some reasons why we might argue that the AI singularity is not a scientific concept with a strong evidentiary basis, in handy numbered list format:
1. Undefined/Unobservable Nature: The AI singularity is often described as the point at which artificial intelligence surpasses human intelligence and becomes self-improving at an exponential rate. But this concept is genuinely hard to define precisely; no clear criteria or observable phenomena have been established to identify when or how it would occur. We don’t know if the singularity can happen in part because we don’t know what the AI singularity really is, in the most essential technical terms. This is of course very useful for journalists chasing pageviews, researchers chasing grant money, and companies chasing a better stock price - the more wiggle room there is in these concepts, the grander the claims about them can be. With no clearly defined criteria for declaring runaway AI and no specific empirical test for determining when one is occurring or has occurred, there’s no way to ever declare specific predictions incorrect. That which cannot be defined cannot be falsified.