168 Comments

FionnM:

I basically agree with everything you said in this article. IF AI solves many of our societal problems, it's unlikely to do so within the lifetime of anyone currently living; IF AI kills us all, likewise. I completely agree that a lot of the hype expressed by heads of industry and commentators in the AI space has been irresponsible and unmoored from objectivity.

Having said all of that, a point of constructive criticism.

There's a good reason that Bulverism (https://en.wikipedia.org/wiki/Bulverism) is considered a logical fallacy. Are some people AI utopians because AI utopianism serves a psychological need, a desperate craving to escape the mundane drudgery of ordinary life? Maybe! But if we could read their minds and confirm beyond a shadow of a doubt that such a psychological urge played no role in how they arrived at their beliefs - that wouldn't move the needle on "the case for AI utopianism" one iota. Either it's true, or it's false - WHY people believe it bears no relationship to whether it's true or false.* The psychological need that espousing a particular belief fulfils, and the truth or falsity of said belief, are orthogonal, wholly uncorrelated.

I'm not saying that your assertion (that people believe in AI utopianism/apocalypticism in part because it scratches a psychological itch for them) is false. I'm saying that, even if it's true (and it *probably is* for many), it's IRRELEVANT, because a sufficiently motivated debater can ALWAYS come up with a just-so story to explain why the only reason their opponent believes X is because X fulfils some psychological need, perhaps a need that the opponent isn't even consciously aware of. E.g.:

Atheist: "The only reason you're religious is because death and the inherent meaninglessness of life terrify you!"

Christian: "The only reason you're an atheist is because you want the freedom to act immorally, without fear of punishment in the afterlife!"

Evolutionist: "The only reason you're a creationist is because the idea that humans are made of matter and are not divine makes you uncomfortable!"

Creationist: "The only reason you're an evolutionist is because you can't tolerate the idea that the human body was designed to perform certain actions and not others!"**

If anyone with five minutes of introspection can come up with a just-so story as to why belief X fulfils some subconscious psychological need for the person espousing it (even for beliefs which are obviously, unambiguously, factually true), it's a fully general argument which can be deployed in any context by anyone with any axe to grind, and is hence meaningless. I think your arguments would be a lot more persuasive if you stayed focused on the object-level question of "Given the current state of the evidence, is AI utopianism/apocalypticism justified?" rather than writing so many words about the meta-level question of "What psychological need does the belief in AI utopianism/apocalypticism fulfil for those who espouse it?" Joe's (sub)conscious motivation for believing X has NOTHING TO DO with whether or not X is true.

*One data point: you yourself have expressed exasperation with self-described Marxists who have never read a word of Marx. It seems fair to say that their motivations for believing in "Marxism" as they understand it must be different from yours. The fact that such people exist hasn't caused you to conclude that Marxism is wrong, nor should it. Or as Daniel Dennett would say, "There's nothing I like less than bad arguments for a view I hold dear." Or indeed Orwell: "As with the Christian religion, the worst advertisement for Socialism is its adherents."

**I'm hoping I passed an intellectual Turing test and it isn't obvious which side of either debate I personally fall on.

Mari, the Happy Wanderer:

This was a fantastic essay. I agree that a human failing underlies AI doomerism: we are impatient with the quotidian, and some of us wish that something exciting would happen--if not paradise, then let it be the apocalypse.

But I think two other, more positive human traits are involved too. One is our yen to be in the know, to be in on the secret, to delve deep and learn. So many AI doomers seem to view those of us who aren’t worried about those juicers or paperclip makers or what have you as naive and ignorant. And yet we’re the ones who notice that robots can be defeated by a wrinkle in the carpet, self-driving cars by a traffic cone, and advanced computer processors by a power surge. We regular people may not have the abstruse AI knowledge of Yudkowsky, Ezra Klein, and other AI doomers, but we can observe the actual world and draw our own much more sanguine conclusions.

A second positive human trait the AI doomers share is a protective impulse. I am old enough to have grown up when everyone (except, it seems, me) was afraid of nuclear war. I remember hushed conversations on the playground, when a boy (it was always a boy--and always a very kind boy) would tell us that Minnesota would be the first place the Russians would bomb, because Honeywell was headquartered in Minnesota. These well-intentioned boys would share ideas about what we could do to protect ourselves from the coming catastrophe.

AI doomers remind me so much of these kind, well-intentioned, misguided boys on the playground. They pick up what they think is hidden knowledge (e.g. Minnesota’s alleged vulnerability), they worry about it, and they share their worries with the rest of us blithely oblivious people, not just because they find our world too dull, but because their imaginations are running amok, and they want to quell their anxieties by sharing them.

