156 Comments

1. Scott and ACX readers seem to have a beef with you.

2. I suspect that we are living in revolutionary times, not merely because of impending technological advances, but because the likelihood that the tools we already have will be abused approaches certainty.

What we have now would make a Himmler, a Goebbels, a Vyshinskii weep hot salty pony tears of joy and envy. Already, people of influence and authority are licking their chops.

author

Millenarianism tends to appeal to a certain kind of person, a kind that has a lot of overlap with the rationalist community.


"Rationalizationist", is that a word?

'Cause it describes some there.


Rationalism is a specific philosophy promulgated mostly by '90s Silicon Valley libertarians. It shares a lot of overlap with medieval beliefs in demons and the influence of the numinous in all things, but for them god is a robot they'll eventually build. They're occasionally right about stuff, but mostly they're extremely funny and more likely than most people to end up in a mystery cult. I love them.


Why do you love them (serious question)? I rather hate them. I think the god they worship is not a robot but themselves (the Creator of the robot). I would find them an interesting curiosity, like other cults, except that they control vast wealth and thus vast power.


Oh no, they abase themselves before the machine. They're like Warhammer 40K tech-priests, but instead of being science fiction characters they're mostly lapsed Orthodox Jews and Catholics looking for meaning in a world where they've already rejected the premises that gave them purpose (but not the architecture of those beliefs). They're an incredible demonstration of human beings' attraction to demon worship and mysticism unless we are inoculated against it through some less harmful structure. They're great.


Why do you love priests of a harmful mysticism, whether they are self-abasing (as you think) or self-aggrandizing (as I think)?

My frame is not sci-fi but the history of the rise of a Euro-Christian cosmology (religious and secularized versions) that has a legacy we might not survive. At a moment when that cosmology is losing its claim to be "reality," and is having to confront itself as a provincial view, these priests are desperate to reinvent it. It provides their claim to partake of the one truth (either as priests or as gods).


I don't love them or hate them, but their "rationality" seems to be a cover for supporting what they want to support.

Like how Ayn Rand supposedly claimed smoking tobacco was somehow "rational". Of course, being Ayn Rand, she claimed to be doing this for the most high-minded and principled reasons, not because she smoked and liked smoking.


Nah, they've got very specific beliefs about the nature of the world that are internally consistent and also bananas. You're hung up on the definition of the word rational and it's misleading you about their nature.


I am not hung up on the word "rational". They are.


Rationalizationist ... a self-hypnotizing word for "I Am Very Smart." It is a Scientology-adjacent cult of big-word users.


On the contrary, Scott has repeatedly praised Freddie, and he's quite popular in the rationalist community!


Ok, maybe I misinterpreted. The comments seemed bitchy, but maybe I am not getting the bigger picture, here.


Scott only writes responses like this to people he respects. It is Freddie who has beef and often acts bitchy in the ACX comments.


Someone once described Scott as a child in a cardboard rocketship who is ignorant of the fact that the rocketship isn't real, and that seems spot on to me. He actually believes we're moving towards a wonderful future and things are better than they have ever been. It's adorable in a sad way.


There is also the other side of the argument where people are incredibly uncomfortable with all the magic and wonder that will occur after they are gone. It’s more comforting to think the future will resemble the now than to think of all that will be that one won’t be around to experience.


That's true of every human being since the Dark Ages and it applies equally well to everyone born in the future.


No, life was pretty much the same for the average peasant for hundreds of years during the Dark Ages. They had no expectation that the future would be any different.


"Since" the Dark Ages. And apparently it's a myth that technological progress stagnated during the Medevil period as well.


Yeah, a "myth" promoted by some desperate history PhD trying to get tenure by writing a paper: "The Dark Ages, Actually…"


Did you even read the article?


I mean, there certainly were innovations, but they didn't affect the average person THAT much. More "slight improvements to existing life" innovations than the "change the way you live" innovations that we have now (plumbing/electricity/medicine/telecommunications/etc.).

They would think their children were destined to grow up in much the same world in a way we don't anymore.


What was true in 1910 was still true in 1950 in rural SW Missouri: we “did not have indoor plumbing, meaning [we] used an outhouse, got water from a well, could not routinely bathe or wash [our] hands, and was subject to all manner of illness for these reasons, to say nothing of the unpleasant nature of lacking these amenities.” But we did not have “all manners of illnesses”—perhaps because we were clean people even if only a Saturday night bath, since cleanliness was next to godliness in our families. We also had plenty of healthy homegrown foods. And we did not have any idea whatsoever that our life was “unpleasant . . . lacking all these amenities.” It was our mundane world, everybody we knew lived just the same, so it was just life. And it was good.


Yeah, my great-grandparents were around in the fifties; they didn't get indoor plumbing till the '70s or so in rural TN.


Yeah. My own mother spent her early childhood in a two-bedroom cabin with no plumbing or electricity, out in the open plains of Wyoming. And I can't recall her ever complaining about it, or anyone getting sick and dying. In fact, she often speaks fondly of those simpler times. If anything, compared to others in their 80s, she's a lot healthier for it.

founding

The material conditions you describe were the same in 1950s rural SW MN for my father, and he felt that life was decidedly not good. He was poor, knew it, and hated it. There was also no comfort in knowing that other farming families were also poor when he could see how the well-off lived.

Actually depending on the output of the home garden to avoid starvation, which he had to do, tends to tarnish the romanticization of "healthy homegrown foods," too.

The fact that people can choose to live as you describe now, and largely do not, ought to indicate how un-good things actually were.


I am sorry your father had a different experience than I did. Things are certainly easier now . . . but not sure better. In my case, the home garden was a family project and the food delicious and plenty of it, supplemented by the pigs and chickens we raised, fish we caught, rabbits and squirrels we trapped and shot. And everybody in our little rural community was the same degree of poor within a narrow range. We were bound by common ties of family, school, and church. I resonate with the t-shirt I have seen: "I may be old . . . but I got to see America before it went to sh*t."


Freddie - I feel like you are having a philosophical debate underpinned by data, and the interlocutors want to have a debate about the data (with a lot of ‘line go up and right’). Do you feel like this is a fair characterization?

author

To an extent? But again, I don't think the idea that there has been a dramatic slowdown in technological growth in the past half-century is really a philosophical idea, nor do I think it's empirically disputable. That economic productivity has slowed is simply undeniably true; measuring technological growth is a little fuzzier, but not that much.


It's incredible that the exponential graph you show failed to mention sanitation, the germ theory of disease, and antibiotics, but has Windows and Mac as two separate milestones.


The exponential "graph" is nothing of the sort.

founding

Also, 3d movies? We did have those back in the 1950s, if I'm recalling correctly.


I was wondering what they meant by that, like if they meant VR or something

founding

I'm sure the person who put the chart together has the same amount of technical depth as your typical Verge or Engadget reviewer. Knowers and users of consumer electronics but wouldn't know how to build a one-bit adder.
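(For anyone who doesn't know the reference: a one-bit full adder is about the simplest piece of digital logic there is. A minimal sketch in Python, purely illustrative and not anything from the comment itself:)

```python
# One-bit full adder: adds two bits plus a carry-in, returns (sum, carry-out).
def full_adder(a: int, b: int, carry_in: int = 0):
    s = a ^ b ^ carry_in                         # sum bit: XOR of all three inputs
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry-out bit
    return s, carry_out

# Exhaustive check: the two output bits must equal ordinary addition.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert 2 * cout + s == a + b + c
```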


It seems to me that the Techbros and their acolytes are prone to hubris, a very common human failing.


Are techbros really a thing? I've always considered the term to be a transparent and clunky attempt to associate people in the tech industry with vulgar frat boys, not an accurate description of the kind of people who work in it.


Yes. What you mention is an issue, though, because the people who came up with the term didn't care about distinguishing the self-promoting narcissistic bullshit artists from the socially unsavvy nerds with a propensity towards autistic traits, even though the core tendencies are polar opposites.


On the contrary, in most cases the former tendency is rather transparently a compensation for the latter. That combo is exactly what people mean by "techbro."


Real nerds are focused on their interior worlds, not on the external social sphere.


I'd say it depends on how you parse "people in the tech industry." It seems obvious to me there is a set of men in tech who obsess together about: how to overcome (their) death; how to colonize space; how to sweep away the contemptible fantasies about equality common among women and other weak-minded humans so that they can establish the true aristocracy that will save the world, i.e. unhampered rule by the smartest men who understand computation.

In other words: not frat boys but would-be demi-gods.


Yes, this is what I was getting at. Thanks for stating it much better than I did!


That's why I don't understand the "techbro" term. To me the "bro" suffix evokes a jerk, not a megalomaniac.

That being said, of the three traits you listed, only the last one is really morally problematic. Not wanting to die is understandable, and it's virtuous if it leads to the development of anti-aging techniques that can be used for everyone, not just you. I don't know if space colonization should be an urgent goal at this point in time, but it is something that would probably benefit humanity if achieved.

I think that demonstrates my major issue with how the tech industry is treated in parts of the culture. "Techbros" are bad because they are ambitious and want to make a huge impact on the world, but it seems like wanting to make a huge positive impact is viewed nearly as poorly as wanting to make a negative one. That strikes me as Tall Poppy Syndrome.


For me the titans of tech, whatever the nickname, are morally problematic because I think they have stunted moral imaginations. (They would no doubt say the same of me––that I am jealous of their greatness and therefore spread liberal delusions and hold back what the most gifted people could achieve.)

For space colonization to be something that "saves" humanity, for instance, you have to have a future that is truly horrifying. It would only save humanity if the planet were fast becoming uninhabitable. Setting up even a whole series of space outposts to keep the species going would mean only a tiny portion of people (1% would be amazing) would escape the global collapse. So that prospect can only be an exciting, heroic achievement if you identify with the small band of people who managed to live elsewhere––not the near total elimination of what human existence has been heretofore.

I see the effort to defeat death as a childish refusal to accept what it means to be a human. It's not very rational to see it as a pursuit of a gift for all humans rather than a project that quells their own panic, since the ramifications would also entail something totally unlike what human life on earth has always been.


I actually think it's a mistake to frame the case for space colonization purely in terms of extinction risk management. I think it's good for there to be communities of humans on other planets in the future for the same reason that it's good for there to be communities of humans on other continents in the present: more communities of human beings living lives worth living is a good thing, all else being equal. If having one Earth is good, having more is even better. There's no reason we can't get more while also preserving the original.

That being said, 100% of the human race being annihilated is clearly much worse than 99%, not just marginally worse. That isn't because I am identifying with the small band; it's because of future generations. If 100% of humans died, that isn't just the end of them, it's also the end of all humans there would ever be. If 99% die, then the 1% can rebuild the population. It already happened once: a supervolcanic eruption nearly wiped out humanity about 74,000 years ago. Isn't it a good thing a few survived so we could be here today?

The argument that death is part of being human reminds me of arguments the Victorians made that anesthesia was bad because pain is part of being human. As someone who has had surgery, I'm glad people didn't listen to them. I'm sure in the Middle Ages some people thought infant mortality was part of being human and that outliving half your kids was an essential part of being a parent.

Even if death is part of being human, there's no good reason to think our current lifespan is optimal. Maybe death at 250 or 500 is even more human than death at 75.

I don't think it's childish to want to find ways to overcome your limitations. Accepting things if there's no way to change them can be a sign of maturity, but accepting things that might be in your power to change is dumb.


Thanks for the thoughts. I understand your point about the potential value of space colonies apart from extinction. But from what I have seen, the rich men who are most committed to that project are claiming––explicitly and implicitly––that this is valuable to humanity as a mode of rescue from extinction. It is their excitement and heroic satisfaction that I find a symptom of their warped moral imagination. Who could greet that future with a sense of excitement and achievement? It would mean that our species, created by the matrix of the earth, had failed utterly when it could have been otherwise. Wouldn't the success be vastly overshadowed by the failure? To think of it as a success relies on the perspective of a sci fi movie goer who never actually faces any existential horror after the end of the movie.

I'd also question whether the simple matter of quantity––more life is better than less life––is relevant for space colonization. Because the billions of dollars it would take to build and sustain a colony for, say, 100 lives could save lives on earth in numbers that would amount to a different order of magnitude.

Because I'm a cultural historian, the fact of human mortality doesn't seem at all comparable to a blinkered Victorian ideology about pain. There are millennia of philosophical meditations about what it means to be a human, and virtually all of them teach that humans are blessed/burdened with creating meaningful lives because they are the animals that know they will die. Grasping that fact as a given as one works out philosophical thought is a different matter than reflecting on longevity or health, etc.


I should have been more precise in my comment. When I referred to Techbros, I meant the titans of tech, the founders and owners, all the newly minted billionaires. And if anything will lead a man to hubris, it's becoming a billionaire.


It was an anti-male slur invented around 2015 by people who needed the underrepresentation of women in tech to be due to misogyny, and not the fact that high-achieving women just prefer to study medicine and law.


I can assure you techbros exist. I was living in San Francisco, and around 2007 or so there was a highly visible explosion of them; they were everywhere. Young men, highly paid, at a time of huge, rapid expansion in the tech industry (iPhone, Twitter, FB, Insta, etc.). They were legion, had a lot of money and a certain swagger (revenge of the nerds). You're waiting for the MUNI bus and its space is taken by the luxury Google bus shuttling the bros down to San Jose or wherever. The pool table at your Castro gay dive bar repeatedly taken over by bunches of these non-gay guys laughing and having a blast. The prices of almost everything suddenly soaring. Techbros emerged as a new affluent upper class in SF quite quickly.


I'm sorry, in what sense do you think Scott Alexander is a "techbro"? You wanna level that at Elon Musk or a few other billionaires in tech, fine, whatever. I think it's lazy and dismissive, but most labels are. I don't really see how Scott Alexander falls into that bucket.


Was referring to Musk and his ilk, not Scott Alexander.


Maybe the problem here is mine, but this was an exchange between FdB and Scott Alexander. You made a comment without attribution and I assumed you must be referring to one of the conversants. It obviously was not FdB, so that left Scott.


What's really weird is to take a comment about a third party as some sort of an affront to you. These writers are not cult leaders and don't need their followers to protect them.


I don't really agree with Alexander here. My reaction was not "protecting" Alexander; it was a reaction to a label that seemed grossly misapplied. Turns out, it wasn't being applied to Alexander.

Yes, I don't particularly like the "techbro" label. I see it used the same way I see "liberal snowflake" and various other politically-coded labels that are intended to denigrate and dismiss the people they seek to describe.


I think that, going forward, AI will largely be operating in the background rather than taking any form of superintelligence or consciousness. The AI doomers have a hard time accepting that this technology is being developed by corporations, who are primarily going to be interested in whether it’s profitable, not whether it’s conscious or lifelike. So the more novel aspects will fade over time and it will be integrated into the tech we use. People will still largely want to interact with other people, not machines.


I think AI doomers are more afraid of an AI that isn't designed to be conscious and lifelike than one that is. A human-like AI might have human-like emotions and values you can appeal to. A corporate efficiency AI would not.

Remember, one of the most famous thought experiments in AI doomer literature is the "paperclip maximizer": an AI that an office supply company programmed to make as many paperclips as it could, which proceeds to convert the Earth and everyone on it into paperclips.
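To make the intuition concrete, here is a deliberately silly toy sketch of that kind of single-minded optimization; the resource names and numbers are invented for illustration and come from nowhere in the doomer literature:

```python
# Toy "maximizer": its objective counts paperclips and nothing else, so it
# happily consumes resources that were being used for anything else.
world = {"iron in bridges": 80, "iron in hospitals": 50, "spare iron": 20}

def maximize_paperclips(resources: dict) -> int:
    paperclips = 0
    for source in list(resources):
        paperclips += resources[source]   # nothing in the objective says "leave this alone"
        resources[source] = 0
    return paperclips

print(maximize_paperclips(world))  # 150 paperclips; every other use of iron is gone
```

Nothing in the stated objective penalizes the side effects, which is exactly the gap the thought experiment is pointing at.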


Right, but this is why thought experiments often have limited real world value. For this thought experiment to provide insight into the real world, you have to explain how an office supply company acquires and invests the resources necessary to create a superintelligent AI when those expenditures would probably bankrupt it before it ever got close to accomplishing anything. You’d also have to explain how the AI “escapes” and keeps acquiring resources when it’s no longer controlled by the office supply company. You also have to explain why the military would not destroy the AI before it got too far towards its goal, etc.

The response to these questions seems to be something like “I’ve assumed the AI is all-powerful.” That’s fine, but then you’re doing something closer to writing science fiction rather than forecasting AI’s impact on the real world.

It reminds me of the crypto boom, when companies like Celsius were boldly declaring that crypto had transcended the normal constraints of traditional finance, that banks were evil, etc. If you joined their revolution, this technology would change your life and make you rich. In fact, they were just running a mundane Ponzi scheme.

This is what Freddie is talking about when he says that you have to accept that you live in a mundane world. The mundane world requires you to do mundane things most of the time, especially once you're an adult. AI is not going to save you from this. The joy in life comes from finding meaning in the mundane, from connecting with other real human beings, from doing the things that human beings have done for centuries to create meaning. That can be beautiful, if you accept it.


Escaped AI doesn't have to be all powerful, it simply has to have agency, the compute power of a data center, and access to network connections. With those ingredients, it's a threat that humans have very little idea how to counter, and there's no way to know what it could accomplish. Could it hack power stations? Could it smuggle a Trojan copy onto a phone that then connects to sensitive industrial or military networks? Could it use text to convince humans that they are being ordered or paid to do certain meat space actions? Could it access bank networks and use millions of dollars to pay humans to act certain ways? Could it use social media to create mass action of some kind?

These are all mundane actions which are currently not really a threat because there is no single intelligence that could do enough things at once to be a threat, but once you have enough compute power it becomes feasible.


Yay, Substack has revived what used to be my favorite part of the New York Review of Books, the letters to the editor column where intellectuals would battle back and forth (e.g. Gould vs Dawkins).


Agreed; I read this exchange with interest. I think SA rightfully criticizes a bunch of the points FdB made about number of years humans are alive, etc., but ultimately I'm not persuaded that FdB is wrong about the idea that our innovations are slowing. I'm not entirely persuaded he's right about the future, but that's the great thing about the future: we don't know what is going to happen.

founding

The pettiness displayed by eminent individuals during such exchanges was also incredibly humanizing. It certainly kept me from elevating them too highly.


The chart jumped from telegraph to lightbulb to car. It missed Marconi: understanding the electromagnetic spectrum and how to use it.

Ham radio will work with batteries or solar.


I think it's really interesting that on the chart you posted, 3D movies (a gimmick, already forgotten; the only two good 3D films ever were Dredd 3D and Avatar) are given the same sort of significance as the invention of the steam engine. I know you basically make this point, but your example (the iPhone 14) really is an advance rather than just a dead end; it will lead along some kind of iterative path to superior technology, and there's no reason to be convinced that it isn't significant (just as the steam engine's advancement from a curiosity to the force that drove the industrial revolution was significant). 3D movies are an obvious dead end.


Thank you, Freddie. Whenever I read the doom-laden prognostications about paperclip maximizers and fantasies that we’re living in a simulation, I channel my inner Samuel Johnson, who, when asked how he could refute Bishop Berkeley’s theory that we are all being deceived by an evil demon, kicked a rock. “I refute it thus!”

Or I think of the underpants gnomes—step one is to steal everyone’s underwear, step three is to make a huge profit, and they haven’t quite worked out step two. AI doomers’ step one is our current driverless cars that are almost totally incapable of functioning on open roads, LLMs that produce deadened prose that anyone can instantly identify as having been created by a machine, and similarly underwhelming examples of computer intelligence. Their step three is the extinction of humanity. How?! Like the underpants gnomes, they are missing one heck of a step two.

And then there’s Occam’s Razor. When we’re confronted with the choice between a simple, obvious explanation and a baroque, fantastical one, simplicity is likely to be the truth. The obvious and simple idea is that AI will become a tool that will be useful in some situations and that will lead to the destruction of some kinds of jobs (and the creation of others). Exactly like every other technological development ever. We have no evidence that it is capable of doing anything else, and we ought to be suspicious of claims that while we haven’t seen any evidence of AI malfeasance, it’s coming any day now, and then watch out.


I'm somewhere in the middle on this. While I'm significantly more concerned about humans misusing the combined power of AI and robotics to do terrible things (versus AI turning on us on its own Skynet style), I think the former is scary enough that we're entering a new era. And I don't completely rule out the latter as a possibility.

Meanwhile, I think it's interesting that you cite "our current driverless cars that are almost totally incapable of functioning on open roads" as an example of the technology underwhelming. Why do you believe driverless cars are incapable of functioning on open roads? They already are functioning on them (https://waymo.com/faq/). While it's true that, for a combination of legal and commercial reasons, you can't hail a driverless car in most places, the blocker really isn't the tech. If for some reason humanity became incapable of driving, self-driving cars could quickly be scaled up everywhere.


I could be wrong, but I have read that driverless cars only function well in certain closed neighborhoods in, for example, San Francisco, and that even then they make boneheaded errors that human drivers would never make. It’s not like they’re ready for the New Jersey Turnpike yet, and still less for normal traffic in, say, Boston or Chicago.


They can drive anywhere in San Francisco. And you see them on the freeways as well (although they can't yet legally take passengers on the freeway).

I think that in conditions without snow and ice (where apparently they still have issues), they could handle anywhere today (including traffic in Boston and Chicago) once they have done the necessary pre-mapping in those cities (which is why I say the barriers are legal and commercial). Matt Yglesias has a good post about where things stand and where things are going (or could go) that you might find interesting:

https://www.slowboring.com/p/self-driving-cares-are-underhyped

Now, you might reply that the fact that Waymo's approach requires pre-mapping (which human drivers obviously don't need in the same way) illustrates the limits of AI, which was the real point of your comment. And that's not unfair. It's certainly true that AI solves problems in different ways than human beings do, in ways that we wouldn't necessarily describe as "intelligence."

At the same time, once AI can perform tasks better than human beings can, I'm not sure it matters whether it is demonstrating "intelligence" or not. AI is already better than human beings in domains like games. Arguably, it is already better than humans at driving as well (https://www.understandingai.org/p/driverless-cars-may-already-be-safer), which is one of the domains you named to illustrate its shortcomings. Which is a long way of saying that I would be leery of being overconfident about the boundaries of what changes it may quickly lead us to.


Sounds like my information about the cars is out of date. Thanks for the links!


<pedantic>

"I channel my inner Samuel Johnson, who, when asked how he could refute Bishop Berkeley’s theory that we are all being deceived by an evil demon, kicked a rock. “I refute it thus!”"

...which is, of course, not a refutation at all; if we are caught in a simulation, the act of kicking the rock is part of the simulation. The correct "refutation" is to say that if we are in a simulation or not is irrelevant. "We are all really in a simulation" is an unfalsifiable claim. Any method of proving that false is simply part of the simulation too. So it's an unknowable question. Unknowable questions can be fun to consider sometimes, but as a practical matter, they are always, always irrelevant.

</pedantic>

I think the kernel of their concern is pretty clear, and not very hard to understand, and not at all crazy. However, as far as I can see, the entirety of their argument is as you describe: gnome underpants. They believe AIs could, in ways that humans really never would, take a set of optimizations that lead to insane outcomes. That does seem quite possible, even observable. However, they also suggest that it could happen so fast that we'd never have time to control it or influence it in a way that made it compatible with our continued existence. Yeah, that's *possible*, but I don't see that it's particularly likely. (I hasten to add that the people who discuss these things are typically far smarter than I am.)


Oh, one reason I love Johnson’s stone-kicking refutation so much is for the very reason that it isn’t a logical refutation. It’s more like, “Come on, man! Don’t be ridiculous!” Which is what most of us normies think when we hear these ideas.

I like your point that the best response is that whether we are in a simulation or not, it’s really irrelevant to how we all spend our time, so why even worry about it?


Yeah, that's why I put it in pedantic tags. Something in me compels me to post the "correct" refutation, but it's something I recognize is irrelevant to most people. I nearly removed it from my response entirely, and probably should have. The idea that we might all be living in a simulation being taken at all seriously should have died with Descartes.


“it’s still inarguable that meaningful technological growth has dramatically slowed in the last 50 years compared to the 100 prior years”

Inarguable - you keep using that word; I do not think it means what you think it means.

author

fine, indisputable dick


I mean, Scott literally disputes it. You two don’t agree on basic facts here, so stating them as “inarguable” or “indisputable” is not especially convincing.

I get that many ppl take pot shots at you, but some of us actually respect you even when we disagree and are just trying to lighten the mood.

author

Can it possibly be the case that you are unaware that people routinely call things indisputable even though some dispute them? Have you survived this long without ever experiencing that?


I am aware that people use “indisputable” as a rhetorical dodge, yes. Do you find it convincing when people argue in this way? I certainly don’t.


I think the initial argument for large rapid changes brought on by AI makes some sense. If we are suddenly able to build tons and tons of qualified scientists and engineers instead of waiting for them to grow up and graduate college, we can probably get a lot more technological advancement done than we are now. This is especially true if the AI in question isn't just as good as a human engineer, but better. We will simply be able to direct far more effort at the problem than ever before.

The doomer scenario is basically that an AI, or network of AIs might become so smart that it figures out how to take over the world before we notice and stop it. That seems like a remote possibility, but not totally impossible. I'm glad somebody is worrying about it.

Where I largely agree with Freddie is that LLMs are probably not a sign that such a scenario is imminent. They aren't really smart in the way we would need those super-engineer AIs to be; they are at best a small stepping stone.

I think Freddie is also overstating his case when he goes from "we won't see wild sci-fi scenarios in our lifetime" to "those scenarios will never happen." Progress is slowing down, but as long as it keeps going at all, we'll probably achieve some pretty cool stuff in the next few millennia.

For example: I don't think that ideas about "uploading" are any more dualist than the concept of hardware and software is. You can take software off one computer and run it on another. If they have very different hardware and OS it might be hard, but you can do it. I don't know how practical it is, but it doesn't seem impossible that we might someday know how to move the information in our brains into different hardware.
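As a loose illustration of that hardware/software analogy (only an illustration, not a claim about how minds actually work), program state really can be written out in a hardware-independent form on one machine and restored on another:

```python
import json

# "Machine A": serialize the program's state to a portable format.
state = {"counter": 42, "history": ["step one", "step two"]}
with open("state.json", "w") as f:
    json.dump(state, f)

# "Machine B" (different CPU, OS, whatever): load the state and keep going.
with open("state.json") as f:
    restored = json.load(f)

assert restored == state
```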

Another example: we have 5 billion years before the sun expands. Even if progress stagnated even more than it has now, that's a lot of time. We can figure out some way to leave the system by then. Maybe instead of terraforming exoplanets we'll genetically engineer ourselves to live on them. Give us some credit.


Or maybe humanity will simply go extinct in a few years. The future is tough to predict.


Maybe, but barring that, or some other disaster like civilization collapsing to the Stone Age, it seems likely that we will achieve some astounding things in the next billion years.

That's why I think Freddie is overstating his case. It's defensible to say that we won't see much progress in our lifetimes, and that progress will be slow enough that humans will get used to it and not think it marvelous. But I don't think it's defensible to say that the world of the year 10,000, or the year One Billion, won't be astonishing to the people of today, or that we won't find some way to overcome a lot of seemingly insurmountable problems.


Assuming the species doesn't go extinct. How many species make it past a million years? Or civilization collapses, or periodically collapses every few thousand years.

founding

"If we are suddenly able to build tons and tons of qualified scientists and engineers"

AI is not remotely capable of doing this. It will likely replace poorly-educated engineers/scientists performing more-or-less rote work, which is why we* teach students to use AI as a tool, not an end in itself.

*engineering professor.


I agree that the current technology people are calling AI, LLMs using machine learning, are definitely not capable of doing this. People who think that it will be able to are definitely jumping the gun.

However, just because people are jumping the gun now doesn't mean that the gun won't ever fire. I don't think an AI capable of original and creative engineering work is possible in the near future, and I don't think we can get one merely by making incremental improvements to LLMs. However, it seems far from impossible that we might eventually develop new techniques that allow us to make one.


My read of Scott Alexander's post: it seems to be more about negative things happening such as ecological disasters, while Freddie's argument is about the stagnation of positive technological advancement. Also, as anyone familiar with the history of World War I knows, wars stress fragile empires and lead to revolutions. The wars happening now, or that could occur in the Pacific: what revolutions might they trigger?


I don't know why the word "mundane" gives me the wrong feeling. I agree with the idea that no technology will make life better for anyone in the sense of how we perceive it. As Freddie shows, everything becomes a commodity anyway. We are relative creatures and amazingly fast at reaching new equilibriums. But life for anyone can be something more than mundane, at least as I perceive that word. And as far as the growth argument goes, on the one hand, who cares? If you don't believe tech matters in our perception of our own lives, then why care about tech's growth rates? On the other hand, we are predicting the future, and in our case, unlike for someone hundreds of years ago, predicting that future is awfully hard to do. What my grandkids' work will entail is a complete mystery to me. But that being true does not make the time I live in any more exciting/scary/non-mundane than it was for the person from hundreds of years ago.


Reading your back and forth with Scott, the one issue I keep coming across in my mind is Black Swans.

I'm not saying Freddie is wrong, but the impression I'm getting from his argument is "things are slowing down and will continue to slow down," which very well can be true! But I also think that leaves us potentially blind to world-altering possibilities that can and have happened. Even if it's a 1-2% chance, the consequences are so huge that it's worth having active conversations around the potentiality.

So I do find it useful to have people like Scott speculate on and discuss how possible reality changing occurrences are because it’s a historical fact that they do occur and often catch us totally unaware.

If people want to act weird and throw their lives away hoping for tech Jesus I blame Scott as much as I blame Jodie Foster for Reagan getting shot.


"...the transition from the original iPhone to the iPhone 14 (fifteen years apart) is not anything like the transition from Sputnik to Apollo 17 (fifteen years apart)..."

Well... this is almost entirely due to Apple's insane planned 'obsolescence,' which isn't really obsolescence at all but mostly just them making each software update harder and harder to mesh with the older hardware. I mean, cell phones should last a decade, not a year. It's not that they can't make the software updates apply to 'older' hardware; it's intentional.

But I get what you're saying.

founding

Cell phones can and do last a decade. They just happen to do so without the software updates you seem to believe are so trivial. Consider simply the changes to base processors, memory, and GPUs in older versus newer iPhones to understand why it is very difficult to write and maintain software across generations of devices, let alone add new features.

Browsing the internet, or watching a video, on a decade-old phone is a decidedly worse-off experience than using one produced in the last few years.


Ah, that makes sense. I guess I always thought of watching a video or browsing the internet on a phone as trivial in itself. I still can't understand how that tiny screen isn't annoying enough that people only use it out of necessity. I still use phones just to make calls and send texts; showing my age, I guess.

Is that really the primary reason Apple comes out with new phones every year though? To make social media and movies better and better on your phone? Reminds me of the apt maxim: "We wanted flying cars, instead we got 140 characters." I think I just expected tech to produce more wondrous things, instead of quadrupling down on tiny mobile streaming devices. :-/
