No, I Mean It - AI Maximalists in the Media Should Really, Actually Take the Shitting-in-the-Yard Challenge
think of the #content potential guys
Kevin Roose and Casey Newton of the Hard Fork podcast have a new dialogue up for the New York Times Magazine, apparently as part of an AI issue. (I’ve not seen the rest of that issue, but I can tell you I’m not optimistic.) I think the conversation is a really good example of how AI maximalists go wrong, and in particular how common it is for AI maximalists to think of themselves as something other than maximalists. It’s particularly telling, to me, because Roose is someone who has as much influence as anyone in the entire elite media sphere when it comes to covering AI, and yet you can clearly tell that he sees himself as just a guy with a microphone; that is to say, he and Newton look out at a world where they “take a look at my BlueSky mentions and [are] reminded of the many, many skeptics out there and how differently they see the world” but seem unable to grasp that they have more influence than any ten thousand of those BlueSky accounts. You guys are aligned with a dominant public perception that AI is going to utterly transform human life, and soon!
I find their back-and-forth frustrating for…
Failure to consider historical analogs. There are many examples of technology-driven hype cycles that have failed to live up to that hype. They mention crypto, but only in passing as a way to deny the salience of the comparison. But, yes, crypto. I would also nominate the Human Genome Project and its assumed impact on healthcare as a better analog. When I was in high school, the fact that the human genome was in the process of being sequenced was treated as a matter of earth-shaking consequence, particularly in medicine. It was taken as an article of faith that how we treat disease and disorder was going to be utterly rewritten by having the ability to sequence human genomes. Articles by the dozen appeared that operated as though this simply would happen, without even considering the possibility that the returns from sequencing the genome could prove to be disappointing. Medical students were advised to carefully plan their educations for a future in which everyone had the benefit of medicines tailored to their individual genetic reality. None of that happened. The world proved to be more complicated and unpredictable than thought. Any intelligent approach to the LLM era has to integrate the possibility of history repeating itself in this way.
Conflating AI doomerism with AI skepticism. The conversation utterly fails to distinguish between two very different things, AI pessimism/doomerism (worry and fear over the consequences of LLMs) and AI skepticism (a belief that the consequences of LLMs will be significantly less important or severe than popularly believed). AI doomers are among the most extreme maximalists out there, in that they insist that human life will forever change in fundamental ways thanks to AI; I will repeat my observation that AI doomers and AI utopians are both operating under the same emotional impulses. Like AI utopianism, AI doomerism is massively represented in popular discourse and media, while genuine AI skepticism has been almost entirely written out of mainstream media. And this strikes me as particularly weird given that there’s so much confidence in this domain. If you guys are so sure that LLMs really are going to lead us to a Star Trek future, why aren’t you finding skeptics out there in academia or wherever and interviewing them? If the facts are on your side, there’s no reason not to platform that perspective.
Being vague about the actual dimensions of skepticism. Newton and Roose dismiss skepticism but aren’t particularly specific about what they’re dismissing. For example, there’s a reference to “people who don’t believe that these things are doing much more than just predicting the next word in a sequence,” but this needs to be better defined to be useful. An LLM “learns patterns and rules from massive datasets of text, enabling it to perform various tasks like answering questions, summarizing text, translating languages, and writing content,” according to Gemini. (AI-generated explanation for cheap irony purposes.) They utilize transformers, architectures built on self-attention mechanisms that analyze the relationships between words in a sequence, to generate rules based on those massive datasets. But what does that mean? It means that they’re generating complex statistical and algorithmic relationships between inputs and outputs; given a particular input, what is the output most likely to satisfy that input, based on the training set? This can be a perfectly useful method for a lot of tasks, which is why I’m not a total rejectionist about these technologies. This is not the way a human brain thinks - humans learn relationships and make decisions based on vastly less information; human brains can think deductively as well as inductively; a human mind contains a consciousness that acts as a self-observing organizational system through which thinking understands itself - but of course the AI people always say that AI doesn’t need to think the way a human thinks. Fair enough, but certainly there are very deep questions about the practical ability of these technologies given their lack of provable self-recursive meta-cognition such as that found in the human neurological system. Isn’t that worth discussing? I guess not!
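For the curious, the "predicting the next word" idea can be made concrete with a toy sketch - mine, not anything from Roose and Newton's piece - in the form of a bigram model: count, for each word in a tiny corpus, which word most often follows it, then emit that word as the "prediction." Real LLMs replace the counting with transformers trained on massive datasets, but the basic contract is the same: input in, statistically likeliest continuation out.

```python
from collections import Counter, defaultdict

# Toy bigram "next word" predictor (an illustration, not how transformers
# actually work): learn which word most often follows each word in a corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word is followed by each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("on"))  # "the" - the only word that ever follows "on" here
```

The dispute in the piece is over whether the scaled-up version of this - attention over enormous contexts instead of a lookup table - amounts to something you could call thinking.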
Assuming that automation is a one-way affair. Like many, Newton and Roose describe a world where AI leads to massive job losses as many positions are automated away. One might first point out that for centuries there have been many, many predictions that technology would imminently lead to massive job losses, predictions that did not come true, although it’s also true that specific groups of workers have been devastated by technological growth. That could certainly happen with LLMs, although we have to wait and see. Even setting that aside, though, there’s the question of whether automation is forever. Certainly the general trend over time tends to be in the direction of more automation, but in fact if you spend a little time researching you can find examples of de-automation, whether you think they’re good or bad for the world, the consumer, the worker…. For the record de-automation doesn’t happen because corporations are operating out of humane impulses but because the world is complicated and sometimes the advantages of human workers reassert themselves in purely practical terms. It would have been powerfully difficult to make the case that there would be a revival in paid hand washing of cars after the rise of cheap automated car washes, and yet that is a thing that happened. Because the world doesn’t always make sense.
Refusing to acknowledge that revolutions that can happen and maybe should happen often don’t happen. I made the comparison, in a piece for Persuasion, to the nuclear energy revolution that wasn’t. In the middle of the 20th century it would have been utterly rational to assume that we would largely be using nuclear energy to power our society today. That didn’t happen. Because… the world doesn’t always make sense.
Failure to accurately reflect the amount and direction of AI hype and the consequences of the bias. I am just bamboozled by anyone who looks at the state of the discourse and says “You know, there just isn’t enough coverage of AI that considers the most outsized potential consequences!” Go on Instagram and count the ads that reference AI. Consider the number of courses on popular online class platforms about an imminent AI-dominated future. Read The New York Times! I don’t think there’s a conspiracy of silence when it comes to skepticism, but I do think that there are social and professional dynamics which have combined to make talking about AI skepticism seem low-status in professional media. Look at what the big media podcast bros do and don’t talk about, in this domain. AI is a regular topic on Ezra Klein’s podcast, but he doesn’t have AI skeptics on. AI is a regular topic on Derek Thompson’s podcast, but he doesn’t have AI skeptics on. Ross Douthat’s first AI-themed episode was with one of the most deranged maximalists you can imagine, and nobody bats an eye. AI is a regular topic on a lot of podcasts that don’t have AI skeptics on. I don’t think that’s healthy! Part of what I find frustrating about this is that Roose and Newton are actually more likely than most to really grapple with committed AI skepticism, but still seem to act as though the burden of proof lies on those who think AI won’t totally transform the world. Again, I think that people in media and academia in particular are very afraid of looking foolish, and in this area it’s always safer to assert revolution rather than stagnation - in part because you can always just kick the can down the road and say “Wait until next year!”
Admitting that their bar is human-like performance, not extra-human performance. Roose says “The base rate that we should be comparing with is not complete factuality but the comparable smart human given the same task.” This is similar to definitions of artificial general intelligence (AGI) that say that we will know we’ve achieved it when an AI can do more or less as well as an average human being at most tasks. Which is very strange. We already have systems that can perform as well as average human beings at most tasks; they’re called… human beings. Again, this is a set of technologies that people are arguing will imminently send human beings to distant galaxies, end hunger, create a post-scarcity economy, even make death obsolete. I’m not making that stuff up! Google around! But how then can success be achieved when AI performs as well as a mere smart human being? Those things are currently entirely outside of human possibility, so if we’re just getting human performance from AI, they can’t rationally be expected to be near-future possibilities with AI. If you think that LLMs will soon become so powerful as to dramatically outperform human beings, then that’s a different definition of AGI and the exact standard Roose says is too high, isn’t it? If on the other hand you think those wild predictions are stupid, say so! But that, too, is AI skepticism, even if it’s of a different order of magnitude from mine. Which, again, gets to the sense that there’s something low-status about casting doubt on AI’s more outsized claims.
The kind of eye-rolling condescension found in Newton and Roose’s piece is very common to this space. But look at what AI is actually used for right now - making imperfect images, summarizing PDFs, writing shitty college essays, generating error-ridden automated subtitles for YouTube videos, masturbating to a somewhat-more sophisticated chatbot, saving people time on menial secretarial tasks that they then waste watching TikTok for hours - and compare it to the technologies which are disdained by the inescapable “more important than fire or electricity” claims - curing disease on a mass scale with vaccines and antibiotics, utterly transforming the meaning of geographical scale with the internal combustion engine, closing the gap between people on opposite sides of the world with cheap and instant long-distance communication, turning extreme heat and cold into not just survivable but comfortable temperatures, or (to switch the valence of technological importance) creating viruses that have the potential to exterminate significant percentages of the human species…. I simply do not believe that the average human being is living a fundamentally different life than they were prior to the rise of LLMs, while electrification and modern sewer systems absolutely did change human life on a fundamental level. That’s the scale that AI maximalists insist on using, after all. And even the most potentially revolutionary applications of LLMs, like protein folding, remain much more matters of projection and prediction than they do of actual, this-is-happening-right-now reality. Some predictions don’t come true.
I look at that gap, between how the world has been transformed by technology in the past and how it’s being transformed by LLMs today, and the scope of the AI hype right now, and I feel very confident that I’m not the crazy one, that I’m not the extremist. I don’t think filling Facebook with slop for the idiot uncles of the world compares to, say, the affordances of GPS, which in turn is less revolutionary than the automotive technologies that allow you to use your GPS to go to distant places. Sorry! All I’m asking is this: those in the media who have the reach and resources to find the qualified skeptics out there should do much more to highlight those voices and make them more prominent in public discourse about AI. I appreciate that Hard Fork has had on Sayash Kapoor of AI Snake Oil, for example, and wish that such appearances would inform their apprehension of the immediate future a little more. I am asking for a rebalancing, not an end to AI hype, which is not possible right now. Surely a world with AI soda is one where what’s needed is more skepticism, not less. Right?
I again invite AI maximalists to take the shitting-in-the-yard test - going one month without using LLMs and then one month without indoor plumbing, then assessing which was really harder to experience. I want to stress that I think this is an interesting challenge that someone could actually do, it would amount to a kind of highly marketable journalism that’s very popular in the world of algorithmic platforms, and it would actually have real social and informational value. I want someone to actually do it and record their experience! I concede that people in the mainstream media are unlikely to do so. So why doesn’t some ambitious YouTuber or podcaster who’s looking to break into the big time try the challenge? It’s literally free to do it. And, if you document it on your newsletter or YouTube or TikTok or whatever, I will do whatever I can to publicize your efforts. Think of the content! I really think there’s some serious viral potential here; the aggregators would go wild for that kind of thing, I’m sure of it.
You do have to abide by some rules.
You may not use indoor plumbing in any form. This isn’t just about forgoing indoor plumbing in your home; you’re forgoing it everywhere. That means you may not dodge the difficulty by going to the bathroom in a McDonald’s or whatever. This also includes the consumption of foods cooked in restaurants or at the homes of others if the use of water transported via indoor plumbing is involved, which it will be for pretty much anything you could order, so you’re probably going to be preparing pretty much all of your food for this month. No peeing at the gym, no washing your hands anywhere with a spigot. Probably want to buy handwipes in bulk!
We’ll forgive the use of modern irrigation and related technologies in the production of food. Of course, if you buy food from the supermarket and prepare it yourself, you’re still taking advantage of modern plumbing in the sense that at some point the production of that food utilized these technologies, etc. But we’ll forgive that. You don’t have to worry about how your almonds got irrigated or how your lettuce was kept fresh in the supermarket. You gotta eat, after all. But if you want to boil water to cook with at home, you better get that water from somewhere other than the tap.
Water procured from a well or natural source is best, but you can also buy water from a commercial source if you can prove that no public indoor plumbing infrastructure was used in its collection, bottling, or delivery. To be kind, we’ll say that the various processes involved in the distribution of bottled water that are related to indoor plumbing are OK so long as they are only used on-site - that is, water can be extracted from a natural source, filtered and cleaned, bottled, and distributed with modern technology so long as it is not transported or processed with any kind of public/municipal/mass plumbing system. So while it’s best to find an Airbnb with a well for this exercise, you can buy a ton of gallon jugs of spring water if you need to, so long as you make a good-faith effort to ensure the above conditions are true. (Please don’t drink from a pond and die from giardia for this exercise.)
Honor the spirit of the exercise. No cheating by bathing in a public fountain, no pooping in a Port-a-Potty that’s no doubt cleaned in part with water delivered through a conventional plumbing system, no letting ice cubes made from municipal water melt…. You’re on your honor here.
YOU TAKE THE SHITTING-IN-THE-YARD-CHALLENGE AT YOUR OWN RISK. freddiedeboer.substack.com, its proprietor, the Substack corporation, and all related affiliates and subsidiaries bear no legal liability for any health consequences from attempting this challenge! If you get cholera, don’t blame me.
I don’t believe that it’s coherent to insist that the development of LLMs is more important than humans harnessing fire or electricity, or even that it ranks among the most important technological developments in human history, while being unwilling to trade access to LLMs for access to indoor plumbing. If you think that’s the wrong standard, fine - but you have to argue that way, that is, operate from the assumption that constant claims about the importance of LLMs relative to other technologies are in fact not credible. You have to call out Sundar Pichai as a charlatan, you have to accept that life will go on more or less the same way it did before, you have to stop calling LLMs extraordinary technology and instead regard them as ordinary technology - that is to say, a consequence-bearing technology that won’t fundamentally alter the basic lived existence of the vast majority of people. You may, again, contrast that with the development of HVAC systems, passenger aircraft, modern sanitary conditions in hospitals, or, like, the window. I could be wrong, you know. Maybe we’re about to witness THE RISE OF THE MACHINES. I don’t think so, though. But if you’re someone who thinks that AI is a development on the scale of human control of fire or electricity, you can put your money where your mouth is, for free, and if you’re any kind of content creator, I think your efforts could easily go viral. So come on. Call my bluff.



Freddie, speaking as a scientist who has worked in deep tech for climate.
1. It’s annoying to see the media fall into VC and corporate hype cycles with such breathlessness. The VCs are talking their own book! How dumb do you need to be not to see that? I do hope someone takes your challenge.
2. The Human Genome Project was an enormous milestone. Sequencing technology is steadily transforming medicine - it’s just a rather slow process, because biology is incredibly complicated and expensive to study.
3. This gets to a core issue with how the tech-hypers talk - they need to say it is transformative TOMORROW rather than admitting that real tech takes decades to make an impact and consists of a series of small innovations. They also appear never to have heard of an S-curve, because Silicon Valley despises the idea that the past can teach us anything. (Hence their love affair with neo-fascist ideologies.)
4. LLMs are impacting software engineering rapidly. Those who don’t see that are either ill-informed or delusional. They are impacting hiring today, as a vast amount of software engineering is writing rote patterns - a uniquely perfect fit for LLMs. The AI hypers come from software, so they see this and extrapolate it to ALL FIELDS OF SCIENCE AND TECHNOLOGY without actually understanding them. Because they’re idiots operating from cognitive bias.
5. Or they’re suffering from AI psychosis. It turns out people will sometimes obsessively speak to sycophantic LLMs that mirror them and create a cognitive bubble for their delusions. I have at least one friend who is like this, six hours a day of AI feeding into his personal crackpot theories of archetypes of humans. This is all rather pathetic.
I want to argue against your general position, which you've also written about previously, regarding the relative importance of civilizational advancements.
I grew up in rural China where we shit in a brick privy, which the household, me as a young child included, took turns shoveling out. Water was from a hand pump from a well and stored in big ceramic above-ground cisterns. Food was cooked with wood or coal fires, and we had heating from coal briquettes. There was no electric lighting. The village head's house had the only accessible telephone. A trip to the nearest town or city, "real" civilization as I'd consider it now, was an exceptional event. If you didn't know someone of means, even your options for books were limited to what was incidentally available. My world did not extend far beyond what I could see.
I'm not making a value judgement as to which lifestyle is "better" in any sense. It's very reasonable to argue that the country farmer of old lived a happier, more fulfilling life according to his own sensibilities. But ask what you are trying to compare. If I never had plumbing and similar comforts, it would definitely be unpleasant, and a lot more of my time would be taken up with the work of maintaining the basics of life, but I'd still recognizably be me, except I'd be shitting in a hole in the ground instead of a toilet. If I never had whatever we want to call the advancements of the information age, I wouldn't have my principles, thoughts, or any of the things that I consider integral to my mind, my person. It's not all sunshine and roses but those things are mine in the truest sense - material circumstances cannot take them away and I would not swap them for another set of knowledge and experiences that no longer form me.
You can ask many more things like this. Would you rather be one of the many people on the wrong side of a higher mortality statistic or give up your self? Would you rather be pockmarked from smallpox or give up your self? Would you give up your fundamental sense of being and self, even if you were faced with death?
Though only for a short time, as a child I experienced not having each of the two groups of technology you've spoken about. I disagree with your assessment of which matters more.