89 Comments
ThePossum 🇬🇧

"I think Douthat is entirely aware of how his role at the NYT ultimately fails both his very real root conservatism and his revulsion towards the excess and ugliness of the modern American right-wing, but he’s content to work in that space as an expression of his longing for a different, better American politics."

I guarantee you that no one on the right thinks Ross Douthat is right wing. Uniparty member, sure, but not right wing.

Eh, Not Worth The Trouble

That you're saying he's not right wing...well, then, you're basically flipping the whole "liberals/progressives/etc. aren't left wing" argument. And that's hilarious.

ThePossum 🇬🇧

No, actually I'm not doing that. But thanks for playing.

Eh, Not Worth The Trouble

No, no, no, you are. Don't try to pass it off with a vague pithy remark, because it only makes it worse.

ThePossum 🇬🇧

Oh noes! Makes it worse! For you, maybe.

Eh, Not Worth The Trouble

So what you're saying is that you're just stuck up and unwilling to admit the possibility that the right can be just as purity-driven as the left at times.

Because, really, nobody on the left uses the "uniparty" argument at all. That is entirely a right-wing version of "well, *actually,* he's not one of us."

ThePossum 🇬🇧

Political labels aside, Ross Douthat is a person who believes in a breadth and depth of government interference, locally and globally, that runs counter to what those of us who want minimal government interference believe. Of course there are people "on the right" who share Douthat's position; we call them RINOs, globalists, members of the uniparty, etc.

No one passes a purity test, I'll give you that, but to claim that Douthat is in any way right wing is laughable.

ThePossum 🇬🇧

Just following up here.

I figured you would be the type of person who gets all bent out of shape, starts down the just-asking-questions slander trajectory, and then refuses to reply when someone actually responds seriously to your comment.

Thanks for confirming my bias against shallow, humorless scolds.

David Roberts

Well stated. The transformative thing speaks for itself.

Bill

I asked ChatGPT what it thought of this statement: "however human thinking works, it does not work by ingesting impossibly large datasets, generating immense statistically associative relationship patterns and probabilities, and then spitting out responses that are generated one token at a time, so that we don't know what the last word in a sentence (or the third or fifth) will be while we're saying the first." Here is the reply (it kept using "we" to refer to humans; I changed that to "they" for clarity).

I think de Boer is doing two things at once there: making a fair negative point about current LLMs, and then smuggling in a much strongerβ€”and much shakierβ€”positive claim about human cognition.

The fair part first.

He’s right that humans don’t learn the way LLMs do. [They] don’t ingest web-scale corpora, optimize billions of parameters, or build competence by gradient descent over token sequences. And [they] don’t experience [their] own thinking as token-by-token sampling from a probability distribution. So if the claim is β€œLLMs are not literally operating the way human brains operate,” that’s trivially true and not very interesting.

Where it goes wrong is the move from mechanism to phenomenology. De Boer leans hard on the introspective intuition that β€œwe don’t know what the last word will be while we’re saying the first,” as if that decisively separates human thought from probabilistic sequence generation. But introspection is a very weak guide here.

A few points that undermine the contrast:

1. Speech planning is incremental.

Psycholinguistics has shown for decades that humans plan utterances only a little aheadβ€”sometimes just a word or two. [They] don’t generally have full sentences pre-formed before [they] start speaking. In that sense, humans actually look more like token-by-token generators than de Boer wants to admit.

2. Statistical learning is central to human cognition.

Humans are extraordinarily sensitive to statistical regularitiesβ€”phonotactics, syntax, semantic associations, social cuesβ€”learned from large (though not web-scale) datasets of experience. The scale differs, but the kind of learning is not alien.

3. β€œKnowing the end in advance” is not the hallmark of intelligence.

Mathematical thoughtβ€”something you know wellβ€”often proceeds without knowing where a proof will land. You begin an argument, follow local constraints, and only later see what the final form must be. That doesn’t make the process unintelligent or merely associative.

4. Token-by-token generation is an implementation detail, not a cognitive thesis.

LLMs generate text one token at a time because that’s how the architecture works, not because intelligence is token-by-token guessing. The same behavior could, in principle, arise from very different internal representations (and likely does in humans).

So I’d say: de Boer is right to resist the mystical inflation of LLMs into β€œdigital minds,” but he overplays the disanalogy by treating human thought as more teleological, unified, and pre-scripted than evidence supports.

There’s also a subtle rhetorical move in his phrasing: by emphasizing β€œimpossibly large datasets” and β€œimmense statistically associative relationship patterns,” he’s cueing the reader to hear β€œmere statistics” versus β€œreal thinking.” But that framing ignores the possibilityβ€”uncomfortable for a lot of peopleβ€”that real thinking might itself be deeply statistical, just embodied, goal-directed, and constrained in ways current models are not.

If I had to compress my take:

LLMs don’t think like humansβ€”but neither do humans think the way we feel like we think.
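
To make the token-by-token point concrete: below is a minimal toy sketch of autoregressive sampling. The vocabulary and scores are entirely made up and stand in for billions of trained parameters; a real LLM's internals look nothing like this, but the generation loop has the same shape.

```python
import math
import random

# Toy next-token scores: for each current token, unnormalized scores over
# possible continuations. (Invented numbers standing in for a trained model.)
SCORES = {
    "the": {"cat": 2.0, "dog": 1.5, "<end>": 0.1},
    "cat": {"sat": 2.5, "ran": 1.0, "<end>": 0.5},
    "dog": {"ran": 2.0, "sat": 0.8, "<end>": 0.5},
    "sat": {"<end>": 3.0},
    "ran": {"<end>": 3.0},
}

def sample_next(token):
    """Softmax over the scores, then draw one token. The 'model' never holds
    a finished sentence, only a probability distribution for the next step."""
    items = list(SCORES[token].items())
    weights = [math.exp(score) for _, score in items]
    return random.choices([t for t, _ in items], weights=weights)[0]

def generate(start="the"):
    out = [start]
    while out[-1] != "<end>":
        out.append(sample_next(out[-1]))
    return " ".join(out[:-1])

print(generate())  # e.g. "the cat sat"; another run may end differently
```

In this loop the last word genuinely isn't fixed when the first is emitted, which is exactly the behavior Freddie describes; whether that disqualifies the process as thinking is the argument being had.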

Sister Trout

So you seriously just copy-and-pasted a GPT reply, about this topic, without putting any of your own brain juice into the mix, and got likes for it.

Bill

Oh, I've put plenty of brain juice into this question. Did you read ChatGPT's reply? I think it's worth engaging with.

Sister Trout

No, I come to comments sections to engage with flawed, messy, sometimes hilarious humans. Tell me what you think, Bill.

Bill

Sure. I think the statement of Freddie's that I gave to ChatGPT is typical of AI anti-apologists (is that what you call them?). Calling something a "stochastic parrot" or a "next token predictor" is thought to be an argument that AIs cannot possibly be doing something like human thinking. And sure, the way an AI works is probably very different from the way the human brain works (although read ChatGPT's points on that if you can bear to). But for all the mocking of mysterian AI apologists like Ross Douthat, which I whole-heartedly applaud, I think the position that human thinking is some special thing that a machine can't possibly do is equally mysterian. In the end it's a religious belief. As someone who believes that we are nothing but the products of evolution, I have to accept the consequence that our consciousness, our felt awareness of our own existence, is also a product of evolution, and therefore lives in the physical architecture of our brains. It follows that there is no reason to believe it couldn't exist in animals or machines. Just to be clear, I'm not saying that I believe AIs are conscious. I am saying that I don't know, and neither does Freddie.

By the way, AIs are also on occasion flawed, messy, and hilarious. I enjoy chatting with them for that reason.

Philippe Saner

"the way an AI works is probably very different from the way the human brain works"

There's no probably. We can see the differences. Particularly since, well, we made the LLMs. They're not very mysterious.

I actually do believe that we'll see mechanical people one day, but LLMs are not that and can't ever be that. It'll take another breakthrough - or ten.

BronxZooCobra

"They're not very mysterious."

? The people who designed them don't really understand how they work.

ronetc

Maybe the likes were for the self-written and wryly-interesting last sentence.

Sister Trout

Well that's embarrassing, lol. I offer the most sincere form of midwestern apology, ope.

Ethan Cordray

As usual, here's a demonstration that the main product of LLMs is a bunch of text that nobody will read.

sjellic2

I find the reactions to Moltbook incredibly bizarre. The training set for these LLMs is the corpus of text on the internet, a huge body of which comes from Reddit. Recreating Reddit patter in infinite quantity is the most basic, trivial function for this kind of technology. How do people not (pun intended) grok that?

Alcibiades

What I grok is that almost all of humanity is capable of little more than Reddit-level patter. And now AI is better at that too.

One of the criticisms of LLMs was always that humans uniquely benefited from network effects, society, etc. Much of what made us special existed in institutions, in the connections between us.

Seems like that theory could quickly fall apart now.

TheOtherKC

> If this really is the time of the machine god, the machine god will assert itself the way a god can and no one will have to argue for its divinity. That’s kind of the whole point of being a god. Right?

As much as I agree with the heart of this essay, I think I can partially reply for our Adeptus Mechanicus friends. Living in a world where a god is coming, what truly matters is what one does *before* its reality is undeniable. One must curry favor and show faith when others doubt, because when the machine god "asserts itself in the way a god can", it's already too late. When old YHWH descends from the clouds, he will have already decided who shall be at his left hand and who shall be at his right. When the machine god asserts itself, it will already be too late to guide its moral and ethical development (or earn favor); it will have already decided mankind's fate, and if mankind has not taught it well, they shall be destroyed.

...no, I don't believe any of what I'm writing, but I've always been fascinated by these kinds of belief systems, in an amateur way, and think my argument is a fair summary of how this lot are thinking.

RL

Damn fine writing.

James K.

I have no particular knowledge of what is coming, but your argument is flawed in a way that seems really obvious: what if we are just in the early stages?

The automobile absolutely did not insist on its own transformative power in 1880. It took decades before it became clear it was going to revolutionize personal transport. Netflix's vision of streaming, which they had all the way back in 2000 or so, did not seem like it was going to upend the entire movie/TV model while I was still a 14-year-old buying the DVD of Idle Hands. The refrigerator had serious competition from the ice box and general inertia.

You may be 100% right about AI’s limitations and future, Freddie. But it’s weird to say β€œThis tech isn’t going to be transformative or revolutionary because it’s not right now! DUNK!”

Freddie deBoer

This is a breathtaking example of taking what should be a brutal criticism and representing it as a defense. "This might happen, we don't know" is not proof of anything! You can make that claim about literally anything! We might be at the beginning of the age of alchemy, where we learn to magically transmute lead into gold. But if you take that completely evidence-free supposition and then use it to transform the American economy, you're doing something rather ill-advised, wouldn't you say?

James K.

Ok, but we don't have a philosopher's stone, right (other than a delightful British novel that you may have heard of), so that's pretty silly as a comparison. If someone invents a philosopher's stone that *right now* only has 15% efficiency in lead-to-gold conversion, it's fair to speculate that in the future it might make gold mining obsolete, right?

That's why my comparisons were to things that actually existed, but were in their infancy. The car being the perfect example. There was nothing transformative about 1880s automobiles. And there were people who said it was a useless bauble that could never compete with horses and trains. And again, I'm not saying you're wrong. AI/LLMs might be, to keep the transportation lens, the Segway. The Segway was hyped as a revolution in personal transport and that obviously did not happen.

But LLMs have been mainstream for like 2-3 years. It is very weird to so arrogantly and dismissively claim that they are not transformative yet and therefore they never will be. I started using Facebook in 2004. Its powers to shape media, opinion, and elections were not clear in 2007!

Philippe Saner

You can always play what-if. What if a certain off-label nootropic is gonna revolutionize society by making us all geniuses? You can't prove it won't!

To claim that something's gonna be revolutionary, you should make that argument explicitly.

James K.

Surely you see the paradox: the counterargument is similarly limited, because by that logic you can't prove it WON'T be revolutionary.

That's what makes FdB's argument so weird. He's really mad at people for forecasting that AI will be a game changer because it isn't one NOW. Hence my argument about cars in 1880.

Philippe Saner

Would you be willing to accept similar logic for the potentially revolutionary nameless smart-drug?

James K.

If the drug actually existed and was already capable of doing something new, sure.

- Cars in 1880 existed and had internal combustion engines.

- AI in 2026 is already such that I do not assign any take-home essays to my students anymore because "What is the appropriate balance of power between the executive and legislative branches in foreign affairs" can be answered by an LLM in two seconds.

So in both cases, we HAVE the technology. So this isn't alchemy or teleportation. But it's very new. So reasonable people can differ on whether AI will change the world, or be the equivalent of a better Google. What's bizarre to me is FdB mocking anyone who thinks the former.

So if a new pharma product has shown that it can drastically improve Algernon's IQ, then yeah, it's fair to speculate about future application. Why is that even controversial?

Jeff Dewey

Freddie did not say never. He said it might, but that he is sick of hearing about how great it may be some day, and of people pointing to current AI achievements as transformative. Me too.

Liam

I think if we're going to accept "maybe we're in the early stages of an AI takeover" as an argument, you're also going to have to give equal credence to the idea that God Himself will soon appear to scatter our languages as punishment for our hubris. At least there's some evidence, in terms of testimony, that this has happened before; there is considerably less evidence for the singularity.

James K.

The invention of the internet would seem like a pretty logical parallel right? It really doesn't seem that odd to say that AI might change humanity, if not on the level of the printing press, on the level of 1990 digital technology to 2005 digital technology, right?

Liam

It's not odd to say that anything *might* change humanity, though. What's odd is the conviction that this development right here is it, with no real evidence. So far what we've got is a thing that is very good at making writing that no one reads anyway, for bureaucratic purposes; making that sort of writing more efficient is not a particularly noteworthy change to the human condition, as we already generate that sort of text all the time in our society.

So far it has failed to produce any meaningful results. You can make a not particularly great movie, in 10-second increments. So can a kid with a Super 8 camera and a couple of friends. You can make pornography, again not particularly great pornography. So can anyone with access to a camera and some unhappy people to help make it. You can make text that says nothing, according to a template; millions of people are employed to do this already. So I guess we can fire them? Only, we could already fire them; Elon fired ten thousand of them at Twitter with no real consequences, because their work output is not why they were employed in the first place. The reasons they are employed remain just as valid, post-AI revolution.

There are a lot of things AI might come to do in the future, but again I could go to bed tonight and see a vision of a flaming sword and my armies might sweep across the nations of the world to unify them under my iron fist. This is, to my mind, very unlikely, yet it remains the more likely possibility; multiple people have done it historically, while AI has yet to do any of the cool shit being promised.

James K.

*Shrug*. The lightbulb in 1840. Cars in 1880. Fridges in 1920. The internet in 1995. Social media in 2006.

We're like 3 years into the mainstream existence of AI/LLMs. I'll refrain from mocking people who forecast that it will be a big deal in our lives based on 3 years of data.

Liam

AI is responsible for none of these things, dude. Also, each of them solves a specific problem on day 1. "Your food goes bad slower when it's cold. What if we could keep food cold?" is a realistic problem to solve. "It's really dark when the big fire in the sky goes away, and there aren't enough whales to keep all the lamps lit. What if we used bamboo filaments in a vacuum?" was lighting the darkness the day Edison built the first one.

"What if we made a smart machine that solved all the problems and could do anything, man?" is obviously different. Again, what if God Pantokrator came down and ended hunger? Make any case at all for why this is a worse bet than AI.

episodenull

Unfortunately, the scale at which AI can do all of that mediocrity is what will have the greatest effect on our world. My wife already watches AI cat slop on YouTube, the kind of thing where cats in clothes go buy shoes or whatever; she knows it's AI and, I have to admit, some of it is clever and cute. But those types of videos a) siphon money to the people who generate and post them, b) crowd out non-AI content because AI can make so much of it so much faster, and c) condition people to reward content that's more fantastical or outrageous than reality. How can an amateur filmmaker with an iPhone compete?

PRZ

Agree with everything about what LLMs really are and that they're not gonna become "intelligent."

You linked that book on the printing press; I read it, and your take on AI sure seems similar to the takes on the printing press at the time: it's just propagating false information, etc. My guess is the AI transformation will have nothing at all to do with how human it's going to become (as you correctly argue) but will happen because it will provide more humans with more information more efficiently, like the printing press did. The downstream effects, new medical tech, etc., are impossible to predict, but if we use the history of the printing press as a guide, I think we will see a lot of change.

coords1306

I'm not so sure we know enough about how we do our own thinking to dismiss what LLMs do as mere pattern matching, and so less valuable. Pattern matching seems an important part of what we do as well. I read somewhere that we mostly learn by imitation more than any other method: seeing an external pattern and trying to match it. I personally feel AI has given a boost to my own work. I was surprised when I went from asking it a technical question about implementing something to asking its general thoughts on an email I was sending to a co-worker: it had a pretty good sense of how the email would be received, and it painted an accurate picture of the issues we would have at a company like ours between developers and writers. In that example the limiting factor wasn't the AI, it was how I chose to use it: to help me write code, to compose thoughts and communicate with my co-workers, or to take a step back and consider the big-picture priorities of what I'm working on.

Sister Trout

I fell for a bait-and-switch job ad on LinkedIn that turned out to be a funnel for low-wage AI training. I tried it for about a month. I'm unemployed and running out of savings.

If you can read this, everything you've ever put on the internet - words, images, video - has been used to train an AI. There are no guardrails on what trainers can or can't use to train models, and you have no recourse or way to opt out. Some of the tasks I was asked to do were benign, but many (that I didn't do) asked for photos and video from personal Instagram and Facebook accounts. Some tasks asked to find the same person across multiple years, events, or websites. I think there were 50k active trainers on the project I worked on, and it's several years old. That project is one of dozens. If that doesn't alarm you, I don't know what to tell ya. The AI isn't going to kill us, the companies building AI might, though.

That's what I'd like to see the media reporting on.

Patrizia

Beneath the Ars Technica piece that introduced me to Moltbook, one commenter wrote:

"People don't understand the difference between 'instructing a bot to share opinions, thoughts, wants, etc' and 'instructing a bot to act like it's sharing opinions, thoughts, wants, etc.' The distinction is subtle but important."

So succinct. I agree!

There's no commonly agreed-upon definition of consciousness. Some researchers seem to believe consciousness is some kind of neurologically processed distillation of subjective experience, while others believe it's the product of a functionally very complex relay system with a lot of processing nodes. But even if it is that second one, there's no corollary that ALL such functionally complex systems will inevitably develop consciousness.

Of course, the Singularity does not depend upon AI consciousness. Just AI intelligence.

>>Big announcement coming tomorrow morning<<

You, sir, are a tease! πŸ˜€

Vlad the Inhaler

The part of this discourse that drives me nuts is the assumption of constant, linear, geometric progress: "if AI can already do this, who knows what it will be able to do in 5 years!" It's also frequently encountered when discussing self-driving cars: "if Waymo is already operating a taxi service in San Francisco, just imagine what they'll be able to do in 5 years!"

The answer to the latter, IMO, is "Probably nothing more impressive than expanding their service to San Jose or LA or other places that don't pose the weather problems they've been trying to solve for a decade and don't seem particularly close to cracking." Similarly, with these LLMs, I've been convinced by the Gary Marcus case that (1) the only potential for improvement is with ever-increasing scaling, and (2) scaling is not going to solve, indeed cannot solve, these models' current limitations.

History is full of rapid technological advances that hit a hard ceiling. The advancement in human spaceflight between 1955 and 1969 was astonishing, and if we'd kept improving at the same tempo, I'd be typing this from an Earth colony on Europa. And yet.
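
For what it's worth, the shape of the scaling argument is easy to sketch. Published scaling-law fits take roughly the form loss = floor + a / scale^alpha; the constants below are entirely invented, but the diminishing returns they produce are the point:

```python
# Toy power-law scaling curve: loss = irreducible floor + a / params**alpha.
# All three constants are invented for illustration; only the shape matters.
FLOOR, A, ALPHA = 1.8, 50.0, 0.1

def toy_loss(params: float) -> float:
    return FLOOR + A / params ** ALPHA

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> loss {toy_loss(n):.2f}")
# Each 10x of scale buys a smaller improvement than the last,
# and no amount of scaling crosses the floor.
```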

mm

Yes, and I thought long and hard about how to make some money on the fact that AI fools will spend $2.5T on this shit in 2026. I ended up just moving my investments *away* from the AI sector. I don't have Michael Burry money to short it.

Pete

I like Ross Douthat in general and enjoy his columns, even and especially when we disagree, which is most of the time. But the one recent development that makes me gag is when he writes about the tech right, JD Vance, Trump 2 as if they were birthing a bold new intellectual project, some sort of original synthesis of Christian traditionalism with…AI, I guess, that will lift us out of the decadent doldrums we find ourselves in. This seemed preposterous to me before Trump 2’s political agenda was definitively revealed as tariffs, deportations, and management of personal grievances, but I don’t claim any special powers of insight here, as anyone who followed the news in 2024 could have told you that this was what Trump 2 would entail.

I think the AI column is actually inexcusably terrible. The central concept that we don’t know how big of a deal this will be yet, but it might end up being the biggest deal of all, is fine. I guess. Some probabilities would be nice but that’s not really Douthat’s vibe. But to then tell the audience of the Times to pay less attention to the news, less attention to ICE’s abuses, less attention to Trump’s threats of running for a third term or messing with the implementation of the midterms…well, that simply does not follow. It is borderline shocking to ask your fellow citizens not to care so much about the infringements happening every day to their liberties because the machine god is coming soon. To me this comes off like a dressed up version of the apocalyptic preacher weirdo outside my subway stop.

Maybe the machine god is coming, maybe he isn't. I have no insight here. But if there is a small chance he doesn't come, we should have a think about what our country is going to look like in the long term. What a stupid article.

Leo Marcel Schuman

Well, that was refreshing.

BronxZooCobra

The question I always ask is: how do you think your brain works? It takes in electrochemical signals, those signals flow to neurons, and each neuron sums the total of the inputs it receives; if the total charge exceeds a given threshold, it releases an electrical charge onto the next neurons. It's an entirely physical electrochemical process.

I haven't met anyone who talks like Freddie about AI who has any concept of how their own consciousness manifests itself in the electrochemical computer inside their skull.

I get the sense that when Freddie reads the word "Freddie" he's not grasping how the processing occurs. The rods and cones in the back of his eyes are firing signals down the optic nerve, and the neurons begin processing those signals. None of those neurons has any idea about anything; they are just taking in signals, and if the input signal is great enough they fire and the processing continues. At no point do any of those neurons know anything other than: if the total input charge is greater than X, fire.
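
That "sum the inputs, fire past a threshold" picture is simple enough to write down. Here is a minimal sketch of a McCulloch-Pitts-style unit with toy numbers (real neurons are far messier; this is just the cartoon described above):

```python
# One threshold unit: weighted sum of inputs, fire only if it beats the
# threshold. The unit "knows" nothing beyond this comparison.
def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Toy numbers: three incoming signals with different connection strengths.
print(neuron([1, 0, 1], [0.6, 0.9, 0.5], threshold=1.0))  # 1.1 > 1.0 -> fires
```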

ronetc

My brian thinks he is the brother of Jesus.

Jeff Dewey

Sounds like your left brain has co-opted your right. It's an adroit analysis that leaves out the right brain, and consciousness, for which electrochemical models fail. See Christian Wiman's essay in Harper's, Dec. 2025, for an excellent exegesis on this.

BronxZooCobra

So there is a ghost in the machine? If that's your mental model, then I can see why you'd think an AI can't think. No ghost.

That said, "there must be a ghost in the machine" thinking seems universal on the "AIs can't think" side.

Jeff Dewey

That indeed is my mental model.

BronxZooCobra

As in it's an unknowable metaphysical process? Does the brain act as some kind of interface with some kind of soul in which consciousness resides in a world outside of physical reality?

Jeff Dewey

Well, you're coming closer to describing my belief and experience. I would not say unknowable. The brain is one means of apprehending consciousness, which resides in a world beyond, not outside of, physical reality.

A central tenet of the article to which I alluded is that left-brain logic has become the preeminent measure of "intelligence" and that AI's logic is built entirely on left-brain logic. But the left brain is much less than the whole of human intelligence. Left-brain logic could never govern human beings on its own.

BronxZooCobra

Christian Wiman is the "the Clement-Muehl Professor of Communication Arts at Yale Divinity School and the Yale Institute of Sacred Music." Take from that what you will. One could imagine why he needs to believe in a ghost in the machine.

He certainly has a dog in this fight. If there is no ghost then the need for a Divinity School is greatly reduced.

Gym+Fritz

What you just wrote is akin to saying that maybe the emperor, when and if he gets here, will not have the kind of wardrobe we expect or want him to have. There's a certain type of groupthink that self-generates around momentous expectations that, by now, we should be very aware of. Not our first rodeo; part of our human nature.

No one understands consciousness*. A Turing test doesn't prove consciousness. No one knows with certainty how dependent LLMs are on the humans that created them, directly or indirectly.

I, personally, don't think there can be human-level cognition without emotions and real-world sensory input; if an LLM achieves sentience, will it be similar to that of a dog, a snake, a virus, or a clam?

* Penrose posited a tie to quantum mechanics.