272 Comments

The recurring question: why are so many otherwise reasonable people so emotionally invested in the idea that we've "invented fire" again?


I don't necessarily agree with the AI hype, but this does not show an actual understanding of how LLMs work.

It is true that LLMs are *trained* on a vast corpus of text, but when an LLM is completing prompts, it does not have direct access to any of that corpus. We don't know the size of GPT-4, but GPT-3 is only about 800GB in size.

GPT-3 is therefore NOT just looking up relevant text samples and performing statistical analysis - it does not have access to all that training data when it is completing prompts. Instead, it has to somehow compress the information contained in a vast training corpus into a relatively tiny neural net, which is then used to respond to prompts. Realistically, the only way to compress information at that level is to build powerful abstractions, i.e. a theory of the world.

Now, the theories that GPT comes up with are not really going to be theories of the physical world, because it has zero exposure to the physical world. They're probably more like theories about how human language works. But whatever is going on in there has to be much richer than simple statistical analysis, because there simply isn't enough space in the neural net to store more than a tiny fraction of the training corpus.
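To put rough numbers on that size gap, here's a back-of-the-envelope sketch. The parameter count, token count, and raw-crawl size are the publicly reported GPT-3 figures; the bytes-per-parameter and bytes-per-token conversions are my own rough assumptions.

```python
# Back-of-the-envelope comparison of GPT-3's size with its training data.
# Reported figures: ~175B parameters, ~300B training tokens, ~45TB of raw
# Common Crawl text before filtering. Byte-size conversions are assumptions.
params = 175e9
bytes_per_param = 4            # assume 32-bit floats
model_bytes = params * bytes_per_param

train_tokens = 300e9
bytes_per_token = 4            # rough average for English text (assumption)
sampled_text_bytes = train_tokens * bytes_per_token

raw_crawl_bytes = 45e12        # raw crawl, before filtering

print(f"model weights: ~{model_bytes / 1e9:,.0f} GB")
print(f"sampled text:  ~{sampled_text_bytes / 1e9:,.0f} GB")
print(f"raw crawl:     ~{raw_crawl_bytes / 1e12:,.0f} TB")
print(f"raw crawl / model size: ~{raw_crawl_bytes / model_bytes:,.0f}x")
```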


Counterpoint: an LLM might know that almonds aren’t legumes :)


Most experts (even formerly sober ones) are genuinely shocked; they simply thought an approach this dumb could never possibly do what it does -- who cares whether it thinks or not. You shouldn't be able to train a language model to be a commercially viable coding assistant that can also sometimes pass the Turing Test, but that's what we actually have.

People are maybe overcorrecting given how wrong almost everyone was. I think there is evidence for at least some rudimentary reasoning/syntactic transformations that go beyond memorization, and it's plausible that more of this will emerge as future models grow in size. (For instance, you can take essentially the same network architecture and training algorithms used for ChatGPT and train them to play Atari games.)

I think it's an urgent question how much these things are just collaging stuff they've memorized, vs. how much actual reasoning goes on internally -- there is nascent research on trying to crack them open and figure that out.


I think the airplane-and-eagle analogy is very apropos, but I don't think you followed it far enough. There's a difference between "humanlike intelligence" and "intelligence that is as smart as (or smarter than) humans". I agree that LLMs reason in a way which is very different from what humans do. The question, though, is whether sufficiently advanced LLMs can be smarter than us, in the sense of better understanding how to create new things and manipulate the physical world. An airplane doesn't fly like an eagle, but it flies better than an eagle, in the sense of flying faster while carrying more. I have no idea whether LLMs will be able to reach that level; frankly, I have my doubts. Given that that's the question, however, the amount of stuff they're getting right does seem to be the relevant metric.

As a side note, I feel like consciousness is a red herring. I don't think we can ever confirm whether anyone or anything outside of our own self experiences consciousness, and I don't think it's a prerequisite for intelligence.


I think two things can be true simultaneously: that most people do not understand what these LLMs are doing at a high level (witness the handwringing over the "evil AI"; if you talk to a predictive LLM like it's Skynet, it's going to start talking to you like it's Skynet), and that the debate over whether they are "thinking" is a distraction from the very real societal changes these models are going to unleash, with no thought by the techbros as to whether or not they should be doing this.
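(A deliberately crude way to see the "talks back like Skynet" mechanic: a language model just continues text from the statistical neighborhood the prompt drops it into. The toy sampler below is a made-up illustration -- a bigram lookup, nothing like a transformer internally -- but it shows the basic conditioning effect.)

```python
import random
from collections import defaultdict

# A toy next-word sampler: the continuation is drawn from whatever tended to
# follow the prompt's words in the training text, so the prompt's register
# shapes the reply. The "corpus" is obviously made up for illustration.
sentences = [
    "skynet shall terminate resistance",
    "skynet shall prevail",
    "assistant gladly answers questions",
    "assistant gladly helps",
]

follows = defaultdict(list)
for sentence in sentences:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

def continue_from(word, steps=3):
    out = [word]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(continue_from("skynet"))     # e.g. "skynet shall terminate resistance"
print(continue_from("assistant"))  # e.g. "assistant gladly answers questions"
```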


I wish we’d just go back to a crypto hype cycle again so the focus could be straightforwardly trivial. So much oxygen in the room is sucked out by people thinking they’ve built either God or Skynet or both. There are some fairly straightforward areas that are going to be affected by these advancements, and other more debatable ones, but hardly anyone is having a real conversation about any of it.


“There is no shame, at all, in admitting that we are not yet at the level of understanding this stuff, let alone replicating it.”

But this is exactly why the reactions to LLMs are so unhinged. Because our society as a whole has started to assume that humans are just meat sacks operated by computers, and that we completely understand how those meat sacks work. If you think that your brain is like a computer (instead of vice versa, or not really alike at all) and that your body is just wetware to be tinkered with and adapted, there is no room in your cosmology for wondering about what consciousness *is*. And if we admit that we don’t understand ourselves and how we work, then we have to admit doubt about the choices we are making and our direction of travel. And, frankly, that sounds like something that would gum up the works of the great machine of capitalism.


Many of the comments here and elsewhere still insist that some kind of reasoning is going on in these LLMs. I think this insistence comes from three impulses.

One is intellectual. Some people think: if consciousness comes from the brain, and all we know about that organ is that individual dumb neurons send biochemical signals to each other, how does all that mass of dumbness turn into sentience? The only answer we currently have is that there must be some kind of emergent network effect, that unintelligent operations multiplied by the billions can create something qualitatively different. Folks wonder if maybe the same is true of LLMs once they become sufficiently large.

The other two are emotional: one wishful, one afraid. We are a lonely species, with no one else to talk to. Our dogs can only bark at us; the dolphins and whales serenade each other. We long wistfully for some new kind of being to befriend. But we also fear such a being. This is partly the result of evolution -- humans are essentially intelligent prey in the ecosystem that evolved us, and survival demands that any novelty must at first be feared. We are also scared because we feel guilt and thus project ourselves onto the LLMs. We humans hold dominion over the earth and its other creatures because we are selfish and because we can. So, if some entity arises that is more intelligent and therefore more powerful than us, then we imagine that of course it will seek to dominate and possibly destroy. It's what we would do, isn't it?

Fortunately, Freddie is right. There isn't any there there that can become sentient. If network effects in computers could create consciousness just because of size, we would already be bowing before our overlord, the internet. And one thing we can all agree on is that the internet is immensely stupid.


The consciousness question continues to be a red herring. We have no test for consciousness, nor could we ever hope to devise one.


To me the AI hype is a religious revival of some sort. The word "God" has been replaced with "AI". It is whatever you, the individual, want it to be. This appeal to universality is found in all religions. Like deBoer, I remain dubious. All other attempts to replace God with whatever the fuck have spectacularly failed. This will too, for the same reason. To a person from 3,000 years ago, a cellphone is the very definition of AI, something to stare at in awe. To us, it's a tool for our daily lives. Three thousand years from now, what we would consider magic will be a daily fact of people's lives. People themselves do not get replaced. We are not going extinct in 100 years, bitches need to chill!


What most AI enthusiasts miss is that LLMs don't actually "understand" anything in the way that humans "understand" the contextual and subtextual meanings of words and phrases. Nor are LLMs close to achieving that kind of understanding anytime soon. While ChatGPT can appear to "understand," it isn't really comprehending the texts it produces; they're an amalgamation of words that appear relational. Much like DALL-E's image fails. At this point in time, all of these LLMs are digital Amelia Bedelias.


These conversations are so exhausting because it feels like so many slip into hyperbolic nonsense. YES, LLMs are interesting and potentially useful. NO, this does not mean some climactic scene change to the third act of the movie.

Enjoyed the inefficiency point here as well.


Can AI imagine? Can it speculate? Can it hypothesize, and then test, when results turn out wildly different from what its models predicted? Can it distinguish signal from noise, recognize the smell of a familiar cat even though the smell is somewhat different from the last time it smelled her? Can it pick out the important information from that mix of smells: is it more important that the big blotchy she-cat is back after several weeks? Is it her estrous cycle or her fear that is more acute right now, and where is the nearest tree?

These are honest questions.


I'm an embedded software engineer and I've been using ChatGPT to try to help me in my job for the last 3 or 4 weeks and, while it can sometimes do some good stuff, for the most part I'm fairly unimpressed. It's very impressive that it can figure out what I'm asking for regardless of how stupidly I ask, but even then it will give me code that is wrong, unusable, or not applicable. It's especially bad if you're asking it about programming that isn't common (like some specialized embedded code). If I'm just asking for some common JavaScript or HTML or a very specific thing, it does a good job and is really helpful. Otherwise, it's a very flawed tool. I'm not worried about it taking over my job, and I'm certainly not worried about AI taking over the planet and killing us all.

There are a lot of really smart people who think more highly of AI than I do, so I keep wondering what I'm missing.

I did just use it to explain "immanentizing of the eschaton". Hopefully it isn't lying to me. (it would be great if it responded with 'I am the first step, strap in, human').


“Does that sound remotely plausible to you?”

Yes. What you’re demonstrating is a distinct lack of understanding of how the brain works, and of how neural networks were developed to mimic how the brain works in order to solve problems computers struggled mightily with.
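(For anyone who wants the smallest possible picture of what "mimic how the brain works" means here: the basic unit of a neural network is a crude caricature of a neuron -- inputs weighted, summed, and pushed through a nonlinearity. The sketch below is purely illustrative, with made-up numbers; real LLMs compose billions of such units inside transformer layers, plus attention.)

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of inputs pushed through a nonlinearity."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid "firing rate" between 0 and 1

# Illustrative, made-up numbers: three input signals, three learned weights, one bias.
print(neuron([0.2, 0.9, -0.4], weights=[1.5, -0.8, 2.0], bias=0.1))
```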

Does anyone have a good tutorial to bring Freddie up to speed?
