
I just asked GPT, and it gave the right answer for the wrong reasons:

The ball broke the table because it was made of concrete.

The ball broke the table because it was made of cardboard.

Me:

What does "it" refer to in the above two sentences?

GPT:

In the first sentence, "it" refers to the ball, since the ball is the subject of the first clause ("The ball broke the table") and is the most recent singular noun preceding the pronoun "it".

In the second sentence, "it" refers to the table, since the table is the object of the second clause ("...because it was made of cardboard") and is the most recent singular noun preceding the pronoun "it".
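If you want to reproduce this yourself, here's a minimal sketch using the openai Python package (v1 client); the model name and parameters are just illustrative, and you'd need an OPENAI_API_KEY in your environment:

```python
# Minimal sketch of the pronoun-resolution experiment described above.
# Assumes the openai Python package (v1 client); model name is illustrative.
from openai import OpenAI

client = OpenAI()

sentences = [
    "The ball broke the table because it was made of concrete.",
    "The ball broke the table because it was made of cardboard.",
]

prompt = (
    "\n\n".join(sentences)
    + '\n\nWhat does "it" refer to in each of the above two sentences?'
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat model can be substituted
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # make the answer as deterministic as possible
)

print(response.choices[0].message.content)
```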


We had a brief exchange on a previous thread, but I find this one interesting because it’s the most fundamental disagreement I have with you, and with many other thinkers I respect deeply (and I count you among them), including Steven Pinker, Chomsky, Dennett, and many others. It would take at least a full essay, maybe even a damn book, to lay out why I went from ‘AI is a joke’ to ‘holy shit we’re all replaceable’ after I first worked with machine learning in 2016, but I’ll throw one point out here.

Whether AI is conscious or sentient is at its root a philosophical discussion, a fascinating one, but in terms of concrete impact on humans it’s sort of irrelevant. It doesn’t matter if it’s conscious; it matters if it’s competent. Get Stable Diffusion and a newer model set, see the deepfake capabilities, and ask how much you can trust the images you see on the internet. GPT-4 makes ChatGPT look like a joke, and ChatGPT could easily replicate every opinion article written by a boilerplate left-liberal opinion writer, as long as you know prompt engineering. I’m always cheerful for discussions on this, even though I axed my plans for a philosophy postgrad to go back into cybersecurity the instant I realized the disaster this would cause for the humanities.

Also, Eliezer has completely lost the plot. He’s made good points in his time about how, logistically, an AI could jump an air gap via deception, but among his 5,000 reasons for how something more intelligent could kill us all, in 10 years I haven’t seen him give a sensible reason for exactly why it would kill us all. Lex Fridman’s interviews with Sam Altman and then with Eliezer are worth watching; one can see Eliezer is truly losing it.


One aspect of being human is being an “incredibly sophisticated autocomplete”.


Perhaps I'd have slightly more sympathy for Yudkowsky if he hadn't spent the past decade and a half persuading people previously interested in finding effective treatments for diseases and reducing global poverty that they should instead be researching the very AI systems he now thinks should be destroyed by airstrikes.

But instead of a reasonable conversation about likely risks informed by researchers who understand the current impacts of AI, we're debating how seriously we should take the night terrors of a bunch of people who read pseudo-Bayesian Harry Potter fan fiction. We are indeed well past the realm of reasonable discussion of any of this.


For the technology to be any good, I think we need to combine learning with human input.

Recently, I tried using AI to generate an image of a bowling ball. It doesn’t understand the number or placement of the holes, probably because photos of real bowling balls appear to have 0, 1, 2, or 3 depending on the angle. So it gives me balls with one big hole, or 4+ holes in a random pattern.

I just want to tell it “there are always three holes.” Just like I want to tell it “Humans never have more than two arms. Literally never. Maybe you thought you saw it, but that 3rd arm belonged to the person’s friend.”
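There’s no way to hand the model a hard rule like that today; the closest workaround is nudging it with a negative prompt. Here’s a rough sketch, assuming the diffusers library and a CUDA GPU (the checkpoint name is just illustrative, and there’s still no guarantee the hole count comes out right):

```python
# Rough sketch: steer an image model with prompt + negative prompt, since you
# can't state a hard constraint. Assumes diffusers and a CUDA GPU; checkpoint
# name is illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a glossy bowling ball with exactly three finger holes, studio photo",
    negative_prompt="one hole, four holes, many holes, extra holes, deformed",
    num_inference_steps=30,
).images[0]

image.save("bowling_ball.png")  # still no guarantee the hole count is correct
```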

Anyway, it’s very depressing to see shitty AI replace real content. The whole reason I was trying to make a bowling ball is because my app (Photoleap) replaced its stock photo collection with AI.

So, great… no more licensing fees for Photoleap. I’m sure it will save them money in the long run. But I can’t even get a bowling ball, let alone a person who doesn’t look melted and deformed.


Maybe, or maybe “Theory of the world” is not as well defined as you make it out to be. Maybe it’s just an emergent property of sufficiently sophisticated statistical models.

The academics you’re citing have consistently made predictions that have been proven wrong. Their ideas about intelligence are increasingly considered mistaken.

And you’ve misunderstood the significance of this moment. It’s not just that AIs like ChatGPT are good autocompletes; they have fundamental properties that no predecessors had, and these properties appear to emerge from nothing more than increases in scale. This includes, in the case of GPT-4, the emergence of agentic behavior: the ability to make and execute plans. Google put an LLM into a robot and it can perform novel tasks on which it wasn’t trained.

No, it’s not sentient. Maybe there won’t be a use for them that really transforms society. But you’re vastly underestimating what can be achieved with a simple input-output paradigm. The fact of the matter is that LLMs *are* something new, poorly understood, and potentially very powerful, unlike anything that has existed before.

I wish I had more time for a deeper response, but as someone working in AI, I can say that for all the hype, there’s something very real happening here.


I for one welcome our new robot overlords.


I must be one of the 1%. I thought the table was made of concrete. Unreinforced concrete is brittle. A steel ball dropped on it could easily break it.


Someone in one of the AI Discords I follow, just a fellow experimenter in the prompting arts, said something that stuck with me. "LLMs won't change the world in the way that fire or electricity did. But they might in the way Photoshop did." I'm inclined to agree. At the very least, I'd argue this is within the realm of reasonable optimism.

All this hype is great for the big AI players' profits, not to mention an excuse for them to keep their secret sauce secret. (Because it's "dangerous", you see, and they're "worried about it". Y'know, precisely worried enough to keep things secret, but not so worried they'll refrain from releasing a potentially profitable product.)

As someone who believes in the more modest but very real potential of large language models as tools, I don't think all this hype is good for AI in general. It sucks oxygen away from interesting neurosymbolic work; the sort of thing that, even if it doesn't give us AGI, will at least give us more trustworthy tools. And it sets up a backlash when the promises go unfulfilled and people see what we have right now as a "failure" because of what it isn't, rather than seeing what it is.


If I’m understanding correctly, GPT-2 couldn’t do addition. GPT-3 can multiply 2-digit numbers. GPT-4 can do calculus at a high-school level. It was never programmed to do calculus; it read some books and websites about it and figured it out.

It seems like there is more going on than a fancy autocomplete.
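If you want to check the multiplication claim yourself, here’s a rough sketch that samples random 2-digit pairs and compares the model’s answer against the true product (assuming the openai Python client; the model name is illustrative):

```python
# Rough sketch: test 2-digit multiplication by comparing model output to the
# true product. Assumes the openai v1 client; model name is illustrative.
import random
from openai import OpenAI

client = OpenAI()
correct = 0
trials = 20

for _ in range(trials):
    a, b = random.randint(10, 99), random.randint(10, 99)
    reply = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[{
            "role": "user",
            "content": f"What is {a} * {b}? Answer with the number only.",
        }],
        temperature=0,
    ).choices[0].message.content

    # Strip everything but digits before comparing, in case the model adds words.
    digits = "".join(ch for ch in reply if ch.isdigit())
    if digits == str(a * b):
        correct += 1

print(f"{correct}/{trials} products correct")
```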


The problem is not that machines will ever think like humans but that humans are coming more and more to think like machines. Is someone like Trump or DeSantis human anymore, or have they become transmuted into algorithm-driven robots? Machines can mimic human behavior and speech patterns; people like Trump and DeSantis don't even do that anymore.


The notion that LLMs are simply stochastic parrots is incredibly reductionist and cannot account for many of the emergent phenomena observed, like in-context learning. You can even give ChatGPT completely novel nonsense words it's never seen before, assign them a value or meaning, and it will dutifully use them to solve a problem. An example would be executing a breadth-first search on an adjacency list where the graph nodes are made-up terms not in the training corpus. GPT is still able to recognize this and correctly solve the problem, despite never having seen the problem before.

While I don't believe ChatGPT is completely "sentient", I reject the notion that one must experience physical phenomena to be sentient. ChatGPT not having literally tasted peanut butter doesn't make its conceptual understanding of peanut butter invalid. Was Helen Keller not sentient because she could never see animals, her family, or the sky? Was her conception of the sky fake because she could never see it? Or her conception of cows incorrect because she could never see one or hear it moo? The ability to physically interact with things you have knowledge of was never a requirement for sentience.

GPT-4 has been shown to be able to take images and say what the effect of various actions on the objects in the image would be (e.g., asked what happens if you cut the string, it answers that the ball will drop). It's clear GPT has a conceptual model of the mechanics of the universe that lets it solve problems and exhibit a "common sense" that many prior models have failed to produce.
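Here's a minimal sketch of the sort of problem I mean: a breadth-first search over an adjacency list whose node names are invented nonsense words, so this exact instance can't have appeared in any training corpus. The words and graph are made up for illustration; the printed output is the reference answer you'd compare a model's response against.

```python
# Sketch: BFS over an adjacency list with invented nonsense node names,
# the kind of novel instance described in the comment above.
from collections import deque

graph = {
    "florbix": ["zundel", "qarvop"],
    "zundel":  ["murdlek"],
    "qarvop":  ["murdlek", "snibbet"],
    "murdlek": ["snibbet"],
    "snibbet": [],
}

def bfs_order(adjacency, start):
    """Return nodes in the order a breadth-first search visits them."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in adjacency[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

# Reference answer for the made-up graph:
print(bfs_order(graph, "florbix"))  # ['florbix', 'zundel', 'qarvop', 'murdlek', 'snibbet']
```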

https://www.erichgrunewald.com/posts/against-llm-reductionism/

And for what it's worth, GPT-4 answers your genus/species question correctly: https://i.imgur.com/YZLfYVj.png


Great post. I have been finding all of the breathless pieces about ChatGPT extremely annoying, and this does a good job of explaining why.


Was the trivia question in the header image inspired by today’s NYT crossword puzzle, perhaps?


In a perfectly rational world, Freddie's description of what is or isn't going on behind AI's facade would make perfect sense. But we already know how this turns out. We love to cast our every fantasy upon products, especially tech, in the hopes that we'll finally find happiness. Like it always has, the media hype is setting us up. Just keep the WarGames fans very far away.
