207 Comments

You should try the AI called “NovelAI” - it has a whole story writing thing. You can define all your characters in advance or write them as you go along and it keeps it in local memory as it writes the story. Still suffers from some of the same failures as ChatGPT but it’s an interesting offshoot.


In this context "theory of the world" is just an academic way of saying "soul." It sounds great but doesn't mean anything, and it begs the question of whether humans have a theory of the world rather than just a bunch of associations.

Also, anyone who asserts that ChatGPT's writing is on a college-freshman level hasn't had to read the barely legible stuff college freshmen write and have been writing for years.

Jan 12, 2023 · Liked by Freddie deBoer

You can reliably make ChatGPT fail the Winograd schema test. The trick is to split up the clauses into different sentences or paragraphs. E.g.:

Person 1 has attributes A, B, and C. (More info on person 1).

Person 2 has attributes D, E, F. (More info on person 2).

Person 1 wouldn’t (transitive verb) person 2 because they were (synonym for attribute associated with person 1/2).

ChatGPT doesn’t understand, so it uses statistical regularities to disambiguate. It over-indexes on person 1, because that’s the more common construction. Sometimes it can pattern-match on synonyms, because language models do have a concept of synonymy. But you can definitely fool it in ways you couldn’t fool a person.
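To make the construction concrete, here's a minimal sketch in Python. The people, attributes, and test sentence are invented for illustration, and actually sending the prompt to a model is left out:

```python
# A sketch of the split-clause construction described above.
def split_clause_winograd(p1, p1_desc, p2, p2_desc, test_sentence):
    """Give each person their own sentence, then pose the pronoun test."""
    return (
        f"{p1} {p1_desc}.\n\n"
        f"{p2} {p2_desc}.\n\n"
        f"{test_sentence}\n\n"
        "Who does 'they' refer to?"
    )

prompt = split_clause_winograd(
    p1="The farmer", p1_desc="is poor and hardworking",
    p2="The banker", p2_desc="is wealthy and owns three houses",
    # 'rich' is a synonym for the banker's attribute, so 'they' = the banker
    test_sentence="The farmer wouldn't lend money to the banker because they were rich.",
)
print(prompt)
```

A reader resolves 'they' to the banker via the synonym; the claim above is that once the clauses are split apart, the model often defaults to person 1 anyway.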


I appreciate these types of critiques as they seem to be a useful guide for researchers developing the next generation of systems. Personally, I expect most of this will be solved in a year or two, but we'll see. Maybe this will be the one unsolvable problem.

A quick task for everyone: Think about what would impress you three years from now. What would that system look like? Now just keep that in the back of your mind going forward. When ChatGPT 4 and 5 come out, compare them to that idea. Remember the goalposts.

If 3 years ago someone said we'd have tech that can write undergraduate-level prose, almost everyone would have said that would be holy-shit impressive. But now that it's arrived, it's not as good as graduate students. The next system won't be as good as your favourite author. The next won't be as good as Shakespeare.


>if you’re designing a submarine, you wouldn’t try to make it function exactly like a dolphin

>For one thing, for many years human-like artificial intelligence has been an important goal; simply declaring that the human-like requirement is unimportant seems like an acknowledgment of defeat to me.

The impression I get is that most AI researchers/developers are aiming to develop human-CALIBRE intelligence, not human-LIKE intelligence. That is, they are trying to develop AIs which are AS INTELLIGENT as (or more so than) humans, even if these AIs don't process information or interpret the world the same way humans do.

To extend your metaphor further - it's true that dolphins have certain advantages over submarines, but equally true that submarines have many advantages over dolphins: they can travel greater distances, stay at sea for months at a time without refuelling, engage in naval warfare, conduct scientific research, etc. Submarines are not and never were designed to do everything that a dolphin can do, so to point out that dolphins can do things submarines can't doesn't strike me as terribly relevant. Sneering that, unlike submarines, dolphins don't require human pilots won't do you a whole lot of good when your city has in fact been obliterated by an ICBM launched from a submarine. (What "your city being obliterated by an ICBM" refers to in the context of AI is left as an exercise to the reader.)


“There is no place where a theory of the world ‘resides’ for ChatGPT, the way our brains contain theories of the world.”

From my limited understanding, that’s not correct. It’s quite possible our brains work in a similar way. One theory says that sensory inputs come in and flow through the brain, producing a decision in a similar way to the neural networks that power ChatGPT, and that your consciousness is the brain trying to explain what it decided.

One example that sticks with me: have you ever tried to pick up something that ended up being way heavier or way lighter than you expected? Or tried to open a door that was much harder or much easier to open? You walked up to the door or the box, your brain filtered that sensory input through similar experiences and came up with a prediction, and that prediction was used to prepare your muscles for the task. That’s similar to the predictive nature of ChatGPT.

The question then is whether consciousness is something that manifests from that predictive process.
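For what it's worth, the bare mechanics of "predict from statistical regularities" fit in a few lines. A toy sketch in Python (the corpus is made up, and this is nothing like what a brain or ChatGPT actually does internally; it just shows the shape of the idea):

```python
from collections import Counter, defaultdict

# Toy "experience": a tiny corpus standing in for a lifetime of doors and boxes.
corpus = "the door was heavy the door was stuck the box was light".split()

# Learn statistical regularities: which word tends to follow which.
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict(word):
    """Predict the next word from past experience (most common continuation)."""
    return following[word].most_common(1)[0][0]

print(predict("door"))  # -> 'was': every past 'door' was followed by 'was'
print(predict("the"))   # -> 'door': seen twice, vs. 'box' seen once
```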

Jan 12, 2023 · Liked by Freddie deBoer

ChatGPT can pass the canonical Winograd schemas because it has seen the answers before. If you give it a novel one, it fails. Someone posted a new one on Mastodon: "The ball broke the table because it was made of steel/Styrofoam." In my test just now, it chose "ball" both times.
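If you want to reproduce the test, the minimal pair is easy to generate. A quick sketch, with the actual model call omitted:

```python
# Sketch of the minimal-pair test described above; querying the model is up to you.
template = "The ball broke the table because it was made of {material}. What does 'it' refer to?"

for material in ("steel", "Styrofoam"):
    print(template.format(material=material))
    # A reader who understands answers: steel -> the ball, Styrofoam -> the table.
    # The reported failure: the model answers 'the ball' both times.
```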


I thought the group wanted to protest for peace, but the corrupt committee wanted violence advocated.

Jan 12, 2023 · edited Jan 12, 2023

This is NOT me saying "who cares about this," which someone was just banned for. But when I read pieces like this I'm always like... stop pointing out its flaws! Let's just say it's perfect and encourage researchers never to improve on it.

I agree with you that the answers are generic, but I still find it incredibly unsettling that it can produce even that, and I do not want to help them make it any better.


What seems kind of amazing is how few things you end up needing theories for. Most humans' lay theories probably hold that you need them for far more than you do.


I think this is basically right. I don't have any quibbles with what you've written here.

I do have quibbles with people who instead say things that boil down to, "ChatGPT isn't a general AI, so there won't be general AIs." That seems more questionable to me. To be clear, I don't think that general AIs are coming very soon. But I also think that people are training themselves to be a little too skeptical about progress.

That being said, I guess it's fine for people to be skeptical about things online. The groups of people criticizing advances in AI and those building new AIs are essentially disjoint, so the types of criticism I'm referring to probably don't hold much weight with the folks actually building AIs, because they aren't useful to them.


Asking ChatGPT to imitate a particular writing style doesn't work well, not because large language models are incapable, but because ChatGPT is also pushed to write in a neutral style. I'm sure it's more complex than this, but imagine if every prompt also had "Write your response in a neutral, friendly and passive tone because you are a support AI" attached - then it's clearer why it's so dry. GPT-3 was much better at mimicking styles.
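One way to picture the hypothesis, sketched against the OpenAI Python client's chat format. The system prompt wording comes from the comment above; the model name and user prompt are made up for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The commenter's hypothesis: a hidden system message like this would push
# every reply toward the same flat register, whatever the user asks for.
hidden_system_prompt = (
    "Write your response in a neutral, friendly and passive tone "
    "because you are a support AI."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model name is illustrative
    messages=[
        {"role": "system", "content": hidden_system_prompt},
        {"role": "user", "content": "Describe a thunderstorm in the style of Cormac McCarthy."},
    ],
)
print(response.choices[0].message.content)
```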


Look around on Twitter or the web: there are dozens of examples of people "breaking" ChatGPT. From an economic standpoint I don't think it's anywhere near being valuable, because it is tremendously expensive to run but does not offer significant value above currently existing chatbots (on your favorite customer service site, for example).

One more thing that I'd like to point out: previous chatbots were "corruptible" because they were designed to be adaptive. ChatGPT can't be trained to spew Nazi propaganda because it is locked down; its responses to sensitive questions are completely canned. I think it's hobbled right out of the gate.


Regarding the dolphin-and-submarine argument, I would have to agree with Freddie. The best argument for why we want AI to mimic human intelligence is that we want AI to work with and for humans. At that point you're back to the old conundrum: since nobody really understands what human intelligence is, how do you reproduce it?


I can't fit everything I want to say about this into a post, but I had the good luck yesterday to read Aurelian's book review, on this Substack, followed later in the day by Sam Kriss's essay on the zairja, a medieval Arabic random text generator used as a divinatory tool. Aurelian talked about the illusory nature of the self, and Sam described the zairja as a computer so large it included the entire universe as one of its components (because it uses astrological data).

It got me thinking - there is a lot of discussion about whether or not AI like ChatGPT is fundamentally different than human consciousness. Are we really thinking independent thoughts, or are we just doing a more advanced version of what ChatGPT does - guessing at what comes next based on our experiences? And I think at a fundamental level, we are just guessing based on our training data, too - but we've been trained on the *universe*, whatever that means, on the "real world" or at least that very good illusion of it that consists of causality and sense data, and ChatGPT is just trained on the internet.

It's one more meta-level of abstraction away from reality than we are (even granted that we are one or more levels away from whatever reality really is ourselves). At some level, AI is not going to develop a "theory of the world" until it experiences the world itself, rather than just humanity's musings on it. I don't think this is impossible, but it requires interfacing it with "eyes" and "hands" and "ears", letting it play like a toddler, throw stones in a lake, burn itself on a stove. You can only get so smart reading the Internet.
