AI Has No Noticer

consciousness, it turns out, is a valuable property in an intelligence

Freddie deBoer
Apr 24, 2025

I asked Google’s Gemini AI for a list of the richest municipalities in Connecticut; this is what it returned. I’ll forgive you for not understanding the problems that are apparent to a local. First, Old Greenwich and Riverside are not distinct municipalities at all but rather villages within Greenwich. I suppose you can argue that they should be ranked separately, as they have their own ZIP codes, but I explicitly asked for municipalities, and villages/neighborhoods don’t qualify. And, anyway, the bottom of the list rather borks the whole thing regardless. Bridgeport and Hartford are two of the most notoriously economically depressed cities in the entire country, and if New Haven isn’t quite there, it’s a lot closer to those poor cities than to Darien. There is no way to make an ordered list of Connecticut’s towns by wealth or income and find Hartford or Bridgeport anywhere near the top. And, indeed, Hartford is the very poorest municipality in the state, with an actual median annual income more like $29,000; Gemini’s figure here is only off by about 550%. Whoops. In reality there are dozens of municipalities richer than Hartford or Bridgeport in Connecticut, a state defined by massive wealth inequality.

Let’s ask another Connecticut-related question.

I suppose if I wanted to be particularly generous I could say that 365 is indeed over 170, but of course this is not how we use that kind of language. If there are 365 islands just in the Thimbles, then “over 170” is a flatly distorted answer to the question, and obviously so - so obvious that almost no human mind that was paying attention would miss it. And, anyway, everything about the construction here implies that the second number is a subset of the first. This is precisely the kind of natural language task that an AI has to be able to handle effectively to be useful, and this kind of failure has been common to Gemini and to all the rest. It’s also exactly the kind of problem you’d expect from a stochastic parrot, an extremely intricate (and admittedly impressive) processing system that produces statistically likely responses to prompts without any deeper understanding of the answers it provides.

We have, then, three different types of error here. We have a categorization error, where Gemini mistakes villages/neighborhoods for the explicitly defined category of interest, municipalities; we have errors of fact, where the median incomes of some of the poorest cities in the state are inflated by almost $100,000, which in turn leads to the ordinal list being wrong; and we have an odd idiomatic-quantitative error, where the common phrasing “over X” (similar to “under X,” “more than X,” “less than X,” etc.) leads to a total estimate, 170, that is revealed to be meaningless when a subset that follows proves to be larger than its supposed superset. These are all potentially interesting in and of themselves, I’m sure. But to me, they point again to an overarching weakness of contemporary LLM-style “artificial intelligence”: they have no noticer. They lack the kind of error correction that would have revealed the problems here to any human observer. And indeed, they could never achieve that kind of error correction, because the flawed processes that made the errors will simply be repeated, absent a kind of higher thinking - a kind of consciousness - that actual thinking beings possess.
