150 Comments

There was a time when journalists popped bubbles, discomfited the comfortable, and spoke truth to power. Those days appear to be gone; now journalism just supports whatever is popular.


My guess for why:

Most journalists are either lazy, overworked, or lazy and overworked, and are happy to believe the well-crafted narratives delivered by the PR professionals who pitch them stories. When I did PR, the goal was always to get some portion of our press release (or preferably, the entire narrative frame we provided) reprinted word for word in an article. And sometimes it happened!

Point being, no one is pitching a well-crafted narrative about how AI is just sorta interesting; it's revolutionary or bust. Even if some nonprofit with a good PR team starts seeding the narrative that AI isn't really going anywhere, that's not an exciting headline that will get clicks. There's an element of "poptimism" ("technoptimism"?) in all tech journalism, and all the incentives are skewed to produce gee-whiz pieces about 3D printing, AI, whatever.


As a researcher in AI, I share your frustration. The expert voices I see in the media are always the extreme ones. If a journalist were to attend one of the leading conferences in machine learning or natural language processing and talk to the people who know the most -- not the ones who spend all their time seeking publicity and seeking out, ahem, journalists -- they would hear plenty of good thoughts about what the technology really is right now, why it's exciting, and what's worth worrying about. Most of us on the ground find the boosters and the doomers a bit tiresome and their positions ungrounded in reality.

Note: I share a name with another journalist. I'm not him.

Comment deleted

Thanks for that. Chiang gives a strong summary of his argument in his second New Yorker essay that's eminently worth quoting here:

The tendency to think of A.I. as a magical problem solver is indicative of a desire to avoid the hard work that building a better world requires. That hard work will involve things like addressing wealth inequality and taming capitalism. For technologists, the hardest work of all—the task that they most want to avoid—will be questioning the assumption that more technology is always better, and the belief that they can continue with business as usual and everything will simply work itself out. No one enjoys thinking about their complicity in the injustices of the world, but it is imperative that the people who are building world-shaking technologies engage in this kind of critical self-examination. It’s their willingness to look unflinchingly at their own role in the system that will determine whether A.I. leads to a better world or a worse one.


He's a fine writer and I thought his piece on ChatGPT was one of the more clear-eyed takes. But he is not an AI expert, and my point was about the expert voices I see quoted in the media.

Comment deleted

Comment deleted

I definitely do not take him seriously.


Noam Chomsky et al. were prominently featured on the NYT front page (online, last year?).

Also in a philosophical-skeptical vein there is Hubert Dreyfus, on YouTube for easy access, the guy who wrote the well-known book ("What Computers Can't Do") in the '70s.


Chomsky's not an AI expert. I don't know anyone in the AI research community who found his remarks insightful, relevant, or even interesting. Only those of us who work in the language parts of AI might see some distant link between his work and our own. And to put it gently, commentary from philosophers half a century ago -- when AI was in its infancy -- is unlikely to connect meaningfully with the technology or its potential social impact in their modern forms. All expertise is local.


Let readers have a look:

Noam Chomsky: The False Promise of ChatGPT

March 8, 2023

By Noam Chomsky, Ian Roberts and Jeffrey Watumull

Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge. . . .

https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html


I found re-reading Dreyfus' book this past year a healthy antidote. Not all aspects aged well, sure.


"The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations."

This seems to me like a pretty strong point.


This is classic Chomsky. Suffice it to say that there are counterarguments. The amount of data that goes into a human brain is, actually, staggering. He's right that human cognition is surprisingly efficient, but he's always argued about this mainly from a lack of imagination. ("It must be this way, because I can't think of anything else!") Older language models before ChatGPT were *very* efficient, and some of us suspect that the same qualities ChatGPT shows can be obtained much more efficiently than the transformer architecture underlying it manages. Whether the human brain is "simply" a very powerful pattern-matching machine is, I think, hard to answer without (1) more powerful neuroscience techniques and (2) a clear definition of what "pattern matching" is and is not (Chomsky can't give us either).


To give him a little credit, it's not *entirely* a lack of imagination. He has a proof, devised at the very beginning of his career, which tells him that learning language statistically must be impossible, and therefore that we must have inborn priors about how language is structured.

Unfortunately, that proof only holds for *exact* learning, and we now (well, since the '90s) know that human language learning isn't exact. If you're willing to accept an approximation, which evolution is, you **can** learn language statistically without native priors, and indeed can get an arbitrarily close approximation to the syntax and grammar of the language of those around you.
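
To make "approximate statistical learning" concrete, here's a toy sketch (deliberately trivial, and not a claim about how brains actually do it): a bigram model estimated from raw text. Its conditional probabilities converge toward the source's true word-pair statistics as the data grows, with no grammatical priors built in.

```python
# Toy illustration of approximate statistical learning: estimate a
# bigram language model from raw text. More data -> estimates closer
# to the source's true word-pair statistics, with no grammar built in.
from collections import Counter, defaultdict

def train_bigram(words):
    """Count word pairs and normalize the counts into probabilities."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return {
        prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
        for prev, nxts in counts.items()
    }

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)
print(model["the"])  # {'cat': 0.666..., 'mat': 0.333...}
```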

A number of attempts have been made to explain this to him, but he's too set in his ways. Hopefully just because of age - it would be a damn shame if he was always this stubborn and intellectually ignorant, so I choose to blame it on mental deterioration.


He's straightforwardly wrong, even in his specific domain of expertise of language learning. Since the late '80s there's been abundant evidence that human language learning operates extremely similarly to GPT and friends; we take in enormous quantities of data and sift it into an approximate but good-enough internal language model. Chomsky and the Chomskyans have been fighting a rearguard action against the truth ever since.


I'm late to the party, but genuine question: if all expertise is local, should we be listening to AI researchers -- by whom I mean those who are building the tech -- about such things as the social impact of the tech, the ethical or economic impacts of the tech, etc.? Shouldn't such questions also be left to the respective experts? (I ask this in part because the vast majority of remarks by AI researchers I have read on these topics were neither insightful, relevant, nor interesting.)


I do think some of this is the difficulty of writing journalism in probabilistic terms. Like, I agree that right now, AI can't do anything world-changing that a human can't do. But let's say for the sake of argument, given how much it's advanced in just the past 5 years or so, that the odds that in another 5 years it's gone way beyond that and CAN do things no human could ever do are, like, 30%. That's a really big deal! Even if it were only 10%, that would be a really huge deal... but it would also mean that the most likely outcome is that it fizzles out and that doesn't happen. But it feels like "this is probably nothing, but there's a significant chance it could be everything" just isn't a lede; you're supposed to pick one side or the other and go all in.
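
To spell out that arithmetic with entirely made-up numbers (just for illustration): if the transformative outcome is worth orders of magnitude more than the fizzle, even a 10% chance of it dominates the expected value.

```python
# Expected-value arithmetic with made-up numbers. Even at 10% odds,
# a payoff 1000x the baseline dominates the expectation.
p_transformative = 0.10        # hypothetical odds AI "goes way beyond"
value_transformative = 1000.0  # arbitrary units
value_fizzle = 1.0
expected = (p_transformative * value_transformative
            + (1 - p_transformative) * value_fizzle)
print(expected)  # 100.9 -- driven almost entirely by the tail scenario
```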


The problem with probabilistic assessments like this is that the probabilities come from nowhere. They are unverifiable. It's not hard to see how AI *could* be revolutionary. So could cold fusion. Though at least AI provides benefits before reaching its "magical society-shifting" stage, whereas cold fusion is useless until it fully works.


Unlike cold fusion, you can download an app and try AI for yourself right now. For $20 a month you can access the cutting edge of AI, including image generation and voice generation.


Indeed, and I have. I use it at work regularly. We are building bits of it into our products as well. But I at least partially agree with Freddie that AI is more evolutionary than revolutionary -- so far. Those who want to spend billions on AI alignment? Seems like a good idea; aligning AI better to our goals seems like something we probably want to be good at. Those who are using the prospect of AI disaster to pump more money and attention into it? Seems pretty thin. Will AI take over all our white-collar jobs? Again, seems pretty thin. Will AI catapult us into the next stage of humanity? Again, seems thin.

I think AI will produce massive piles of shit that only other AIs will be able to deal with. I think it will make a lot of tasks easier, and a lot of tasks more abstract and harder to keep up with. It will create problems that are easy to solve when humans create them, or when "traditional" programs make them, but fucking hard to solve when AIs make them. We will become yet one step further abstracted from reality, another step in a series of steps that arguably began with the printing press.


I get what you're saying but I think you're missing the scale involved.

Remember when WorldCom went bankrupt laying endless amounts of fiber everywhere, and then existing telecoms bought that dark fiber at fire-sale prices, and then tech companies like YouTube used the resulting massive increase in available cheap bandwidth to make streaming video not just possible but profitable? It's like any infrastructure: once it hits a critical mass of making some activity super cheap and easy, it unlocks a whole bunch of follow-on innovation, and suddenly new billionaires start springing up. Rail and highways both had similar effects. The big difference here is that you don't need to ship steel everywhere and do a bunch of skilled labor; you just buy compute and put some apps on the app store.


When Freddie has basically argued that the Internet is an evolutionary, not revolutionary technology (and in some senses I agree, and in some I don't), you can see how that would suggest AI might be a similarly less revolutionary technology.

I don't yet know where I stand. I can see it *could* be revolutionary. I also see that it's not there yet. I agree with you that we won't know until we really see it at scale with full infrastructure behind it (it doesn't yet have that -- we're still doing a ton of wobbling and thrashing and too many in the broader tech world beyond AI labs -- people like me -- just don't understand it well, but that will change).


The New Yorker has published some very good AI-skeptical pieces by Jaron Lanier recently. He is not a journalist, of course, but I found what he wrote to be extremely refreshing in this environment (which proves your point). For a point of comparison, you really don’t hear much about blockchain in the media anymore outside of stories about SBF, when a year ago you’d think its applications would soon take over the world.


They also published something from a "developer" that made me slap my forehead. A very public way to let the world know that you're unqualified.


Blockchain was always obviously a toy and a funny casino. It never had much in the way of reliable application other than easily creating pyramid schemes (and maybe handling property deeds for real estate or other physical things).

AI is only similar in that it is a technology and it is being hyped. Don't let such a shallow analogy run away with you. AI is literally having a real impact and making permanent changes to multiple fields right now.


I agree - they are not very similar outside of their similar hype.


I think some of the problem is lack of domain expertise. You can be smart and skeptical, but if you don't understand the details of how something works, it can be very hard to ask the right questions about it, to understand what's not being said, to call out bull for what it is on the spot. Asking questions when you don't understand many of the fundamentals often makes one look foolish, and that's death to people in the media. So... easier to go with the hype you're being handed and hope for the best.


Fortunately for journalists, there are people who teach "the details of how [AI] works" for a living. We're not hard to find.


I think for a lot of people the humanness of AI is what is interesting. We're perhaps already forgetting that one of the canonical gold standards of artificial intelligence, back when it was just a hypothesis, was the Turing test. If ChatGPT fails the Turing test today, it's only because users have become familiar with its distinctive style: bring a layman from 2021 forward in time and ChatGPT would probably fool him. That alone I think is an achievement worth celebrating.

As for what superhuman capabilities AI might currently have, the only thing I can think of is those images that look like realistic photographs but are deliberately composed to look like famous memes: https://boingboing.net/2023/09/28/ai-images-with-hidden-messages.html. This isn't exactly world-changing, but it is something that can't be done by human hands. I'll admit that I'm having a little trouble finding applications for AI in my day-to-day life. I'm no artist, so AI is very useful for creating images and much cheaper than a human professional, and I find ChatGPT pretty good, but not perfect, at translations, so if I ever need something translated AI is a speedy solution. For me, ChatGPT is below replacement level when it comes to composing text, but for people who struggle with that I can see them taking advantage of AI for writing. Maybe we can think of currently existing AI as mostly a supplement for whatever defects the individual might have; it may not be superhuman, but it's probably better than every human in some dimension.

I agree with you that all the worries about AI safety are very premature. I guess we've spent so much time speculating about how AI might go wrong that it's easier to imagine The Terminator or "I Have No Mouth, and I Must Scream" than to look at actually existing artificial intelligence. Has there been any other major technology where the calls for regulation have so far outpaced the actual capabilities? Maybe the printing press would be comparable in that regard.


I think someone really good at Photoshop could, given some time, make an image that looks like a realistic photo and a famous meme at the same time. Of course, AI can do it way faster.


What it can do that a human can't is create content at a rate even the most prolific writer never could, and tailor that content to specific natural-language requests from a user. It can do this at scale, too. Chainsaws and heavy equipment just do things a person can already do, so they don't meet your standard of impressive; but the gap between the amount of work you can produce using AI per unit of energy spent and what you can get from a human is so huge that it changes things.

You also gloss over the fact that it is writing mediocre 8th-grade papers, something humans can do already, but this also means that papers are no longer a reliable way to evaluate a student's knowledge, and this extends well beyond 8th grade. While those who travel in elite school circles might not realize it, your average state university does not have the highest standards for undergrad writing (as someone who used to help people with their papers, I can tell you it's astounding how bad they can be at a college level).

Back in like 1991 or so I was in a community college computer lab and my buddy called me over to show me this "world wide web" thing, with this new software called a "web browser." It was slow to load, with small pictures taking minutes to come up, and I told him "nah, this is bullshit, it's a useless fad that will never replace the power of the command prompt." I'm glad you're going to get to enjoy a similar humbling experience with AI.


You could hire a Filipino housewife to write your papers years ago. That didn't change the world.


You missed his point that AI can create a far greater volume of content than that Filipino housewife. Also, that Filipino housewife is going to want to be paid for her time and effort (since she ain't doing sketchy ghostwriting for fun), and using someone else's AI is often either free or much cheaper.


Is AI going to read that stuff? Monetization depends on human eyeballs and humans can only read so much. Who cares if there's an infinite supply of content if consumption is finite?


First, you've shifted from talking about papers that a teacher would read to content that would be monetized with ads.

Second, AI lowers the barrier to entry for cheating. Your average 8th grader might balk at paying someone to crank out a paper for them, since it requires both serious money and cooperation from a human being who probably isn't that trustworthy, given that they're helping someone else cheat. They're less prone to balk if a non-human can help them cheat for free or low cost.


There's an endless supply of eighth graders who can churn out mediocre book reports. What's your point? Is society really going to change in the slightest if public schools need to tinker with how they issue grades? That's so trivial that I presumed the real topic under discussion was how AI would wreck the trade of authors, presumably by churning out an infinite supply of mediocre copy.


If cost is zero or near zero, you don't need consistent readers; you only need the occasional hit. Content farms were already a problem. Now you need almost no human involvement to churn out videos.


How does the consumer land on video 3,982,032 out of 1,000,000,000? Random chance?


Yes. Jesse Singal's recent piece on plagiarism has a good illustration of this. Basically, the content farm has one video that hits the search-algorithm jackpot and gets several million views, whereas their other videos have like a hundred.


If you were trying to make a joke it didn't work.


I am pointing out that society already tried this experiment. Do you remember the outcome? Nope, because the results were inconsequential.


You make an excellent point, but it's a possibility, not an established fact. The tone of certainty may impress most readers, but I think it's distracting and undermines your point.


I know a handful of people who teach, some at college level and some various levels of grade school. They are literally dealing with this issue right now, trying to figure out how to adapt to having papers as a graded product pulled out from under them.

We've also seen AI creative work make a huge impact on the world. The strike that just happened where Hollywood was shut down for months was a direct result of the capabilities of AI.

I used to scoff a bit at Kurzweil but I think he was more right than even he might have predicted.


The strike was a direct result of the hypothetical future capabilities of AI, not what they can do right now.

The technology is certainly remarkable, and it feels like it should grow into something genuinely powerful. But AI has had a lot of false starts. "Real thinking" always seems to be just one step away, and historically we've never been anywhere near as close to it as we thought we were. So it's best not to be too confident.


The strike was mostly unrelated to AI, but no, to the extent it was about AI it was about current capabilities. People were being handed treatments and scripts written by AI and told to edit them, and being paid less because it was "only editing".


Creating lots of content isn't really a monumental leap, though. AI's ability to generate more garbage for garbage's sake doesn't qualify as a meaningful advancement. A great example is Amazon's self-publishing market. Plenty of "writers" generating words doesn't equate to quality one bit. Are you going to sift through the infinite iterations of tentacle porn AI has algorithmically "decided" is what humanity wants to determine if it's quality reading, or will that task be handed off to the AI to complete as well?


Your assertions and assumptions are too far outside reality for me to make any response other than this. Not sure why you're trying to manipulate perception instead of discussing the topic on its facts and merits but whatever your goal is, you're not achieving it.


Every 10x quantitative change is a qualitative change. Even if you're right, you're wrong.


As they say, every 10x quantitative change is a qualitative change. And we've gotten at *least* 100x quantity change all at once.


I think "Blockchain" was getting the same hype a few years ago.


Yep. And we should be looking very carefully at who is trying to make money from it, and who is talking about it. Many of those trying to make the money know exactly how the media reacts to dramatic pronouncements (positive or negative).

I've worked in technology for more than 20 years, and I've never seen anything that was more a solution in search of a problem than blockchain. At least AI has some salutary capabilities that will produce significant improvements over what we can do now. If I hear another proposal to use blockchain to track things that we already have perfectly effective, better-understood, far more developed means to track, I may puke. I wouldn't call it entirely useless, but the distance between its hype and its reality is absurd.


This should be the top comment.


Yes! We need to keep a list. AI, Blockchain so far. I’ll add “ESG”.


ESG isn't really a technology, just a way for people who make plenty of money to feel like they are "good actors." It's rationalization in the Weberian sense: they want to turn "being responsible" into a system that they can optimize (and distorting the very concept of responsibility in the process of course). I agree it's a thing whose value is far surpassed by its hype.


Blockchain is a very specific technology that had no significant use case besides Bitcoin. Even the AI behind ChatGPT is way more general: you can use it as a translator, simple problem solver, code completer, essay writer, etc. I’m not saying the hype isn’t over the top, but it’s way more justified than blockchain’s.
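
One reason that generality claim lands: the same API call covers translation, problem solving, code completion, and essay writing, with only the prompt changing. A minimal sketch using OpenAI's Python SDK (the prompt and model choice here are just illustrative):

```python
# Minimal sketch: one chat-completions call handles translation,
# code completion, essays, etc. -- only the prompt changes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat model works
    messages=[
        {"role": "system", "content": "Translate the user's text into English."},
        {"role": "user", "content": "La plume de ma tante est sur la table."},
    ],
)
print(response.choices[0].message.content)
```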


I think the mainstream excitement around AI is much longer-lived than that of blockchain or cryptocurrency.

https://trends.google.com/trends/explore?q=blockchain,%2Fm%2F0vpj4_b,%2Fm%2F0mkz,%2Fg%2F11khcfz0y2&geo=US&date=today%205-y#TIMESERIES


The media is absolutely lacking in integrity.

Example: It is not possible to have a permanent position with shitrags such as NYT, WaPo, or The Guardian if you have any integrity at all. The NYT still hasn't returned the prize they got for their utterly pathetic Russiagate coverage. No integrity whatsoever.


It might not be AI, but there is something concerning that could end up being that big change you don't think you are seeing, i.e., "Big Normal." What might be happening in the brains of those toddlers who spend a large portion of their time using and watching online images and entertainment, rather than interacting with their parents? I see this whenever I am out in a restaurant, in the park, on the bus, etc. What once was necessary for human children to develop fully into caring, compassionate, curious, intelligent, capable, and articulate adults seems to be almost entirely missing from their lives. I think this is no kinda Normal, big or otherwise. Because it's happening to children, we can't see the results that are coming, but whatever it is is happening right under our, and their, noses, right now.


The problem is kids are interacting far too much with their parents. The amount of time has exploded over the past half century. And a great deal of the issues kids are dealing with are the result of too much parenting rather than too little.

People were healthier when mom sent the kids out to play in the morning and didn’t interact again until lunch and dinner.


My brothers and I were like that years before smartphones. The only difference was that instead of having our noses in our phones, we had them in books made of paper. It didn't seem to harm us any.


There is a qualitative difference between how your brain develops on books and how it develops with computers and the internet. It's a bit out of date (2011 or so), so things are probably worse now, but a good introduction to this issue is "The Shallows: What the Internet Is Doing to Our Brains" by Nicholas Carr.


Big data. A good way to earn a living for those of us who work in it and it has changed the world. But those changes have been invisible to lay people.


At this point I can't imagine life without Tableau.


Science/technical journalism has been absolutely abysmal for years.


I tried reading an article from Scientific American the other day and it was so bad I had to double-check that I wasn't reading a repost from Jezebel.


It is the 90/10 rule. AI will reach 90% of its human-replacing utility with 10% of the effort that the remaining 10% would require. And AI-generated stuff will account for 90% of all sales, but the last 10%, the authentic human-generated stuff, will become much more valuable and coveted by the top 10% in income and wealth... and make the bottom 90% resentful that they have to live with the fake crap. Ironically, the top 10% will own 90% of the AI infrastructure.

The only material question is will AI become resentful too?


What would AI have to do for you to admit you’re wrong and the hype was warranted? What’s the burden of proof required?


Don't worry. In 20 years both sets of people will look back and call themselves right based on the evidence.


I think it is fair to dismiss any claim that "AI will end humanity" that doesn't include some plausible theory, mechanism, or scenario by which this will occur.

For example, if one claims: "AI will eliminate labor, and disrupt social order because of fighting over distribution of wealth", then we can have a logical discussion of whether that is plausible, likely to happen soon (or ever), etc. And if true, we can talk about mitigation.

Similarly with any other argument, such as: "AI will take over". OK, you need to have some plausible scenario by which this might occur, etc.

Seems to me that the discussion has quickly gone from some outrageous, unsubstantiated claims that "AI will kill us all," completely bypassed the stage where we are supposed to evaluate those claims, and arrived at the "solution" phase, where powerful people are already talking about the need to "pause" AI development.

Outrageous claims demand outrageous evidence - and IMO it is the "AI is going to kill us all" crowd who are making the outrageous claims - and yet a mob seems to be already jumping on that bandwagon.

We have seen a lot of that kind of mob behavior over the past 5 years or so - based on lies. And that type of mob behavior has proven much more destructive than the ill effects of any "cause" advocated by those mobs jumping on the bandwagon.
