Note: I wrote this before news broke about Sam Altman’s ouster from OpenAI. The industry to which I kind of, sort of belong is an odd beast. Everybody’s looking for a story, but the incentives and professional conditions often keep them from seeing one that’s staring them in the face. What gets covered, and what doesn’t, has always confused me in this way. There appears to be so much low-hanging fruit that no one bothers to pick.
Where Are the AI Skepticism Stories?
As a researcher in AI, I share your frustration. The expert voices I see in the media are always the extreme ones. If a journalist were to attend one of the leading conferences in machine learning or natural language processing and talk to the people who know the most -- not the ones who spend all their time seeking publicity and seeking out, ahem, journalists -- one would hear plenty of good thoughts about what the technology really is, right now, and why it's exciting, and what's worth worrying about. Most of us on the ground find the boosters and the doomers a bit tiresome and their positions ungrounded in reality.
Note: I share a name with another journalist. I'm not him.
What it can do that a human can't is create content at a rate even the most prolific writer never could, and tailor that content to specific natural language requests from a user. It can do this at scale, too. Chainsaws and heavy equipment also just do things a person can already do, and so wouldn't meet your standard of impressive, yet the gap in work produced per unit of energy spent between AI and a human is so huge that it changes things.
You also gloss over the fact that while it is writing mediocre 8th grade papers, something humans can already do, this also means that papers are no longer a reliable way to evaluate a student's knowledge, and this extends well beyond 8th grade. Those who travel in elite school circles might not realize it, but your average state university does not have the highest standards for undergrad writing (as someone who used to help people with their papers, I can tell you it's astounding how bad they can be at the college level).
Back in like 1991 or so I was in a community college computer lab and my buddy called me over to show me this "world wide web" thing, with this new software called a "web browser." It was slow to load, with small pictures taking minutes to come up, and I told him "nah, this is bullshit, it's a useless fad that will never replace the power of the command prompt." I'm glad you're going to get to enjoy a similar humbling experience with AI.
I think "Blockchain" was getting the same hype a few years ago.
There was a time when journalists popped bubbles, discomfited the comfortable and spoke truth to power. Those days appear to be gone; journalism just supports whatever is popular.
FWIW, the faculty in my political theory department, who are not AI boosters by any stretch of the imagination, have basically given up on essays as a way of grading undergraduate political theory students, because GPT-4 can now genuinely, reliably write B- papers.
This doesn't cross the threshold of "doing something a human can't do" (unless you take into account speed), but it's significantly above 8th grade level, right now, today.
Sounds like you need to read The Coming Wave.
“The text ChatGPT produces is not special. The images Dall-E produces are not special. They’re only considered special because a machine made them, which is of obviously limited social consequence.”
Even if this were in fact the case, the efficiency of production you could get out of the GPTs would itself amount to a genius-level ability.
This is the opposite of Crypto boosting. There are so many obvious implications of Artificial Intelligence that it’s hard to see how it could miss, unless it just stops progressing for some strange reason.
“And Sam Altman is the same as Oppenheimer because… ChatGPT gives 8th graders the ability to generate dreadfully uninspired and error-filled text instead of producing it themselves? What? What? What?”
Read about the solving of protein folding by an AI. Read about DNA synthesizers that are cheap and getting cheaper. Read about the Aum Shinrikyo cult and realize they employed scientists and virologists. Imagine if they could have come up with something worse than anthrax or sarin. Or hell, they could just synthesize smallpox.
Or how about ransomware that takes down multiple connected systems and adapts so it can't be patched out by simple updates from the OS maker?
"Not what AI will do or should do or is projected to do, not an extrapolation or prediction, but a demonstration of something impressive that AI can do today. For it to be impressive, it has to do something that human beings can’t do themselves."
Why is this the right question to ask? When Einstein worked out that E = mc^2, it was fundamentally a useless discovery that nobody could do anything with, and now we have enough bombs to destroy the world. When Faraday discovered electromagnetic induction, it was a party trick - for decades there were no electric motors that could outdo a man with a crank - and yet literally all of modern society now revolves around it. Even oxen can't do anything ten men couldn't, and yet domesticating beasts of burden revolutionized agriculture everywhere. Being able to do things faster and cheaper and in great quantity is often impressive in and of itself.
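To put a rough number on that first example (a back-of-envelope illustration using the textbook value of c and the standard TNT-equivalent conversion, not figures from the original comment), mass-energy equivalence means even one kilogram of matter corresponds to a staggering amount of energy:

    \[
    E = mc^{2} = (1\,\mathrm{kg})\times(3\times10^{8}\,\mathrm{m/s})^{2} = 9\times10^{16}\,\mathrm{J} \approx 21\ \text{megatons of TNT}
    \]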
And this all assumes that AI can rise to replicate human-level intelligence in various fields and then stop there and go no farther, which is quite an assumption in and of itself.
I work as a programmer, live in Silicon Valley, and I'm friends with a lot of programmers. Almost everyone I know is an AI skeptic, and the ones who are most annoyed about AI hype tend to be the ones who understand machine learning at the deepest level. I was out and about the other day and I overheard three separate conversations that brought up ChatGPT, and all seemed more or less annoyed or at least realistic about the technology. One was complaining to a friend that he tried to tell his coworker that ChatGPT wouldn't be suitable for a particular application, and that his coworker actually became upset and personally offended at the suggestion.
I have a feeling that a small number of very credulous, very loud people with major platforms (and a lot of money invested) dominate the conversation. AI cultists, despite their outsized power, seem to constitute a numerical minority in the tech industry. It gives me a little hope, but then I hear inside stories about various executives effectively forcing AI on their employees, and I lose that hope.
What would AI have to do for you to admit you’re wrong and the hype was warranted? What’s the burden of proof required?
One thing AI can do that humans can't is stem separation in music production. That is, the AI can remove the vocals, say, from a track the vocals have been baked into [a single wav file, say]. Yes, humans with good ears can do some EQ magic on a master and minimize vox, or remove most of a kick by cutting low frequencies. But the latter then kills the bass guitar. The AI has been trained on scads of examples of multitrack recordings with and without their various parts. From this, it can infer what any master track will sound like with one of its components removed.
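For what that looks like in practice, here's a minimal sketch using Deezer's open-source Spleeter library and its pretrained two-stem (vocals/accompaniment) model; the file paths are placeholders, and this is just one example of the kind of tool being described, not necessarily the specific system the commenter has in mind.

    from spleeter.separator import Separator

    # Load the pretrained 2-stem model (vocals + accompaniment).
    separator = Separator('spleeter:2stems')

    # Split a mixed-down master into stems. This writes vocals.wav and
    # accompaniment.wav into a subfolder of the output directory.
    # 'mixed_master.wav' and 'separated/' are placeholder paths.
    separator.separate_to_file('mixed_master.wav', 'separated/')

Roughly speaking, the model predicts a mask over the mixture's spectrogram for each stem, which is why it can pull a vocal out without gutting everything else that shares its frequency range the way a blunt EQ cut would.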
Oppenheimer didn't invent nukes; he's a good symbol of the nuclear age, its ambitions & perils, and an Altman or [pick some other public-facing AI leader] is a similar metonym for the larger shift.
I get a lot of the hype, though many cheques remain to be cashed, because we've seen computers achieve human-level ability in some domain [e.g. chess playing] and then shoot way past us. We're seeing signs of them equaling us now in more human-y tasks, like writing GREs and bar exams. It's not unreasonable to think it a good bet computers will surpass us in these domains within our lifetime, and the results of that would indeed change Everything. Fire, Wheel, and Nukes are all outputs of Intelligence. We're in an Anthropocene, a possible apocalypse already, because of human-level intelligence. Anything that exceeds us is big news indeed.
The media is absolutely lacking in integrity.
Example: It is not possible to have a permanent position with shitrags such as NYT, WaPo, or The Guardian if you have any integrity at all. The NYT still hasn't returned the prize they got for their utterly pathetic Russiagate coverage. No integrity whatsoever.
I think some of the problem is lack of domain expertise. You can be smart & skeptical, but if you don't understand the details of how something works, it can be very hard to ask the right questions about it, to understand what's not being said, to call out bull for what it is on the spot. Asking questions when you don't understand many of the fundamentals often makes one look foolish, and that's death to people in the media. So... easier to go along with the hype you're being handed.
My guess for why:
Most journalists are either lazy, overworked, or lazy and overworked, and are happy to believe the well-crafted narratives delivered by the PR professionals who pitch them stories. When I did PR, the goal was always to get some portion of our press release (or preferably, the entire narrative frame we provided) reprinted word for word in an article. And sometimes it happened!
Point being, no one is pitching a well-crafted narrative about how AI is just sorta interesting; it's revolutionary or bust. Even if some nonprofit with a good PR team started seeding the narrative that AI isn't really going anywhere, that's not an exciting headline that will get clicks. There's an element of "poptimism" ("technoptimism?") in all tech journalism, and all the incentives are skewed to produce gee-whiz pieces about 3D printing, AI, whatever.
Thanks for writing about this.
I’m an attorney and work a lot in the legal tech space. For the last six months I’ve been working with various teams on the potential for generative AI as an aid to legal work. It’s genuinely fun and we ARE coming up with applications that will change the way attorneys practice.
But the gap between what I read in the paper and what LLMs can actually do is absurd. “Lawyers” are always cited as being in danger of obsolescence because of generative AI.
When I tell my colleagues that there is absolutely no chance - none, really, none, zero - that they are going to be replaced by robotbrains, I can’t tell if they look skeptical or disappointed. Lawyers aren’t a happy lot.
You're being naive, Freddie. Just because the boy cried wolf in the past doesn't mean there's no wolf this time. You have to look at the facts on the ground each time.
There are 3 possible outcomes here:
1 - Nothing much happens (what you're assuming).
2 - AI ushers in a new world where human labor is not necessary for material production, and that leads to someplace between a material paradise and humans being in the position of horses (more or less useless and unwanted).
3 - AI kills us all dead dead dead, no more humans.
1 is quite possible - smart people who know the field like Robin Hanson expect this.
2 may be possible. If it happens it may be great or horrible; either way it's a huge step change in human history, fully deserving of any amount of hype.
3 may be both possible and likely. If so, we have a lot to worry about. Eliezer Yudkowsky is the most prominent voice worrying about this, but MOST real experts in modern AI think his fears have some degree of validity (there's a lot of disagreement about how much, but very few think it's nonsense).
I'm not going to repeat Eliezer's arguments except to point out that if there's even a 10% chance he's right, it's worth a lot of hype.
Would you ignore fears of global nuclear war solely because "that never happened before and it sounds like a lot of hype to me"? Esp. if you knew nothing about it?
It might not be AI, but there is something concerning that could end up being the big change you don't think you're seeing, i.e., "Big Normal." What might be happening in the brains of toddlers who spend a large portion of their time watching online images and entertainment rather than interacting with their parents? I see this whenever I am out in a restaurant, in the park, on the bus, etc. What once was necessary for human children to develop fully into caring, compassionate, curious, intelligent, capable, and articulate adults seems to be almost entirely missing from their lives. I think this is no kinda Normal, big or otherwise. Because it's happening to children, we can't see the results that are coming, but whatever it is is happening right under our noses, and theirs, right now.