136 Comments
JM's avatar

Back when they were accepting them I made a bet with the AI Futures team along similar (if simplified) lines:

1) US GDP Growth Limit: The 5-year rolling Compound Annual Growth Rate (CAGR) of US Real GDP will not exceed +20% for any 5-year period ending during the bet's term.

(approximately 1.5x the historical maximum 5-year rolling Real GDP CAGR of about +13.3%, for the period ending 1943).

I also offered a simplified version: a '1.5x maximum historical' bet:

(1.1) maximum annual real US GDP growth will not exceed 28% for any of the next five years (based on 18.9% for 1942).

These would all resolve in 2030, but they could be pulled forward.
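The 5-year rolling CAGR condition is easy to check mechanically: for each trailing window, compute (end/start)^(1/5) − 1 and compare it to the +20% threshold. A minimal sketch in Python; the GDP levels below are placeholders for illustration, not real BEA data:

```python
def rolling_cagr(values, years=5):
    """Compound annual growth rate over each trailing `years`-long window.

    `values` is a list of annual real GDP levels; returns one CAGR per
    window, as a fraction (0.20 == +20%).
    """
    out = []
    for i in range(years, len(values)):
        start, end = values[i - years], values[i]
        out.append((end / start) ** (1 / years) - 1)
    return out

# Placeholder real GDP levels (trillions), purely illustrative:
gdp = [20.0, 20.5, 21.1, 21.8, 22.4, 23.1, 23.9]
cagrs = rolling_cagr(gdp)
bet_lost = any(c > 0.20 for c in cagrs)  # True would resolve against "normal"
```

With these placeholder figures the rolling CAGRs come out around 3%, an order of magnitude below the bet's threshold.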

fredm421's avatar

I liked it, but those numbers are absurd.

Technologies can be as revolutionary as you'd like, they're nothing if regulations don't allow them to be used.

We have known how to make flying cars for a long time. In practice, they're just illegal. We know how to build housing. We still have a housing shortage. Etc.

Assuming that regulations will slow down AI's impact is reasonable, and that will therefore dilute the effects on GDP, etc.

RI's avatar

This article contains 45% recycled content.

None of the Above's avatar

And 100% recycled electrons.

RI's avatar

I do hope there's an impact statement for the earth's electromagnetic spectrum.

Eh, Not Worth The Trouble's avatar

Better recycled than made up, Foster Glocester. I prefer this over Scott Alexander throwing a fake hissy fit on his real name supposedly on the verge of being published.

(I wonder if Scott's also bipolar. That whole event read like a manic episode to me...)

TheOtherKC's avatar

If nothing else, I'll wait for this puppy to pop up on Manifold and ride it for the funsies. Even if Scott won't take it, there's liable to be someone who bites.

EDIT: And it's up! Place your bets, or just see Manifold's numbers. (There's a slight edge to Freddie at time of writing.) https://manifold.markets/TimothyJohnson5c16/will-the-us-economy-still-be-normal

EDIT 2: Watching for this market, getting in early, and betting big on normality has already put me in the running for a platinum promotion, and probably a gold even if that doesn't work out. Thanks, Freddie!

Karl Zimmerman's avatar

I've been thinking a lot lately about the hype around additive manufacturing a decade or so ago. People were talking as if we were one step away from Star Trek replicators, and that within only a few years we'd just go down to a fabrication store which would 3D print objects for us on the spot. Traditional manufacturing supply chains would begin to collapse, as hard-to-source parts were replaced by print-on-demand. A great creative force would be unleashed - the economy of makers - and we'd head into a new era.

In the end, maybe 10% of that happened? The retail attempts at 3D printing maker labs failed. The technology developed into an essential element of prototyping, but wasn't hugely scalable. I see some YouTubers who are in the maker space use it for certain custom items. But on the whole, it's still very context dependent, and hasn't disrupted a lot of established manufacturing, nor allowed for a wide range of custom-fab startups to come to the fore.

BronxZooCobra's avatar

That seems very different. To the lay person, a CNC machine turning a block of metal into a part and a 3D printer making the same part are six of one, half a dozen of the other.

A computer that can drive a car or crack jokes or a million other things people thought a computer could never do is something quite different.

Karl Zimmerman's avatar

I'm not sure it's that different, because a lot of the things that people thought 3D printing could do, it in fact does do - it's just that it's not economically viable to do so, which means it hasn't disrupted traditional industry.

In contrast, LLM use is so high right now because it's being massively subsidized by ChatGPT and others offering services at a huge loss. We don't know the real costs of ChatGPT, but we do know that even at the $200 per month tier, the company is running at a net loss. Most people will not use AI for personal use if it's offered at a break-even cost.

Of course, businesses could absorb, say, a $1,000-per-month cost per end user. However, we also know that some of the more advanced models can cost $1,000 for a single query. So as utilization rises, cost rises, and there's no reason to think there's a viable price point for businesses either (particularly because human beings still need to check the end results to see if it hallucinated).

Many technologies do eventually decrease in cost. However, much of the cost related to AI is fixed cost tied to the actual operation of the data centers. Energy isn't going to get cheaper with further scaling up - rather the inverse. This seems to put a pretty hard limit on utilization unless there's some unrelated breakthrough in cheap power generation, or they find a way to make LLMs that are far less processor-intensive.

BronxZooCobra's avatar

Or the processors use much less power - like the one in your phone vs. the one in your turn-of-the-century desktop, with 100,000 times the performance.

To get the performance of your phone in 1986 would have cost $60 billion and used 15 Three Gorges Dams' worth of power. The process has very much not stopped.

Karl Zimmerman's avatar

You're right that better chips are a possibility. But Moore's law has been broken since 2010-2015 or so. Indeed, Nvidia's CEO doesn't think it holds any longer. This means that chip advancement is slowing down right when chip demand is taking off, which is a bad combination.

Regardless, similar to cheap power, advances here aren't really correlated with AI, unless you believe that we can ask an LLM to bootstrap itself better chips and have it work - which IMHO is weirdo AI evangelism.

BronxZooCobra's avatar

You seem to be dealing with a lot of status quo bias. AI hit the mainstream a little over three years ago. For a chip fab, going from design kickoff to producing at scale takes 3-5 years. There is a ton of optimization still to be done.

Ethan Cordray's avatar

I'd just like to point out that you're wrong that people thought that computers could never drive a car or crack jokes. See: Knight Rider, and The Jetsons. We're super good at imagining these things.

BronxZooCobra's avatar

Now that you mention it, the new Alexa+ is amazing, and it would be even better if they could license the Knight Rider voice.

Ethan Cordray's avatar

Oh, I fuckin' hate the new Alexa+. I turned that shit off after just a couple of minutes. Thank God they let me revert back to my good ol' dumb voice robot!

BronxZooCobra's avatar

Really - it works perfectly and knows everything. I can tell it to adjust the thermostats or close the blinds any way I want and it just works. There is no need to use exactly the correct syntax. It can also answer nearly any question perfectly.

What issues did you have with it?

Ethan Cordray's avatar

It sounds horrible and obnoxious. It intrudes upon my peace of mind.

The Upright Man.'s avatar

One of my wife's written rules of my marriage is No Dutch Ovens.

Mine is No Machine That Talks Back To Me.

Dadio's avatar

How on earth does your wife prepare your pot roasts, beef bourguignon, and short ribs? In such a dystopian home I would deeply rely on the support and encouragement of a machine that talked back to me.

SVF's avatar

That something dumb was said by a bunch of people who have no relevant knowledge, education, experience, or contact with the broader field/technology they're talking about doesn't mean much. No we didn't get star trek replicators, but nobody worth listening to actually thought we would on any reasonable timescale.

In the meantime the 3D printing industry has made enormous advances and had a significant impact on quite a few industries, and that number is growing every day. It's worked its way into real-world manufacturing on a large and accelerating scale. It's relatively common in aerospace. In consumer products. Apple has used additive manufacturing in mass-production of some of their core products, and that trend is only going to accelerate. 3D printing success isn't measured only by how much it diminishes other industries, and again nobody worth listening to said "Oh cool 3D printers, that means in just a few years we won't need surface grinders anymore!" That's not how it works.

Hard to source parts ARE being replaced on demand by 3D printing. The reason you haven't noticed this is A) you don't care, and B) it's fundamentally a niche use case, since replacing "hard to source parts on demand" isn't ever going to be a mainstay in mass-production, because if your plan from the jump includes 'hard to source parts' then you've already gone very wrong. Things aren't hard to source when you pay companies to build them for you en masse. That's kind of how that works.

Maybe don't listen to the "makers" aka 19 year old Redditors showing off their big-titted anime 3D prints. If you're using the maker "community" as your barometer for the state of 3D printing as an industry, the only thing that demonstrates is your own poor judgment.

Joshua Hughes's avatar

I was just at SHOT Show in Vegas and the supplier showcase was loaded with companies doing 3D metal printing. It's coming, just maybe not as fast as some people promised.

Karl Zimmerman's avatar

Yeah, I didn't mean to imply it's not impacting things. I've seen 3D printed houses, even small bridges. Not to mention all the 3D printed things in medicine and biotech.

Yet these technologies weren't disruptive, they were (no pun intended) additive. And economic viability is happening slowly. Which is how the vast majority of technology works.

Most "old tech" isn't buggy whip manufacturing. Even in that case, it hung on far longer than most people think. My mother remembered people still doing horse and carriage deliveries in the 1950s!

Feral Finster's avatar

I was thinking of nuclear power. Folks like David Sarnoff (who was many things, but he was - at least by human standards - neither stupid nor a wild-eyed romantic) were predicting every home and car would have its very own little nuclear reactor.

Sleazy E's avatar

"Your opinion only matters if you make a bet" is perhaps the most pathetic of the Rationalist positions.

None of the Above's avatar

It's a reaction to the fact that making crazy predictions is a cheap form of clickbait. A bet is a tax on bullshit.

Sleazy E's avatar

It seems to me that this belief emerges from the sort of on-the-spectrum mindset they hold. Context, social interaction, and theory of mind are weak areas for most rationalists, but these skills are necessary to evaluate the trustworthiness of other humans. Without those skills, rationalists have a strong tendency to try to reduce everything to numbers and logic, resulting in a shallow and financialized perspective on the world.

None of the Above's avatar

Pundits who are mostly catering to non-autists make up bullshit all the time--this was some of Tetlock's early work, in fact. The benefit of a bet is partly just forcing people to make a testable prediction and getting someone to bother checking whether they got it right, but also to make it cost something (even only a token amount) to make inane claims.

Sleazy E's avatar

That's what reputation is for, though.

None of the Above's avatar

How many Friedman units does it take for a prominent pundit to lose credibility due to making wildly wrong predictions?

Feral Finster's avatar

It devolves into endless lawyering over definitions.

Not to mention, good luck collecting. I guess that's why british gentlemen's clubs had a "bet book" where bets and conditions were made public, so everyone could determine for themselves whether Sir Rupert Buggeringham was in fact a bounder if he didn't pay up on that bet on whether he could steal the Marquis of Sodomford's merkin.

The Upright Man.'s avatar

I declare! No man with a merkin is any man at all!

(10 pounds on the little fellow, they have guts...)

J Mann's avatar

IMHO, your opinion becomes much more interesting if you make a bet. (1) it encourages you to write down specifically what you are predicting, which lets me consider it more carefully, (2) at the end of the bet, it gives me some information (not a ton, sure, but more than otherwise) about how reliable you might be in the future, and (3) it lets me see which side of a dispute turned out to be right without one side moving the goalposts and saying they really meant something else.

I'm also interested if you are some kind of superforecaster, or you have substantial expertise in the area being predicted, but otherwise, why would my opinion or yours matter to someone on the internet?

Michael's avatar

It’s a social norm imported from the culture of traders, where it is reasonable. If people are in the business of making bets all day and then suddenly they’re too scared to bet, it’s suspicious. Not so good to inflict this on normal people.

J Mann's avatar

Why would I be interested in a normal person's opinion about AI if they don't make a bet?

Freddie has some domain experience in education, mental health, and Marxism, among other areas, and he's an outstanding stylist as a writer, but there's no particular reason to think that his opinion about AI is any more valuable than anyone else of similar intelligence, and some reason to think that it's less valuable than the opinions of people with relevant expertise. For reasons I lay out in my post below, I think that his reasoning (at least as I understand it) is flawed and doesn't address the reasons I hold my own opinions about AI.

But a bet makes things more interesting. (1) It sets out specific conditions for Freddie and Scott's predictions, and (2) if the conditions are chosen well, it allows us to say whose conditions were met after the time period runs. This (1) helps me understand their predictions better and (2) lets me test my own predictions for accuracy.

I understand the trader culture to be more about general scorekeeping of who is a better overall predictor, as well as testing confidence by seeing if people will gamble on their own predictions vs someone else's. That's not nothing, but I'm less interested in it. For example, I don't care if Scott and Freddie put money on the bet, or how much.

Joseph Shipman's avatar

“No single BLS occupational category will have lost 50% or more of jobs between now and February 14th 2029”

This condition needs to be fixed because their categories can be extremely narrow. You should have a minimum size for a category to qualify such as “1 million jobs”.

None of the Above's avatar

This seems like the most likely place for Freddie to lose the bet to me.

Most of the other stuff there is saying something like "the US economy will be recognizably the same critter in three years that it is now," which I think is almost guaranteed--only some kind of godawful catastrophe like a nuclear war or covid x 10 or something will keep that from happening three years from now, because even amazing new technology takes many years to get adopted across most industries.

But even if AI is not transformative for the whole economy, there are plenty of narrow fields where it might be transformative. A few examples that seem plausible to me: (I don't know if any of these are BLS categories)

a. There's a set of people whose job is basically to write copy---low-effort stuff to go on a website or in an ad or (back when they were economically viable) in a print publication. Existing AI tools can do that now. They can't write as well as a very good writer, but most of the people writing that kind of copy weren't amazing writers either. The market for those folks has been bad for a long time, but it seems like it is going to (perhaps already has) evaporate.

b. The same for illustrators---if you want a good-enough image to use for a blog post or something, you can get it from an AI tool and not pay anyone. Again, this isn't giving you Norman Rockwell or something, but it's giving you a good-enough image and you don't have to pay anything (or much) for it.

c. Call center employees---it seems like more and more of this job is being automated away. And while regulatory or administrative barriers may slow down stuff like cab/uber drivers being replaced by self-driving vehicles even when the technology is ready for prime-time, everybody is already trying to squeeze every last penny out of the amount they spend for customer support, helpdesk, etc., employees. The companies that happily fire their whole staff in Utah and hire some service that runs out of India or Ireland or wherever else for 30% less will absolutely be willing to switch over to ChatBot3000 that costs another 30% less and fire the Irish/Indians/whomever. Errors that make customers angry or frustrated but don't cost money immediately have not stopped a race to the bottom on these jobs so far, so an occasional hallucination from the chat bot probably won't, either.

Even worse, you have a multiple-comparisons problem. Does anyone know how often it has happened in the past that one of the many BLS categories has fallen by 50% in a 3-year span?

Joseph Shipman's avatar

Now that I’ve got you here, Scott, can I request that you please unblock me on your blog? You missed context and unfairly blocked me for using the trigger phrase “I’m done here” without understanding that it was right after I first made a point saying “X, and I explicitly don’t want to discuss the adjacent issue Y here because it’s off point and contentious” and several people kept trying to engage me in a discussion of Y (I suspect in hopes they could get me to say something about Y that would allow them to dismiss my saying X).

Chris Langston's avatar

As far as I can tell, this is the only comment so far (other than mine) that mentions specific things that AI can do right now. And I should acknowledge that I did not think of the use case of writing short-to-medium advertising/infotizing copy (as well as making fanciful animations of nonexistent products - meta vaporware?). But this addition does not change my view that the only thing AI can do is stuff that isn't worth doing anyway - in both the moral and practical senses.

None of the Above's avatar

I've used Claude to speed up real work. Stuff like fixing the formatting in a file, finding references and tutorials in something I need to understand better, writing short bits of code to do some one-off task or to call some library I need to use once, etc. All of that is worthwhile. Having an AI tool transcribe spoken words, or translate some short thing from French to English, also quite useful in practice.

The question Freddie is getting at isn't whether any of the tools are useful or morally tainted, but whether they're going to radically upend the world in the near future.

Sarah Marzen's avatar

I wouldn't take the bet, but I will tell you that AI has been really useful for me in my job. It is value-added if you know how to use it.

Mike's avatar

I use it every day at work and it's great. But I do think it's important to define terms. "AI is completely useless" is an untenable position in 2026. "AI will be widely adopted across multiple industries, but will fail to turbocharge GDP or cause mass unemployment (or for that matter make back the insane amounts being invested in it)" is quite defensible. I legitimately have no idea. My gut says Freddie would win this bet if it's kept to 3 years. In the next 20-30 years, I think we're all in for a pretty crazy ride.

Ethan Cordray's avatar

Mike, can I ask you the same questions I asked Sarah: What is your job? What specific tasks do you use AI for? Which specific products do you use? How much value would you say it adds, if you think you can quantify it? And how much do you (or your employer) currently pay to use it?

Mike's avatar

I'm a programmer/data engineer. I have a paid GPT subscription, and use it every day to help me code - I don't have it write full programs, but often have it write single methods. I ask it a lot of questions, especially about APIs and devops stuff. I'd estimate it makes me 20-30% more productive (I remember spending whole days Googling/researching documentation on how to do something and that rarely happens anymore).

I'm researching how best to use an API to have it programmatically do tasks. I still haven't wrapped my head around exactly how I will do this but it has the potential to raise productivity a lot. My company is looking into buying some solutions here at the enterprise level.
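For the curious, using "an API to programmatically do tasks" usually means wrapping a chat-completion call in ordinary code and validating the response before trusting it. A minimal sketch: `call_llm` is a stub standing in for a real vendor API call, and the prompt and JSON shape are illustrative assumptions, not any particular product's interface:

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion API call (in practice an
    HTTP request with an API key). Stubbed so the sketch runs offline."""
    return json.dumps({"table": "orders", "columns": ["id", "total"]})

def extract_schema(ddl: str) -> dict:
    """Ask the model to summarize a SQL DDL statement as JSON,
    then validate the response before trusting it downstream."""
    prompt = f"Summarize this DDL as JSON with keys 'table' and 'columns':\n{ddl}"
    raw = call_llm(prompt)
    parsed = json.loads(raw)  # fails loudly on malformed model output
    assert {"table", "columns"} <= parsed.keys()
    return parsed

schema = extract_schema("CREATE TABLE orders (id INT, total DECIMAL);")
```

The validation step is the important part of the pattern: because the model can hallucinate, the pipeline checks structure before anything downstream consumes it.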

To add some context - 12 months ago I thought AI was pretty useless and wrong all the time; I was much more skeptical, and pretty annoyed by the hype.

Ethan Cordray's avatar

Thanks, I really appreciate it!

Dave Rolsky's avatar

I'm a software dev and my experience is the same as yours. The AI can't replace me (yet?), but it definitely goes faster than me for some things. I find it's particularly good for large amounts of tedious work (like refactor every use of this API in our code base to use generics, which was literally thousands of lines of changes). I can set it on task, do some other stuff (email, slack, doc review, etc.), come back, review what it's done so far.

And some of this tedious stuff it does well are things that I just wouldn't bother doing myself because they're too annoying.

But yeah, it's maybe 20-30%. Definitely not 100%+, and I don't see it getting there any time soon.

Ethan Cordray's avatar

What is your job? What specific tasks do you use AI for? Which specific products do you use? How much value would you say it adds, if you think you can quantify it? And how much do you (or your employer) currently pay to use it?

I don't mean this to be rude, just honest and real questions. I feel like I've heard very few specific descriptions of AI use, outside of computer programming, and I need to get a better picture of what people are actually doing with it.

Sarah Marzen's avatar

I'm a professor of physics. I've used AI for coding and help with math research. It was shocking when it helped me and a collaborator with math research. Without the coding, I would have to have spent a week or so trying to extract data from old pdfs, and we would have never thought of the inequality. It was really quite useful. I've also tried it out on lesson planning. It's not as good there. I'm on the paid version of Claude, which seems pretty darn good at coding and math. ChatGPT was less good at the particular math problem that we had.

Ethan Cordray's avatar

Thanks a lot! I really appreciate the answer!

Gebus's avatar

Were you able to independently verify the code, or the data that was extracted from the PDFs? I guess I don't fully understand the application but if you're automating the extraction of information from a PDF that seems like something that would be prone to error even without AI, but I'd be particularly concerned about trusting the veracity of calculations or transcriptions from an AI system.

Sarah Marzen's avatar

I was able to verify the code, and it actually worked. Quite amazing. Really sped up what I was able to do.

Mojangles's avatar

I'm a lawyer. I use ChatGPT for making draft presentations I can tweak, because it's tedious, and as a turbo Google for learning about things with a slick conversational interface. I don't think it adds any value in my work. I'd pay maybe $50 a year for it?

Zaethro's avatar

Corporate insistence that AI works as advertised has and will continue to have a greater economic impact than the technology itself.

Esther Berry's avatar

I used to say this as a joke, but now I actually am beginning to believe that there is some kind of psychological connection between the evident cases of LLM-facilitated psychosis and the mass media psychosis with regards to the capacity and trajectory of AI. And maybe I'm wrong but it seems like the dividing line is people who regularly chat with LLMs vs. people who don't. I often hear that if you don't regularly use LLMs, you're going to be sadly ignorant of their true power, and whatnot. But is it possible that the experienced "insight" someone gets from harnessing the power of LLMs to do every conceivable task including idle talking is actually the experience of their brain getting mushier in a very particular way that makes them highly suggestible to various crackpot theories like "you, personally, are a god" or "AIs are going to replace all jobs in the next decade"? It's at least apparent to me that those of my undergrad students who talk to LLMs a lot develop a massive mental blockade to understanding How LLMs Work; they simply cannot believe it's text prediction.
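For what it's worth, "text prediction" is meant literally: at bottom, a language model maps a context to a probability distribution over next tokens, emits one, and repeats. A toy bigram predictor shows the same loop at trivial scale (nothing here resembles a real transformer except the sampling structure):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat sat on the rug".split()

# Count which word follows which: a bigram "model".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Pick the most frequent continuation; a real LLM does the same
    with learned probabilities over a vocabulary of ~100k tokens."""
    return following[word].most_common(1)[0][0]

# Generate by feeding each prediction back in as the new context.
text = ["the"]
for _ in range(3):
    text.append(predict_next(text[-1]))
```

Scaled up enormously, with learned rather than counted statistics, that generate-and-feed-back loop is the whole mechanism.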

geoduck's avatar

Perhaps this is similar to how psychedelics create the illusion of a profound experience. (Which can be very convincing and persistent!)

Esther Berry's avatar

Yeah that’s what I’m thinking. Imagine if there was a big question about whether a new kind of psychedelics was going to unlock a higher stage of human consciousness in the next 10 years. It’d be hard for there to be a balanced evaluation of the question if the only people who were having that conversation were, y’know, on drugs.

Mojangles's avatar

cars are explosion machines. an explosion can't get you anywhere, it's nonsensical. people claiming that 'cars are fun' are possibly high on the fumes (very dangerous for the environment!) and maybe brain damaged. Walking might take a little longer, but there's literally nothing an explosion machine (as I've taken to calling them!) can do that a good old fashioned pair of legs can't.

Steven S's avatar

To be fair (correct me if I'm wrong) NYT guest writers don't get to write the annoying, clickbaity titles of their articles. (I wonder if AI writes them for the Times now?)

"We're All Polyamorous Now. It's You, Me and the A.I." quotes AI developers and researchers that the author interviewed for her Master's degree project, and a few stats on use, but the 'millions' that she implies are 'polyamorous' with AI now are not surveyed in any meaningful way.

Liam's avatar
Feb 13 · Edited

I just had a professional development meeting about AI, as I'm sure many teachers have done or are doing or will do soon. A lot of it was reasonably skeptical, measured even, except that the entire project of technology in schools has led to worse outcomes and no gains and the last three decades of education have been made worse by laptops and phones and screens generally. Anyhow, though, the key is that it was 'skeptical' but not actually skeptical. It took as read things that I don't think are true at all, like the idea that LLMs are a step towards general intelligence. I think that's about as true as the idea that Richard Branson's Virgin Galactic trips were a step towards exploiting the mineral wealth of the Kuiper belt. It's absurd; the man was in the business of flying up really high and then taking a nosedive so people felt weightless, essentially skydiving with a plane around you and no feeling of wind, in order to mimic microgravity.

I feel really good about this metaphor because it is exactly the same way LLMs mimic intelligence. I had a guy who I know for a FACT is specialized in poetry stand up and tell me that LLMs work the way human brains work, that the human brain is the most wonderful LLM of all. No, dude. This wasn't true when people in the 17th century spoke of clockwork, or the 18th spoke of galvanism, and it's not true today. We're a totally different thing. We're pretty good at faking that thing! We're great at faking a lot of things like that! I saw Tupac's ghost do a concert ten years ago! Thus to LLM -> General Artificial Intelligence.

Ben Pobjie's avatar

I dunno, man. When I saw the last Mission Impossible movie last year, about an AI that tries to eliminate humanity, I thought it was just a bit of fun. But now that that is actually happening I’m a little concerned.

Philippe Saner's avatar

AI is not trying to eliminate humanity. Partly because it lacks the independent volition to try anything on its own, and partly because nobody wants to use it to eliminate humanity.

Ben Pobjie's avatar

So you’re one of them.

Georg Buehler's avatar

The solution to your "wander around the store" problem does exist, and is implemented at Home Depot. You can use your phone to search for any product, and it will tell you the aisle and bin, and how many are still in stock. This makes sense for the consumers of that store: a contractor is there to get more cut-wheels, and he's not going to wander around and make an impulse buy of a drill. Wal-Mart, however, _wants_ you to wander around; they want to maximize your time in the store, walk you past as many products as possible, all in the hope that you buy more. Some people call this experience "shopping" and they find it pleasant. I am not one of those people.

Ethan Cordray's avatar

You can also hire a computer to find products for you in a much more efficiently-designed and indexed store -- it's called an Amazon Fulfillment Center. But I don't think that these alternatives undermine Freddie's essential point, which is that human existence is fundamentally resistant to optimization, and that we are far better at imagining idealized future worlds than we can ever be at living in them.

Sharon's avatar

Sometimes a lack of efficiency can be truly delightful. I've gone a number of places and seen things when travelling that I never would have...without being confused (lost) about where I am.

The Upright Man.'s avatar

If you don't think a contractor is going to make an impulse buy of a drill, you have never spent time with a contractor.

Contractors are people, before they are contractors. And in so being, they have all of the faults of people.

Dadio's avatar

The Walmart app provides identical functionality.

Diversity of Thought's avatar

Sometimes if you ask a Home Depot employee where to find something they just locate it using their phone as well.

RT's avatar

I am not one of those people either.

Home Depot might tell you how many are in stock, but it's almost always a lie, unless it says it's out of stock.

And if it reports fewer than 5, your trip is now a crap-shoot. Grrr.

SVF's avatar

Some, or even a lot of, skepticism is understandable. But I don't know how much more clear you could make it that you flat out aren't interested in turning your brain on when it comes to this topic. By this kind of logic, how do you know that a pregnant woman won't spontaneously give birth to a dog?? No no, I don't want to hear any "speculation" I'll believe it when I see it, because there's literally no other way to know. No I will accept absolutely no arguments of any kind until I physically see a not-dog being born. And even if I do, that only happened to THIS woman, THIS time. We don't know a dog won't be born to the next one!

Like sure I guess that's one way to view the world. Have at it. At best you're rehashing philosophy about what knowledge even is and whether we can actually ever know anything.

It's fair to debate the appropriate balance of fear/skepticism/optimism/whatever, but to just flat out insist that no, this is definitely not a thing that anyone should be concerned or excited about because it literally can't do a single thing that's useful or relevant and never will is...you're living in your own little world at that point.

What you're doing in this article isn't skepticism, it's dogmatic opposition rooted in whatever ideological obsession you have with this topic, given how passionately you write about it in a way that seems to go far beyond "I'm tired of reading about it." Nobody could read these articles and conclude, "Yes, this seems like a person with a suitable background who approached this topic with an open mind before rationally concluding that everyone else but him was insane."

Are we in a bubble? Is there too much hype? Is there not enough hype? Will it be transformative over 5/10/20/50 years? Yes, no, maybe, nobody can literally predict the future - touché, I guess, because what if tomorrow it rains donuts (prove me wrong)?

Every transformational technology of the past has taken many years, sometimes decades, to go from inception to becoming transformative in any real sense of the word - with those timelines generally getting longer the further back you go. We are barely a few years in. And whether a technology is transformative or not is almost entirely decoupled from how much or how loudly people talk about it. Something that seems lost on you.

If you have no interest in AI you could just not talk about it. Not everyone needs to have an opinion about everything, especially if they can't be bothered to earnestly engage with the topic in good faith, one way or the other.

Liam's avatar

There's a lot of weird and misplaced hostility here. Also this isn't an article about AI, it's an article about media.

SVF's avatar

"Weird and misplaced hostility" is a perfect description of each of the many articles about AI that he writes. He clearly has some beef with the core concept. I don't buy the "I just can't live my life anymore with all these headlines breaking through the windows!" angle. Everyone else seems to be getting on fine.

Liam's avatar

I don't think so at all. His hostility is to groundless fantasizing about how we'll all be ushered into an age of unknowable delights; this is something that just about everyone should be hostile to, it's neither weird nor misplaced.

Also who is everyone else? My man, you're posting on a newsletter about political and media criticism.

Scott Beynon's avatar

"You can’t escape what you don’t like about your life because you can’t escape yourself."

This reminds me of those times when I get together with old friends and play the "things I would do differently if I had my life over again" game.

It never works for me because I am not convinced that making different decisions from the admittedly bad ones I made would necessarily leave me in a better place. The problem is not with the decisions but with the person making them. I am a flawed person; unless I changed who I am by fixing some of my character flaws, it's not clear to me that I wouldn't have made equally bad (or worse) decisions and ended up in the same (or worse) place.

Bill Kittler's avatar

Walmart, Costco, and other stores periodically re-arrange the location of items in their stores precisely to prevent you from shopping efficiently. There is a balance between providing you the "opportunity" to discover things you didn't know you needed (not on your list) and making you angry that an item has been moved to somewhere else. This is "retail science", like putting candy, soda, and other impulse purchases where you must pass them to check out. They are not going to give you a map but will point you to what you're looking for should you find someone to ask.

Feral Finster's avatar

Aldi seems to do the opposite. Then again, Aldi has an interesting relationship to the rest of retail.

Dadio's avatar

The Walmart app 100% provides a map and will provide a specific location for any item with a simple search. No need to "find someone to ask." In fact, asking an employee will likely result in them pulling out a device to offer the same info from the same database.

Bill Kittler's avatar

That data goes to corporate analytics, along with anything else it can suck from your device. Anonymized, I am sure.

Dadio's avatar

Uh, yeah? So the Blob knows that Freddie wanted to find the location of melting chocolate on February 11 at a Walmart in Connecticut? How is the cost of that convenience meaningful in light of the fact that the Blob will know in 7 minutes that Freddie paid for the melting chocolate at self-checkout #3? Even if he pays with cash, his facial image is captured.

You "skeptics" that pretend that you preserve some imaginary "privacy" by denying yourselves the convenience of an app amaze me.

Bill Kittler's avatar

I do not in fact care, but I am aware that as you say, “privacy” is imaginary.