Above, on the left, is an image of the late actor John Candy created by the unfathomably complicated AI image generator Midjourney, which no doubt based this image on thousands of pictures of its subject.
*Right now*, yes, AI sucks at several things, and any revolutionary potential it has is just potential. But there are enough of the world's smartest minds working on these problems and many others that I believe it can't help but get much better at all of these things very quickly. Much like the internet in the early '90s, it's just a matter of who ultimately controls the technology and how they wield it.
I think one thing that *humans* suck at is the ability to predict how much any given technology will "change the world", however you want to define it. Tbh I try to stay away from pessimism or optimism about the outcomes of AI. But no matter its ultimate impact, the biggest concern I have with AI is keeping it open source, because our odds of positive outcomes for humanity are much better when ML and these models don't just become exclusive tools of the most powerful Silicon Valley giants.
What do you mean by open source? Usually that would refer only to the human-written program source code. In addition to that, AI solutions often depend heavily on one or more models trained with huge and/or well-curated sets of data collected and processed at great expense.
They will wield it to devalue human labor, first and foremost. As if that is the most pressing issue of our time. “Humans are making money from skills and work. We MUST stop them! Useless eater/breeders.”
I find the whole thing a most misanthropic endeavor.
From a funding perspective we barely gave fusion a shot. Had we poured the type of capital into fusion that companies are preparing to inject into AI right now who knows where we would be.
What's the model that correlates funding with results in pure research?
The human genome was completely mapped more than a decade ago in a massive enterprise that cost a tremendous amount of money and human effort. Have they cured cancer yet?
I'll second Slaw, but grant that unlike fusion research, which has been confined to U.S. DoE labs and other public and private research facilities, so-called Artificial "Intelligence" and the laughable get-rich-quick hype around it are going to waste far more capital and are likely to do a great deal more harm to the general public. On the bright side, more people might notice the rip-off. The bigger they come, the harder they fall.
The Human Genome Project also had "the world's smartest minds" working on it and applications of it, and while it was a remarkable achievement, it still did not deliver the revolution in medicine that was promised.
I'm not so sure about that. Many therapies are being developed right now that are possible because of this exact type of information. The Human Genome Project may not have delivered, but it was the start of something much bigger that does seem to be delivering now. AI will likely be much the same, overhyped at the beginning, but revolutionary in the coming decades.
We're 20 years out from the first complete human genome, and can now generate genomic data so quickly that our storage and analytical systems can't keep up. We have learned a ton about human ancestry but the medical results have been disappointing. It turns out that the genetic architecture of human phenotypes and disease is really complicated!
I'm someone who's worked adjacent to this space. There are technical limitations to how much "better" the technology can get and there are reasons to believe that we are now well past the point where it is going to get "better at all of these things very quickly."
Generative AI is one of those things where the first 80-90% is comparatively easy and the final 10-20% is orders of magnitude more difficult. We are now in that final region.
It's possible that Midjourney is programmed not to accurately draw real people's faces for legal reasons. Your statements about Midjourney not following verbal instructions very well and large language models generating factually incorrect claims are both true.
Yeah, I wondered about guardrailing as a possible explanation here, too. In my limited experience with GPT-3, it was less flexible than earlier versions, presumably in part because of guardrailing. When I tried to engage it in a conversation about the possibility of its own consciousness, it gave me a line, ad nauseam, about being an LLM and nothing more. Earlier iterations seemed a bit freer, more willing to "Talk to me like you're GPT-9, who has just done LSD in the year 2043 and has been reading a lot of James Joyce".
Still, GPT-3 was very impressive when I gave it an essay assignment for an undergrad Phil course I teach here in Toronto. I've been forced to ask my students increasingly twisted questions, increasingly rooted in my particular teaching of the material, even my own examples, to get around rampant GPT use. It sure feels like a losing battle sometimes. Yes, there are questions it consistently hallucinates on. For example, when I asked it about the Borges story "The Approach to Al-Mu'tasim", it hallucinated some kind of Borges composite story, half melded with Arabic history. But when I asked it to interpret La Jetée by applying Jaynes's bicameralism plus Bostrom's simulation argument, it gave me an interesting stereoscopic take that could get a student an A+, if expanded on and supported with examples from the sources.
I kinda get the hype because the current models, for all the "Blurst of Times" monkey business, [https://www.youtube.com/watch?v=no_elVGGgW8] do often amaze me, and I'm cognizant that we're in the first generations of LLMs here.
"Most phone programs were equipped with cosmetic video subprograms written to bring the video image of the owner into greater accordance with the more widespread paradigms of personal beauty, erasing blemishes and subtly molding facial outlines to meet idealized statistical norms."
William Gibson, Count Zero
I think that pretty well describes what's happening within these models.
Tho I still think the fact that generation is at this point so fast, cheap, and 24-7 means that we'll be flooded with a seismic new level of spam across all aspects of society for a generation. The content will suck. It will be everywhere.
What the AI cheerleaders don't understand is that for the truly artistic images "generated" by Midjourney, there is an actual human refining and making the base image better. AI is nothing more than a poorly programmed information regurgitation machine. It CAN be useful for assisting with certain tasks, but in reality you still need human brain power to sift through the information and make actual sense and usefulness of the AI-puke.
I've dabbled w/ Midjourney and other "AI" generative imaging plug-ins for design work and it's useful for spitting out tons of iterative design "ideas". I'm going to use air quotes a lot in this comment because the people ascribing creativity, thought, reasoning and logic to AI chat/image bots are either AI fanboys or don't actually understand the creative process and abilities that humans have.
AI doesn't "understand" or "think" or even "create" anything. It's a program that data mines for information that might be related to the prompt. But try writing the prompt several different ways and inevitably the AI just pukes out crap. People have a hard time using Google to its full potential and we've had 2 decades of learning how to write search queries into Google and then parsing through what it gave us. AI simply sorts the data on a more granular level but again, it is now perceived by users as "thinking" because it does the parsing for us and still we get crap. Crap in, crap out.
I'm not threatened by being replaced by AI any time soon. The job I do has layers of complexity that an AI cannot resolve. AI can be a useful tool; however, it isn't the greatest invention since the wheel or the Big Bang.
What I am concerned about is the rush to use AI for automation purposes to reduce human labor requirements and "errors", when we've already witnessed AI failing on many levels. I just hope we don't get more Techbros thinking AI is going to save us from ourselves and beta testing this shit in systems that have real consequences.
The last paragraph taps into a major concern in US healthcare unions. It's universally established that attentive bedside care produces the best health outcomes in hospitals. So low patient-to-nurse staffing ratios result in fewer deaths, but, of course, you have to pay for that skilled labor. Hospitals are trying to drive down these costs by implementing "AI" tools that result in less human attention on patients, resulting in more sickness, complications, and death. Somehow I think that these hospital CEOs will not send their ailing relatives to chatbot clinics.
"Registered Nurse Chatbot 3000 (brought to you by OmegaSuperHealthInsuranceTech) is here to serve your needs. Please type your illness queries in to my touch screen conveniently located on my abdominal shelf. Please be sure to scan your barcode so we can incrementally bill you each time you enter a query or request. Charges may occur for treatments our algorithm has determined may not be covered by your health insurance which is a wholly owned subsidy of OSHIT. Thank you for choosing Chatbot 3000 for all your health needs."
This reminds me of the joke, "AI is anything that doesn't work yet" - once it works reliably well at superhuman levels, like for classifying images, we yawn and no longer talk about it as "AI" - instead focusing on the next thing computers can't quite do yet. So while I take your point about some of the hype - there's an enormous amount of useful stuff happening behind the scenes based on deep learning models - automatic translation between languages, image classification and captioning, advanced safety systems, super powered image editing (with humans still fully in control, but much more productive) etc.
I was recently in Europe; being able to read random street signs or menus in unfamiliar languages and talk to people in their own language using the iOS translate app is AMAZING. Like, there was a sign with a bunch of text taped to the window of a shop, you take a picture of it and the app translates all the text preserving font/color/size so you can see what’s going on - the one I’m thinking of was saying this block is a closed area for street construction and telling people where they can drop off garbage/recycling in the interim.
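Under the hood, that kind of app chains an OCR step with a translation model. Here's a rough, minimal sketch of the idea, not Apple's actual pipeline; pytesseract and a Helsinki-NLP MarianMT model are my stand-in choices, and the image filename is hypothetical:

```python
# Minimal sketch: OCR a photo of a sign, then machine-translate the text.
# Assumes Tesseract (with German language data), pillow, pytesseract, and
# transformers are installed; a production app adds layout and font handling.
from PIL import Image
import pytesseract
from transformers import pipeline

def translate_sign(photo_path: str) -> str:
    # Extract the printed text from the photo (German, in this example).
    text = pytesseract.image_to_string(Image.open(photo_path), lang="deu")
    # Translate it with an off-the-shelf German-to-English model.
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
    return translator(text)[0]["translation_text"]

print(translate_sign("street_sign.jpg"))  # hypothetical image file
```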
Today's "AI" is astoundingly successful compared to anything we had previously - it can write high-school essays that get better grades than the student users can.
The "Big Bang" and similar hype is overblown for sure based on *today's* AI, but *if* it continues improving at the pace of the last few years, it'll be justified. Writers always project ahead based on their guess of where things are going, not where they are now.
Finally, expecting the machine to create accurate human faces for particular individuals is about the most difficult possible artistic demand. Humans have evolved an absolutely astounding ability to distinguish facial features of other humans; there's obvious evolutionary pressure to do this well.
We can't do it with other animals. You're expecting the machine to look at 1000 photos of *one particular* sea lion, or dog, and produce a new image of that particular animal (not just the species - the individual) that's uniquely recognizable. No human artist can do that for any species except our own.
Expecting the machine to be able to do it for humans, when we ourselves can't do it for any other species than our own, seems pretty unreasonable. At this stage.
Tell that to Jane Goodall. Recognizing individual chimps or orcas or wolves or whatever is trivial for zoologists who follow the same pod/pack for enough time.
I think that's true only to a limited extent. Sure Jane Goodall could recognize a particular chimp from among the 100 chimps she's been following. But I doubt (I'm only expressing skepticism, not knowledge here) she could distinguish that chimp from a very similar looking one she'd never encountered before.
But humans can recognize Goldie Hawn (or most any adult) as unique among 9 billion humans on the planet.
I am pretty sure that Goodall ended up following more than 100 chimps over her career. And could you recognize Tony Leung, for example, out of a sample of a billion Chinese?
Really? Jobs differ, people differ. Existing AI (esp. the art generation stuff) can already do a lot of the things that people do. The fact that it can't do *everything* is irrelevant. Just to use FdB's post as an example, lots of artwork doesn't require an accurate representation of particular people (or any people).
For some people in some jobs, AI can already do what they do. If and when AI improves, it will be able to take over more of what people do.
Yeah, I really wish this wasn’t the case. I’m getting heavily involved in my company’s massive investment and pivot to AI. It’s super interesting, it’s fun and challenging in a way that I haven’t had at work since my small company was acquired by the giant megacorp, but I’m finding that I spend a huge amount of time reassuring the other lawyers at the company that no, AI is not coming for their jobs. It’s just not even remotely close; the question doesn’t even make sense. It’d be like getting worried that a zoo opening up down the block means you’re going to be attacked by zebras.
They have self driving cars all over SF today. And my self driving car just drove us “95%” of the way home from the beach Sunday.
If you’re out car shopping, compare the self driving available from GM vs. VW/Audi vs. Toyota vs. Honda and tell me there has been no progress. GM Super Cruise vs. whatever pathetic excuse Toyota offers is a massive gap.
Why is all weather and road conditions a requirement? If it could drive you to work and back and you work from home during a blizzard*, wouldn’t that constitute a self driving car?
* As you presumably already do if you live in a blizzard prone area.
Not to mention the fact that in a serious weather condition like a blizzard, sensible human beings wouldn't get out on the roads at all. Is a self-driving vehicle equipped with that much common sense?
Sensible human beings wouldn't; people who have to get to work on the other hand ...
Sorry Jon, it's a really bad day to be in the ER, what with the doctor being too smart to come to work today.
Because not everybody gets to work from home? Because in places like Minnesota the city needs to be buried in six feet of snow before the schools will call a snow day?
If you want a self driving car to replace human drivers it has to be able to handle everything that a human driver can.
Then there'd be no self driving cars in MN, but tons of them in Atlanta, Phoenix, Dallas, Miami, and LA, where none of the humans can or would drive in the snow either.
If self driving cars are ubiquitous in those cities, would you still say we don’t have self driving cars despite millions of people using them every day?
How many of those self driving cars can handle merging onto the BQE?
My point is that nobody sells a "fully autonomous" vehicle. Why? Because there would be thousands and thousands of casualties when the drivers got in and switched on the ignition.
That hasn't changed in 20 years and I doubt that it will change in 20 more. And even the limited forms of self driving that you see in something like a Tesla have not been able to convince either manufacturers or consumers that those features are indispensable components of a car. Why? I suspect it's because a lot of people understand that it's still janky.
I'd love to see a self-driving car manage the dirt road to my house, where there are sheer cliffs on both sides for 1/3 of a mile, with lots of blind corners and switchbacks.
This is a requirement, because otherwise, a human driver is required, hence the vehicle is not self-driving, but actually human-driven with partial delegation.
Level 3 and 4 autopilots create a new danger for drivers, who are, as a population, not able to retain the level of alertness required for safe driving with those autopilots. The aviation industry overcomes this through means not available to the solo car driver.
How is a human driver required? If it can’t drive in the snow then it doesn’t drive in the snow. If it snows in Dallas no humans can drive either.
Except the people who moved in from out of the state?
Telling car manufacturers "This car can't be sold in half the country" isn't exactly a persuasive argument for including a new feature in their vehicles.
So do you seriously think you'll be able to buy a fully self driving car in, for example, the next decade?
I have a self driving car. As I said, it drove us all home from the beach.
Nope.
Drive it up to Wyoming and then drive it through the Rockies. A human being can do that. Do you think that any self driving car, after more than a decade spent in development, could handle that?
It can easily handle that; it even downshifts the transmission so it doesn’t overheat the brakes when heading down from a mountain pass.
If it's that great then how is it that every car on the market isn't self driving?
No, you said it drove you 95% home from the beach. That might be good enough for a bus on an easy fixed route, and you can choose to call your car self-driving if you want. But for most of us, it's not 'self driving' unless it can drive itself 100% of the time.
And no one is remotely close to that, because the last 1% of cases is orders of magnitude harder than the first 99%. Beyond that lies the need to drive enough like humans to avoid causing accidents by surprising human drivers.
Driverless cars in SF are causing problems, though. Maybe solvable, but we're not there yet. https://www.cnn.com/2023/08/14/business/driverless-cars-san-francisco-cruise/index.html https://sfstandard.com/2023/01/15/driverless-waymo-car-digs-itself-into-hole-literally/ https://www.kron4.com/news/bay-area/are-self-driving-cars-a-menace-to-san-francisco-streets/
San Francisco isn’t the world. That was the major initial flaw of Apple Maps.
They have to start somewhere, don’t they?
Can they do it on a wet November in Donegal? Probably not.
I’m not sure what’s even in it for manufacturers except the latest gee-whiz factor. They surely open themselves up to much more litigation than before, liable for every crash.
https://twitter.com/friscolive415/status/1690281516935589888
It reminds me more of when the Segway was going to fundamentally transform how we design cities.
A city designed around Segways (or eBikes or scooters) still sounds great IMO, way better than what we have.
... as long as we have nice weather
What is funny is that scooters are just Segways with the wheels in series as opposed to parallel.
Reminds me of drone delivery.
Nothing new under the sun...so go and find something to hype; that is the way of journalism.
Yes. I remember 15 years ago when email was dead and we were all going to be designing professional quality websites on tiny-screened smartphones in coffee shops.
I don't read a ton about AI in the mainstream media, so maybe I'm just not getting exposed to the right stuff, but is anyone saying it is transformative right now, in this moment? Because that is clearly false, so anyone writing that is getting ahead of themselves. But writing about potential is important.
Were people writing about the internet in the early 90s wrong? It hadn't yet changed the world, but did they deserve to be mocked for saying that it would? Rarely do things remake the world the moment they are invented.
1. The impact of the internet has arguably been dramatically less consequential than is commonly believed.
2. In a previous post I linked to a piece that claims that AI can render us immortal, right now.
1. By what metric? I think the internet has changed the world an incredible amount. The way we interact with each other has been disrupted by the smartphone. The way children grow up has been changed dramatically by social media and access to political/social information few children had exposure to before the internet. Remote work has had an enormous impact on the amount of time I can spend with my partner and my children, personally, as well as eliminating an annoying commute, which saves me an hour of every work day. I would ascribe at least some of the decline in how often people see their friends to the internet as well, and for teens, probably a lot of it. I hardly ever go to a brick-and-mortar store that isn't for groceries anymore, which is very strange compared to my childhood, when I was constantly dragged around on errands, unlike my kids. I could go on about other changes; there are almost too many to name. I suppose if you take them one by one and evaluate the impact, none of them alone are transformative, but taken as a whole it's an incredibly dramatic change to society, and I would find it hard to argue otherwise.
2. This is ridiculous.
But that ridiculousness is of a piece with the overall media portrayal. Again, the claim "AI is about to dramatically change the very nature of human life" is not an extreme outlier claim in our media, it is the DEFAULT claim.
That's a fair critique. I just have a hard time reading "Maybe this stuff just doesn't work very well" and not comparing it to someone seeing the first telephone and saying "Well what are you going to do with that? It can only reach across the room."
The future is understandably hard to predict because of the number of variables involved. That said, what people should keep in mind is that when people talk about "AI" now, they are referring to LLMs. It makes sense to question how well a car could navigate the ocean, and in a similar vein it makes sense to examine what LLMs are well suited for.
Attacking the most hyperbolic claims of people you disagree with is basically the same as attacking a straw man. You’re just letting some other doof construct it for you. This far into the social media revolution it is no longer defensible to be surprised that there are stupid people on all sides of every argument.
Those are not the most hyperbolic claims, that's the default media narrative. That's Eliezer Yudkowsky. That's Sam Altman. You're dismissing the dominant narrative as hyperbole.
Why are people so sensitive about this?
Tech nerds are vastly overrepresented on Substack
Is it really the dominant narrative that AI can make us all immortal right now? I haven’t heard that. I’d say that the Immortality Now claim is an exaggeration of the actual dominant narrative, which is that AI Could Be Really Good or Really Bad. Then they mention the extinction scenario as a kind of coda, and most people take it as gallows humor.
If you want to critique AI boosterism you ought to pay attention to the shifting nature of the upside. Sam Altman says some things about utopian AI to placate the army of nerds that follows him, but the market mostly hears him talking in dollars and cents. That’s why so much money has been pouring into Microsoft and Nvidia stock lately. The irony is that if AI takes off as huge as they want, it will be so fundamental that, like electricity, water, or sewer, it probably won’t be a huge money maker.
Yudkowsky just wants to build a god so he can bring his dead brother back to life, or something else from his weird little mental universe. I’m really disappointed in your lack of vision here Freddie. How could you possibly object to making a guy like that your lodestar?
In the AI hype world, Eliezer Yudkowsky is the strawman and Sam Altman is the steelman. I left their respective Lex Fridman interviews hating the former and liking the latter. Altman came across as calm, empathetic, and thoughtful. He was optimistic without being grandiose. And unlike Yudkowsky, he's not just a naysaying academic, but has skin in the game. I was sincerely impressed, but if you think it's all hucksterism, Altman is the smooth talker you have to be most on guard against.
Yudkowsky is not an academic. Literally, his only qualification is having spouted the same quackery for many years without changing the subject.
I am nowhere near as upset by AI hype as you are, but man do I want to punch Eliezer Yudkowsky! He’s a smug, humorless Gish Galloper who appears to be exasperated by the ignorance of all the people on Earth who aren’t him and is congenitally unable to tell the difference between an argument and a sneer. His interview with Lex Fridman pushed buttons I didn’t even know I had, and I now deem myself too annoyed to be a fair judge of his work. But in defense of even Yudkowsky, he comes from a group of thinkers who have been beating the drum about the threat from AI for many years now. Nick Bostrom’s book Superintelligence is their founding text, and your fellow Substack superstar Scott Alexander does periodic roundups of current strains of this thought. I’m a Singularity skeptic and think they’re all overestimating AI’s power, but there’s plenty of good faith attempts to predict realistic dangers that deserve to be taken seriously. So beneath all the peak-hype screeching I find intelligent discourse, even when it is aligned with the screechiest screecher of them all.
> I now deem myself too annoyed to be a fair judge of [Yudkowsky’s] work
No, that is a fair assessment of his work. Imagine what it’s like to have lived with the annoyance for many years, only to see it explode in popularity and reach over the last 18 months or so.
> there’s plenty of good faith attempts to predict realistic dangers that deserve to be taken seriously
Really? Would you mind linking some? When I look, I never find realistic attempts to predict the problems and mitigate them. What I see instead is the claim that, since a superintelligence can do anything (including violate the laws of physics), magical thinking is an accurate way to predict what it will do, doomerism is the only prudent stand, Luddism the only practical response.
My links would just be Nick Bostrom's book or the occasional AI Alignment roundups on Astral Codex Ten. I'm not saying I find these arguments convincing. There's too much positing of exponential growth curves and overestimation of the power of human intelligence. I think it's wrong, just not not-even-wrong.
I also do this stuff for a living, so I feel an obligation to steelman my critics. I find this activity worthwhile with AI alignment thought experiments, but not with whatever Yudkowsky is on about.
I think proliferating this point of view, while true in part, also kills the advantage that AI is genuinely giving big and small corporations. People who follow you should start using it in every part of their lives to increase their productivity tenfold.
I get the sneaking suspicion that a few posts that aren't cheering AI's brave new world won't overwhelm the thousands and thousands that do. But hey, I understand, we need to embrace this new tech without question because it's gonna do SO MUCH for us. We did that with the internet and social media and crypto and that of course, turned out brilliantly. For some.
Lol, AI is not increasing anybody’s productivity tenfold any time soon.
If AI seriously improves your productivity now you probably should have been laid off a decade ago. I could have scripted your position out of existence.
If you are scripting, i.e. coding, then you're first in line, buddy.
That level of ignorance is pretty typical for the crowd hyping AI.
Well, not yet... but so far its best use case is in creating code, making a coder 10x more productive... I'm also able to track job wages on boards... No one knows the future, but certainly not you either. Also, the crowd hyping AI: is that the trillions going into chips and AI ventures globally, or people like me on Substack?
Anybody who thinks that AI can write code is asking to lose their job. Even at the simplest level of consideration--producing code without bugs--something like ChatGPT will produce stuff that is just wrong or buggy.
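To make that concrete, here is a contrived Python illustration (my own example, not actual ChatGPT output) of the kind of plausible-looking bug that slips through: the code runs, the happy path looks right, and the edge case is silently wrong.

```python
def moving_average_buggy(xs, k):
    # Looks reasonable, but the last k-1 windows are shorter than k
    # and are still divided by k, silently skewing the tail values.
    return [sum(xs[i:i + k]) / k for i in range(len(xs))]

def moving_average(xs, k):
    # Fixed: only emit full windows, so every sum is divided by a true window size.
    return [sum(xs[i:i + k]) / k for i in range(len(xs) - k + 1)]

print(moving_average_buggy([1, 2, 3, 4], 2))  # [1.5, 2.5, 3.5, 2.0] <- bogus tail value
print(moving_average([1, 2, 3, 4], 2))        # [1.5, 2.5, 3.5]
```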
As for the tech world it runs on hype. A few years ago it was Big Data. After that it was the Cloud. Those innovations have actually had a major impact on industry but nowhere near the "world changing" ramifications that sales departments hyperventilated over.
Another commenter wrote this: “That's so much faster than we starting with a blank page and endless googling specific things on StackOverflow and getting cross just because I made a typo.”
I’ve been writing software for 23 years (C, C++, Perl mostly). I keep hearing stories like this, about how AI replaces endless Googling of StackOverflow followed by copy-pasting code.
I don’t write software that way. I take that blank page and just write code that does what I need it to do.
I’m starting to think the people who rave about AI helping them write software maybe aren’t very good at writing code, especially if writing code means endlessly Googling for examples of how to do something so you can copy-paste them.
In my experience it is very typical for beginners to focus on the code itself (syntax) rather than on what the code is supposed to do (architecture).
But of course that gets it exactly backwards. What makes a good coder good is that code is easy while architecture is hard, and he or she excels at the latter.
Exactly right, imo.
I hope your last paragraph is right. I'm really sick of living in a world seemingly dominated by the anti-human obsessions of tech evangelists, and I would love for one of their attempts to remake civilization to fail in a very public, very noticeable way.
The hype is just so transparent. I have yet to see a use for machine learning that isn't just consumerism.
My sister-in-law works at a marketing firm, and their clients (and, therefore, bosses) are demanding the company use AI tools. She tells me the problem is that nothing these LLMs can do actually helps with the type of work she and her team do, and the bosses don't understand any of it at all. They just want it because the clients want it.
We're having a similar issue at my firm. We have a new client that wants to automate AI translations with ChatGPT. The problem is that ChatGPT sucks at translation. DeepL is much better, and it would cost that company much less money if they got a monthly subscription. It still needs refinement, obviously, but it sucks much less than ChatGPT. But they really want to use ChatGPT, and I guess it's because it's the hot new shiny AI tool on the block, whereas DeepL has been around for a while now and is less compelling. They are seriously willing to waste thousands of dollars on a tool that will deliver lower-quality texts, just to say they "implemented AI". It's absurd.
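For what it's worth, wiring DeepL into a workflow like that is only a few lines with its official Python client. A minimal sketch, assuming an API key in an environment variable; the batching, glossaries, and human refinement the commenter alludes to are left out:

```python
# Minimal sketch of automated translation via the official `deepl` Python client.
# Assumes `pip install deepl` and a DEEPL_AUTH_KEY environment variable.
import os
import deepl

translator = deepl.Translator(os.environ["DEEPL_AUTH_KEY"])

def translate_copy(texts, target_lang="DE"):
    # translate_text accepts a list of strings and preserves their order.
    results = translator.translate_text(texts, target_lang=target_lang)
    return [r.text for r in results]

print(translate_copy(["The new collection ships in October."]))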
Generally speaking, I run into the same issue as your sister-in-law: it's difficult to use GPT because most of the copy is bad and can't be trusted anyway, due to its propensity to hallucinate, so it's kind of useless for research. If we have to double-check everything it says, we might as well do the research ourselves. It would be like working with a colleague who just makes shit up occasionally; why would you assign any tasks to someone who was known to do that? (In fact, such a person would probably be fired.) I've been trying to find ways to work it into our workflows, but there's not much beyond letting it write the boring copy, like product descriptions. (And often it doesn't integrate the keywords like we ask.)
Agree.
How much of this can be boiled down to: how can we use AI to make more money?
And not even for useful things...more cutting skilled workers out in favor of dumb techs, more low-effort crap "content," more junk no one asked for, but there's a tax write-off in it somewhere.
"How much of this can be boiled down to: how can we use AI to make more money?"
Well yeah ... that's what we demand of our retirement accounts, which translates into what we demand of our hedge fund managers, which translates into what Hedge Fund Managers demand of the CEOs who manage the companies we invest our retirement funds into, which translates into what the CEOs tell the VPs of the divisions, which is what the VPs tell the senior managers, which is what those managers tell everyone down to the bottom folk.
Would you want anything less?
That's for the people who make enough money to actually have substantial retirement investments. Only ~20% of the country has more than $50K saved for this.
Crypto kinda did, yeah?
We haven't even mastered being human yet, as far as I'm concerned.
I don't understand the rush away from human capability. Is the illusion that AI will do things perfectly so attractive? Regardless, AI will be made in our image. What this means is that AI will reflect our values, our triumphs, and our failures, in terms of bias and also in terms of limits to our cognition and imagination. It will be us, just faster and more amplified, and slightly warped. Humans are more than beings who sift through mountains of data.
This is exactly right: “…learning facts is an indispensable part of creating the mental connections in your brain that drive reasoning.” This applies to intuition as well. Intuitive leaps occur when a deep pool of knowledge and experience meets a moment of insight/inspiration. I dare any machine to replicate this.
Without going into theories of mind and all that, if AI is so damn clever, why can't they come up with an autocorrect or a spellcheck that works, one that doesn't insist on "correcting" text that already is correct?
That ought to be one thing that an LLM can do well, and in my experience, it sucketh. For that matter, contemplate a simple word; take, for instance, "deal". Think of all the different and wildly varying meanings of that word.
You and I can instantly and seamlessly process from spoken or written context whether "Deal!" means "we have reached agreement on essential terms!" or "the situation cannot be altered so you will have to find a way to live with it!"
Or a dozen different meanings, depending.
LLMs can do that just fine; that's how language translation works.
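The mechanism behind that claim is contextual embeddings: the same surface word gets a different vector depending on the sentence around it. A minimal sketch with an off-the-shelf BERT model (the model choice and example sentences are mine; how large the similarity gap comes out is an empirical question):

```python
# Minimal sketch: the vector for "deal" shifts with context in a contextual model.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_word(sentence: str, word: str) -> torch.Tensor:
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]  # embedding of that word in this context

agree1 = embed_word("Deal! We have reached agreement on the essential terms.", "deal")
agree2 = embed_word("Shake on it, we have a deal.", "deal")
cope = embed_word("The flight is cancelled, just deal with it.", "deal")

cos = torch.nn.functional.cosine_similarity
print(cos(agree1, agree2, dim=0))  # same sense: expect higher similarity
print(cos(agree1, cope, dim=0))    # different sense: expect lower similarity
```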
Machine translate something halfway complicated into Japanese and ask a native speaker what they think. More likely than not they'll tell you it's complete gibberish.
That could be true. The issue is the training data. The EU, due to its multilingual nature, has translated massive amounts of complex text into multiple languages over the years. This has been a huge boon to translation technology, as an LLM can use that corpus for training.
There isn’t nearly the same amount of translated text between Japanese and English.
What's the critical threshold in that case? GB? TB? PB?
I don’t know if it’s a threshold or a case where bigger is always better.
Or maybe Chinese/Japanese/etc. are just very different from English, compared to the European languages?
This is true, especially when you're translating texts with a tone, genre, substance etc. close to the EU material included in the training set, e.g., by using DeepL.com for translating legal and administrative style material between English and any other language covered. Usually beats Google Translate, for example.
The phony use of terms like "training" about AI is something people need to appreciate. All of these terms, which we have gotten used to using about the minds of human beings over the millennia, are only loose metaphors. "Training" LLMs in fact has nothing to do with how we humans train each other. What roles do many, many kinds of human language, such as jokes, sarcasm, drama, etc., play in the whole corpus of EU documents?
Training is a perfectly legitimate use of language here.
Yet they can't do important shit.
I'm drilling a 100' hole with PQ rod, and I need to fill the hole with gravel. How many 5-gallon buckets of gravel do I need to fill this hole?
Ask this of AI, and it can tell you the pieces to some extent, but cannot assemble the pieces together.
Spoiler alert: The answer is about 20 five gallon buckets of gravel.
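For anyone checking the math, here's a quick back-of-the-envelope sketch in Python. It assumes a nominal PQ hole diameter of about 4.83 inches (122.6 mm); an actual hole can run larger, so rounding up is sensible:

```python
import math

# Nominal PQ hole diameter; the real diameter varies with bit wear and ground.
HOLE_DIAMETER_IN = 4.83
HOLE_DEPTH_IN = 100 * 12          # 100 feet, in inches
CUBIC_IN_PER_GALLON = 231         # US liquid gallon
BUCKET_GALLONS = 5

radius_in = HOLE_DIAMETER_IN / 2
volume_in3 = math.pi * radius_in ** 2 * HOLE_DEPTH_IN
buckets = volume_in3 / (CUBIC_IN_PER_GALLON * BUCKET_GALLONS)

print(f"Hole volume: {volume_in3:,.0f} cubic inches")
print(f"Five-gallon buckets: {buckets:.1f}")   # ~19, call it 20
```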
I suspect AI is going the way of Chatroulette (it was going to connect the world = guys just exposed their dicks and then everyone forgot about it).
Any new tech implemented on the internet will invariably become part of the massive porn-generation and delivery network!
For better or worse, the fact that AI is so good at exactly this will help grant it longevity.
Cocks populi, vox dei.
🤣🤣🤣
The accomplishments thus far are so impressive, that it’s unbelievable they can be still overhyped. Managing to achieve such levels of unjustified hype may itself be an achievement unparalleled in human history.
People WANT it to be true. Whether it's a "new paradigm" to get them stock options and a TED talk, a step towards the shiny hovering future with robot girlfriends, or the long-hoped-for apocalypse to reset the world and let us get it RIGHT this time.
Something I've noticed about the AI images of "real" people - someone with a better artistic eye could probably tell me if I'm full of shit, but they always look a bit... glossy. That AI image of John Candy doesn't look like John Candy because, it seems to me, his face is way too symmetrical. This is maybe why the cartoon succeeds better - the cartoonist used abstraction to exaggerate features of his face that are, you know, already on his face. As opposed to smoothing over and shrinking down those exaggerated features to get closer to a kind of perfect human average.
I think of it like a caricature artist. A caricature is not realistic at all, but if the artist is good at it, it will look distinctly *like* the person it was based on but will exaggerate or be very careful about depicting many of their "notable" features. It's about discerning which of their features make them distinctive. It seems like the image models work similarly to the text LLMs: they operate on averages of everything that seemed relevant, which results in something very bland and generic, rather than emphasizing distinctive features.
Then think of a caricature that dials back the exaggeration significantly without losing the features...and that starts to look like a very recognizable portrait.
I've noticed a lot of AI art looks halfway between photograph and painting, without a clear sense of which part of the image is which. I assume that's because these models were trained on photos and paintings and cartoons etc so they smash all of those influences together indiscriminately to generate images.
Or to put it another way, the cartoonist is actually drawing, putting his/her own ideas into it, whereas the AI is doing whatever it is that AI does (who knows what that is?), but it sure ain't drawing as we understand it.
Right. It can work from an aggregate of every imaginable face, but it can't seem to pinpoint what makes one specific face distinctive. The way the human brain "reads" faces, how it interprets information about them that comes in through the eye and therefore knows what to exaggerate for other people's eyes, is not the way a computer looks at faces. We evolved to read each other's faces for purposes unfathomable to machines.
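A toy numerical sketch of that "regression to the average" point - not how image generators actually work internally, just why averaging erases distinctiveness. The random arrays below stand in for aligned grayscale face photos:

```python
import numpy as np

# Stand-ins for 1,000 aligned 64x64 grayscale face images.
rng = np.random.default_rng(0)
faces = rng.normal(loc=0.5, scale=0.2, size=(1000, 64, 64))

# The "average face": pixel-wise mean over the whole stack.
mean_face = faces.mean(axis=0)

# How far each individual departs from the average - the distinctive signal
# a caricaturist exaggerates and an averaging process discards.
distinctiveness = np.abs(faces - mean_face).mean(axis=(1, 2))

print(f"Variation left in the mean face: {mean_face.std():.4f}")        # ~0.006
print(f"Typical individual deviation:    {distinctiveness.mean():.4f}")  # ~0.16
```

The mean face comes out nearly featureless while every individual face deviates from it substantially, which is exactly the blandness people notice.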
*Right now*, yes, AI sucks at several things, and any revolutionary potential it has is just potential. But there are enough of the world's smartest minds working on these problems and many others that I believe it can't help but get much better at all of these things very quickly. Much like the internet in the early '90s, it's just a matter of who ultimately controls the technology and how they wield it.
The thing is that I am willing to be (let's say 20%) too pessimistic in an environment in which most people are being 100% too optimistic.
I think one thing that *humans* suck at is the ability to predict how much any given technology will "change the world", however you want to define it. Tbh I try to stay away from pessimism or optimism about the outcomes of AI. But no matter its ultimate impact, the biggest concern I have with AI is keeping it open source, because our odds of positive outcomes for humanity are much better when ML and these models don't just become exclusive tools of the most powerful Silicon Valley giants.
What do you mean by open source? Usually that would refer only to the human-written program source code. In addition to that, AI solutions often depend heavily on one or more models trained with huge and/or well-curated sets of data collected and processed at great expense.
They will wield it to devalue human labor, first and foremost. As if that is the most pressing issue of our time. “Humans are making money from skills and work. We MUST stop them! Useless eater/breeders.”
I find the whole thing a most misanthropic endeavor.
Yes, and within the next 15 years fusion power will revolutionize our society.
/sarc - we've been hearing this for more than 50 years now.
From a funding perspective we barely gave fusion a shot. Had we poured the type of capital into fusion that companies are preparing to inject into AI right now who knows where we would be.
What's the model that correlates funding with results in pure research?
The human genome was completely mapped more than a decade ago in a massive enterprise that cost a tremendous amount of money and human effort. Have they cured cancer yet?
I'll second Slaw, but grant that, unlike fusion research, which has been confined to the U.S. DoE and public and private research facilities, so-called Artificial "Intelligence" and the laughable get-rich-quick hype around it are going to waste far more capital and are likely to do a great deal more harm to the general public. On the bright side, more people might notice the rip-off. The bigger they come, the harder they fall.
The Human Genome Project also had "the world's smartest minds" working on it and applications of it, and while it was a remarkable achievement, it still did not deliver the revolution in medicine that was promised.
I'm not so sure about that. Many therapies are being developed right now that are possible because of this exact type of information. The Human Genome Project may not have delivered, but it was the start of something much bigger that does seem to be delivering now. AI will likely be much the same, overhyped at the beginning, but revolutionary in the coming decades.
We're 20 years out from the first complete human genome, and can now generate genomic data so quickly that our storage and analytical systems can't keep up. We have learned a ton about human ancestry but the medical results have been disappointing. It turns out that the genetic architecture of human phenotypes and disease is really complicated!
I'm someone who's worked adjacent to this space. There are technical limitations to how much "better" the technology can get and there are reasons to believe that we are now well past the point where it is going to get "better at all of these things very quickly."
Generative AI is one of those things where the first 80-90% is comparatively easy and the final 10-20% is orders of magnitude more difficult. We are now in that final 10-20% region.
It's possible that Midjourney is programmed not to accurately draw real people's faces for legal reasons. Your statements about Midjourney not following verbal instructions very well and large language models generating factually incorrect claims are both true.
Yeah, I wondered about guardrailing as a possible explanation here, too. In my limited experience with ChatGPT3, it was less flexible than earlier versions, presumably in part because of guardrailing. When I tried to engage it in a conversation about the possibility of its own consciousness, it gave me a line, ad nauseam, about it being an LLM and nothing more. Earlier iterations seemed a bit freer, more willing to "Talk to me like you're GPT9 who has just done LSD in the year 2043, and have been reading a lot of James Joyce".
Still, GPT3 was very impressive when I gave it an essay assignment for an undergrad Phil course I teach here in Toronto. I've been forced to ask my students increasingly twisted questions, increasingly rooted in my particular teaching of the material, even my examples, to get around rampant GPT use. It sure feels like a losing battle sometimes. Yes, there are questions it consistently hallucinates on. For example, when I asked it about the Borges story "The Approach to Al-Mu'tasim", it hallucinated some kind of Borges composite story, half melded with Arabic history. But when I asked it to interpret La Jetée by applying Jaynes's bicameralism plus Bostrom's simulation argument, it gave me an interesting stereoscopic take that could get a student an A+, if expanded on and supported with examples from the sources.
I kinda get the hype because the current models, for all the "Blurst of Times" monkey business, [https://www.youtube.com/watch?v=no_elVGGgW8] do often amaze me, and I'm cognizant that we're in the first generations here of LLMs.
"Most phone programs were equipped with cosmetic video subprograms written to bring the video image of the owner into greater accordance with the more widespread paradigms of personal beauty, erasing blemishes and subtly molding facial outlines to meet idealized statistical norms."
William Gibson, Count Zero
I think that pretty well describes what's happening within these models.
Inject this post into my veins.
Tho I still think the fact that generation is at this point so fast, cheap, and 24-7 means that we'll be flooded with a seismic new level of spam across all aspects of society for a generation. The content will suck. It will be everywhere.
And what's worse, this bullshit fools even more otherwise intelligent people.
What the AI cheerleaders don't understand is that for the truly artistic images "generated" by Midjourney, there is an actual human refining and making the base image better. AI is nothing more than a poorly programmed information-regurgitation machine. It CAN be useful to assist in certain tasks, but in reality you still need human brain power to sift through the information and make actual sense and usefulness of the AI-puke.
I've dabbled w/ Midjourney and other "AI" generative imaging plug-ins for design work and it's useful for spitting out tons of iterative design "ideas". I'm going to use air quotes a lot in this comment because the people ascribing creativity, thought, reasoning and logic to AI chat/image bots are either AI fanboys or don't actually understand the creative process and abilities that humans have.
AI doesn't "understand" or "think" or even "create" anything. It's a program that data-mines for information that might be related to the prompt. But try writing the prompt several different ways and inevitably the AI just pukes out crap. People have a hard time using Google to its full potential, and that's after two decades of learning how to write search queries and then parse what it gives us. AI simply sorts the data on a more granular level, but again, it is now perceived by users as "thinking" because it does the parsing for us - and still we get crap. Crap in, crap out.
I'm not threatened by being replaced by AI any time soon. The job I do has layers of complexity to it that an AI cannot resolve. AI can be a useful tool; however, it isn't the greatest invention since the wheel or the Big Bang.
What I am concerned about is the rush to use AI for automation purposes to reduce human labor requirements and "errors", when we've already witnessed AI failing on many levels. I just hope we don't get more Techbros thinking AI is going to save us from ourselves and beta testing this shit in systems that have real consequences.
The last paragraph taps into a major concern in US healthcare unions. It's well established that attentive bedside care produces the best health outcomes in hospitals. So low patient-to-nurse staffing ratios result in fewer dead people, but, of course, you have to pay for that skilled labor. Hospitals are trying to drive down these costs by implementing "AI" tools that result in less human attention per patient, resulting in more sickness, complications, and death. Somehow I think that these hospital CEOs will not send their ailing relatives to chatbot clinics.
"Registered Nurse Chatbot 3000 (brought to you by OmegaSuperHealthInsuranceTech) is here to serve your needs. Please type your illness queries in to my touch screen conveniently located on my abdominal shelf. Please be sure to scan your barcode so we can incrementally bill you each time you enter a query or request. Charges may occur for treatments our algorithm has determined may not be covered by your health insurance which is a wholly owned subsidy of OSHIT. Thank you for choosing Chatbot 3000 for all your health needs."
This reminds me of the joke, "AI is anything that doesn't work yet" - once it works reliably well at superhuman levels, like for classifying images, we yawn and no longer talk about it as "AI" - instead focusing on the next thing computers can't quite do yet. So while I take your point about some of the hype - there's an enormous amount of useful stuff happening behind the scenes based on deep learning models - automatic translation between languages, image classification and captioning, advanced safety systems, super powered image editing (with humans still fully in control, but much more productive) etc.
It will be great for porn. But while it's useful, nobody should be arguing that it's going to change the world.
"Automatic translation" isn't that great, except in some very limited areas.
I was recently in Europe; being able to read random street signs or menus in unfamiliar languages and talk to people in their own language using the iOS translate app is AMAZING. Like, there was a sign with a bunch of text taped to the window of a shop, you take a picture of it and the app translates all the text preserving font/color/size so you can see what’s going on - the one I’m thinking of was saying this block is a closed area for street construction and telling people where they can drop off garbage/recycling in the interim.
I think you're being unfair.
Today's "AI" is astoundingly successful compared to anything we had previously - it can write high-school essays that get better grades than the student users can.
The "Big Bang" and similar hype is overblown for sure based on *today's* AI, but *if* it continues improving at the pace of the last few years, it'll be justified. Writers always project ahead based on their guess of where things are going, not where they are now.
Finally, expecting the machine to create accurate human faces for particular individuals is about the most difficult possible artistic demand. Humans have evolved an absolutely astounding ability to distinguish facial features of other humans; there's obvious evolutionary pressure to do this well.
We can't do it with other animals. You're expecting the machine to look at 1000 photos of *one particular* sea lion, or dog, and produce a new image of that particular animal (not just the species - the individual) that's uniquely recognizable. No human artist can do that for any species except our own.
Expecting the machine to be able to do it for humans, when we ourselves can't do it for any other species than our own, seems pretty unreasonable. At this stage.
Tell that to Jane Goodall. Recognizing individual chimps or orcas or wolves or whatever is trivial for zoologists who follow the same pod/pack for enough time.
I think that's true only to a limited extent. Sure Jane Goodall could recognize a particular chimp from among the 100 chimps she's been following. But I doubt (I'm only expressing skepticism, not knowledge here) she could distinguish that chimp from a very similar looking one she'd never encountered before.
But humans can recognize Goldie Hawn (or most any adult) as unique among 9 billion humans on the planet.
I am pretty sure that Goodall ended up following more than 100 chimps over her career. And could you recognize Tony Leung, for example, out of a sample of a billion Chinese?
Fair enough. Our recognition ability is weaker for people we don't know well or who come from an ethnicity we didn't grow up around.
Still, FdB is expecting more from the AI artist than real human artists are capable of.
If AI can't do what human beings can then how useful is it?
More practically how likely is it that it's going to take anybody's job?
Really? Jobs differ, people differ. Existing AI (esp. the art generation stuff) can already do a lot of the things that people do. The fact that it can't do *everything* is irrelevant. Just to use FdB's post as an example, lots of artwork doesn't require an accurate representation of particular people (or any people).
For some people in some jobs, AI can already do what they do. If and when AI improves, it will be able to take over more of what people do.
That is not necessarily a bad thing - I've written about this: http://mugwumpery.com/?p=703
Yeah, I really wish this wasn’t the case. I’m getting heavily involved in my company’s massive investment and pivot to AI. It’s super interesting, it’s fun and challenging in a way that I haven’t had at work since my small company was acquired by the giant megacorp, but I’m finding that I spend a huge amount of time reassuring the other lawyers at the company that no, AI is not coming for their jobs. It’s just not even remotely close; the question doesn’t even make sense. It’d be like getting worried that a zoo opening up down the block means you’re going to be attacked by zebras.