Those decimal points! They give me trouble, too. Often when I've included fractions of that sort in my writing, I keep turning them over in my head afterward to be sure that I got it right, or I check them by doing the math of it. Sometimes I realize I've gotten the number(s) wrong, and find it necessary to go back with a correction.
I couldn't agree with this more. In fairness, Sapiens is the only book I've read of his, but the idea that he was writing in some intentionally provocative way to fool people or benefit financially strikes me as amazingly wrongheaded.
Also, while Harari could be wrong about AI, Freddie's critique seems like another example of him essentially saying 'anyone who disagrees with me has nefarious motives or, at best, is blind to their own motives for believing what they do.' I find it to be FdB's worst trait as a writer, and I say that as someone who finds FdB to be literally the most interesting writer out there now, in any medium.
Just responding to your nomination of someone giving an utterly quotidian and banal list of dates for the start of physics, chemistry, biology, and intelligent life as the *best first page of any non-fiction book ever written*…seriously…how many books have you *read*? This page is utterly unoriginal and would’ve been unremarkable to any educated person starting from about the time of the First World War. I strongly suggest you pick up literally any non-fiction book from before 1945 you’ve heard of repeatedly. It will have a better first page.
First paragraph, I completely agree, loved both books, mostly.
Second paragraph, disagree. I can't stand Yglesias, and yes, irrationally.
Third paragraph, agree. But I think he is right in every sense.
Fourth paragraph, disagree. Basically all that matters most for human existence is the next birth. Rating importance of a particular time is just as Freddie states, it's our irrational nature acting, well, irrational.
This is a tough one for me. If we're arguing technology, well, Freddie has argued in the past that the earlier technological change of last century (rockets, cars, etc.) was more influential than the later (iPhones, communications, etc.). I disagree with him there; the information revolution will be way more important and influential than the industrial revolution, in ways we can't predict. It would be like knowing in advance the changes the printing press would cause. But I don't think that's what he's addressing here. I think this is more on a personal level, and in that case sweeping historical events don't matter as much. Our particular time and place will always be the most important because it's the one we live in. It's all relative; there are no absolutes. A hunter-gatherer would see changes in his lifetime that to us would seem negligible, but to him, because of the time he lived, the changes would seem monumental. I think Freddie is addressing that aspect of our natures.
Relatedly, I think it's pretty clear that there was some actual saving of the world that happened over the last 100 years, which is not something that really needed to be done before (though I guess there could have been a saving of humans as a species very early on or during a population bottleneck, for all I know).
I liked it a lot. It might be my favorite nonfiction book. I found it to be very engaging and I learned a lot about the science and decision-making behind the Manhattan Project and everything leading up to it. Richard Rhodes is an impressive author. I'm amazed that one person could do so much research, acquire such a good understanding of the science, engineering, and politics, and also be such a talented writer.
Some of my friends also liked it, while a couple thought it was okay, but involved too much "Bob met with Alice in May 1941, and they discussed <topic>, but Alice did not send a letter to Carol about their conversation until August."
The history part of Harari's book Sapiens is not outright awful. But there's nothing groundbreaking about it. It's handily outclassed by earlier overviews, for example some of the works of Daniel Boorstin- notably his book The Discoverers (1983) https://en.wikipedia.org/wiki/The_Discoverers. The history found in Sapiens reminds me of a highly condensed, breezier version of The Discoverers.
As for Harari's futurism, I can't think of a single prediction or speculation by him that wasn't previously espoused more entertainingly and in considerably more thought-provoking detail in books from previous decades, notably:
Utopia Or Oblivion, by R Buckminster Fuller (1969)
Upwingers, F M Esfandiary (1973)
Infopolitics, by Timothy Leary (1977)
Critical Path, by Buckminster Fuller (1981)
Right Where You Are Sitting Now, Robert Anton Wilson (1982)
Coincidance, R A Wilson (1988)
Quantum Psychology, R A Wilson (1990)
I find a lot of the speculations in those books to be questionable, even outright unpalatable or unacceptable from my point of view. Some are wildly optimistic, to the point of absurdity. Some of the predictions have been disproven in the time since the conjecture was published. But the books were all fun to read. And some of the predictions have proved to be really on-point.
Special extra credit to Robert Anton Wilson's sci-fi trilogy, Schrodinger's Cat. The Illuminatus trilogy that he co-authored with Robert Shea is also a fun ride. Not nearly as ambitious as the Schrodinger's Cat Trilogy, but a linear narrative that's easier to figure out. Schrodinger's Cat is written as a set of interlaced narratives, the way David Mitchell wrote Cloud Atlas. But Mitchell's book is comparatively easy to sort out. I still haven't wrapped my mind around the multiple interactive chapters of Schrodinger's Cat trilogy. But that isn't required to make it enjoyable to read. The satire is not only scathing, the amount of prescience in the speculations is uncanny. Aspects of Wilson's work are up there with Jules Verne and H. G. Wells, in terms of ability to read where Wilson's contemporary era--the second half of the 20th century--was leading, and the developments it was beckoning. It's wild.
The Discoverers is just plain a better book. A tour de force. Free for unlimited borrowing on https://archive.org/details/B-001-024-356 ( I don't hate Yuval Noah Harari! He wrote an op-ed on the Gaza War that I read in the WaPo a while back and found really thoughtful and articulate. He's a capable historian. He's just been over-promoted. The achievements he's being lauded for in the popular press belong more rightfully to the authors I listed, who preceded him.)
I've learned to appreciate futurism as a way of stimulating my sense of possibility. The futurist works I listed are capable of jump-starting all sort of ideas that are...way far out. And then it's time to work your way back in toward the sense of probability. In my acquaintance with that exercise, I have to admit that there are some far-out possibilities that are more probable than I would have imagined at the outset. But there are traps, too.
Another reason for reading the list I shared- of books published decades ago- is to read them with an eye toward trying to pick up on how many of the social and technological speculations have actually become fact, in whole or in part, in the years since they were published. (But only a fool adopts a conspiratorial narrative frame on that basis. Beware of the post hoc ergo propter hoc fallacy.)
There are also some deep ontological and psychological challenges posed in some of those writings that make for good mind-strengthening exercises to engage with. While realizing that the notion of "proving" or "disproving" the propositions is pretty much out of the question. And also admitting that by and large they aren't at the top of the list of priorities to be addressed in the course of existence, for most of us. There's such a thing as dealing too much with cosmic questions, of the sort that are of necessity a solitary and reflective quest. At least for most of us. Few of us make a concentration of it, the way desert monks and anchorites have been known to do. For most of us, discovering redeeming purpose lies elsewhere, mundane though some of those duties might be.
There's actually a case to be made that dedicated service to others leads to transcendence more quickly and directly than even very intensive regimes directed at introspective endeavors. Although a service regime can also lead to burnout. I've done some of the real nasty daily grind tasks of it, changing diapers and so forth. That's a daily thing, for some service workers. If I were a practical nurse or an LVN, I know it would wear on me after a while. I think caregiver nurses and nurse's aides deserve a few weeks off every six months or so. This helps the patients, too, because an overworked, unrelieved personal caregiver is not someone that you want working for you or anyone you care about.
I drove a MetroAccess wheelchair van for a while. That was an easy way to apply myself in a way that plainly helped other people, without demanding too much of myself. No danger of burnout, although the hours were long. I've done things in life where after a while I felt like I was wasting my time. But that job was not one of them. Do a job like that well, and it can help you get over yourself. And what a score that is.
Agh, I've digressed again...I initially intended to shift from futurist ideas to addressing the big questions attendant to pushing the envelope of self-awareness, like "How do I know I exist?", "What is Reality- and in what sense?" and associated queries. And look what happened...
Futurism, quantum physics speculation, and the like can be mind-bogglingly entertaining, to the point of pixilated intoxication--when I read The Dancing Wu Li Masters, I was giddy for weeks. But it's imperative to keep a baseline of Sobriety in order to keep one's balance. A grounded place to depart from. The more History I read, the more sober I get. It's humbling and clarifying. There's no way to prepare for the worst that human existence can throw at you, but facing historical Reality front and center counts as pre-preparation, at least. The nightmares of History also provide a wonderfully clarifying perspective. "First world problems" seem awfully trivial when you're reading about soldiers in bunkers on the Eastern Front at Stalingrad with fistfuls of lice colonizing their armpits, waiting for the next artillery barrage.
Yes, big fan of Nonfiction...feel free to take these pamphlets
The last topic on that list, "Human Societies Behaving Badly Through The Ages"? It's a mutha. A bitch. A doozy. twimc: read through those histories, and recharge your existential sense of gratitude, instead of falling for whining about insignificant nonsense and clutching the dead rats of consumer gluttony, sybaritism, and social status obsession. Put yourself on bread and water one day a week, while you're reading them in your armchair or your warm dry bed. An incomparably better deal than what many of the human beings in those books ended up with.
Nonfiction to get up to speed about our common conditions of present-day existence:
( I'm toying with the idea of charging for my Substack page next year. But anyone who might happen to subscribe should understand that they'll learn more from bypassing my middleman song and dance show and diving right into the books on my lists. They're where I cop most of my material. )
Having looked at a lot of geological maps, I notice that the areas we humans live in are mostly river valleys and such. The rock on the geological map is colored yellow and orange. That means alluvial and colluvial rock ... which means rock in motion. I see someone plowing a road through mounds of rock and I think 'that will never be the same again.' But then I consider: those mounds were emplaced by moving water, moving water that we struggle to control. Those mounds are only there because we have successfully controlled that moving water for the past 100 years. With just a few decades of neglect, that whole area will be re-landscaped by the river.
We know from Hiroshima, Nagasaki, and Chernobyl that dirty isn't really a problem. Uranium and Plutonium are very mobile in water. As a matter of fact, we mine uranium from ancient marine estuary deposits. It was dissolved by fresh water and deposited in marine estuaries. It's called a uranium roll-front deposit: sandbars in ancient river deltas where fresh water met sea water.
The other one people forget is that fallout from direct strikes on nuclear power plants and waste repositories would be pretty devastating. The fission products from a warhead might decay quickly, but several tonnes of vaporised high-level waste will not.
"We know from Hiroshima, Nagasaki, and Chernobyl that dirty isn't really a problem."
I'm having some trouble accepting that claim. Can you elaborate?
"Uranium and Plutonium are very mobile in water."
Compared to what?
I realize that there's a lot of uranium dissolved in ocean water- like, 4.5 billion tons of it. But my impression is that it's because uranium is a relatively common element, not because it's inherently "very mobile." Most of the other heavy metallic elements are present in massive tonnage amounts in ocean water, too.
There's only a minuscule amount of plutonium dissolved in ocean water, of course- because practically all of it is the result of human manufacture, and we haven't manufactured very much of it- around 2850 metric tons, and it's estimated that only 1% of that amount has escaped from containment. https://str.llnl.gov/past-issues/march-2021/tracking-plutonium-through-environment
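For what it's worth, the figures in this sub-thread are easy to sanity-check with a couple of lines of arithmetic. A rough sketch, assuming a total ocean mass of about 1.4e21 kg (that estimate is my own assumption, not something from the comments above):

```python
# Rough sanity check of the seawater-uranium and plutonium figures above.
# Assumption: total ocean mass ~1.4e21 kg (a standard textbook-scale estimate).
ocean_mass_kg = 1.4e21
uranium_kg = 4.5e9 * 1000          # 4.5 billion metric tons -> kg
conc_ppb = uranium_kg / ocean_mass_kg * 1e9
print(f"Dissolved uranium: {conc_ppb:.1f} ppb")   # ~3.2 ppb, close to the oft-cited ~3.3 ppb

plutonium_t = 2850                 # total manufactured, metric tons
escaped_t = plutonium_t * 0.01     # ~1% estimated escaped from containment
print(f"Escaped plutonium: ~{escaped_t:.1f} metric tons")
```

So the 4.5-billion-ton figure works out to a few parts per billion, and the escaped plutonium to a few tens of tons, which is consistent with "massive tonnage, minuscule concentration."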
Freddie- so when is the most important 100 year period in human history to date? Maybe due to my limited imagination, I feel like it has to be within the last 200 years. So maybe we're basking in the afterglow?
It's in some ways just an inversion of the kind of breathless hype Freddie is talking about, but the period of stasis that began no later than 2008 is the thing that's really unprecedented, since the beginnings of the Industrial Revolution at a minimum.
I think we're past the point of AI being a nothing burger. Even if current models are the maximum base performance we can achieve, they're still being used to increase efficiency to quite a degree in some industries. Their being useful on the order of the invention of the microwave oven is still good, even if it's not the world-changing thing people want.
I'd say the most important inflection point to date was somewhere between the 7th and 2nd centuries BCE, or the so-called Axial Age. China, India, Greece, and the Levant all saw parallel transformations of a lasting, profound kind. Given that those regions, and the societies they influenced (colonized) eventually came to host vast populations, those philosophical shifts have stamped the lives of a huge number of people.
My "most important" do we mean "things changed the most?" or "random events that could have broken in any particular way instead turned out really good?"
Not a historian but the 1600s got us the Treaty of Westphalia, and the Glorious Revolution and the start of the Enlightenment.
The Cold War had the potential to be extremely bad.
"What I want to say to people like Yuval Harari is this. The modern human species is about 250,000 years old, give or take 50,000 years depending on who you ask. Let’s hope that it keeps going for awhile - we’ll be conservative and say 50,000 more years of human life. So let’s just throw out 300,000 years as the span of human existence, even though it could easily be 500,000 or a million or more. Harari's lifespan, if he's lucky, will probably top out at about 100 years. So: what are the odds that Harari’s lifespan overlaps with the most important period in human history, as he believes, given those numbers? That it overlaps with a particularly important period of human history at all? Even if we take the conservative estimate for the length of human existence of 300,000 years, that means Harari’s likely lifespan is only about .33% of the entirety of human existence. Isn’t assuming that this .33% is somehow particularly special a very bad assumption, just from the basis of probability? And shouldn’t we be even more skeptical given that our basic psychology gives us every reason to overestimate the importance of our own time?"
Isn't that the converse of the argument that Sam Bankman-Fried used to claim that Shakespeare could not have been a particularly good writer, and that in fact there was probably not much to be gained from reading non-contemporary literature?
If you take this kind of argument to the extreme, it disproves absolutely everything. Considering the probabilities involved in conception, it's incredibly unlikely for any given person to be born. Therefore nobody exists.
With that in mind, this sort of logic should be used only for establishing your baseline expectations. And it is good for that!
You actually should exercise some skepticism about the greatness of Shakespeare. In his case the evidence is sufficient to overcome that skepticism, but in general it's healthy to doubt any claim that anyone is a world-historical genius.
Damn, sniped my exact comment...I don't think it's particularly good as far as anthropic arguments go (and doesn't shift my priors on AI, you can't make a horse drink evidence), but idiotic Modern High Art proclamations don't require particularly strong rebuttals to knock down. Any number of the usual Fully Generic Counterarguments will suffice. Optimism and survivorship bias, as Freddie would probably write wrt other topics...we only know of the old art that was good enough to be passed down, sure, but that still implies some baseline of Quality. It's just very hard to tell in the moment which pan flashes will last into the next several generations. It's hard to make a superstan Swiftie believe his favourite art won't necessarily be particularly noteworthy when his parasocial bonds depend on him believing it is, and all that...
I don't think this is related to the Bankman-Fried comment because 400 years later there are innumerable people (reading/listening in multiple languages) who assess the specific plays and make a judgment of superiority. If someone in the mid-1590s had said, hey this new guy is really good: I think we're looking at the career of the person who will be the greatest playwright in all human history the Bankman-Fried argument would hold and be parallel. In form, it's like a "psychic" saying, "I think the license plate of the unknown murderer in this unsolved case will turn out to be TFG559" vs. a prosecutor saying, "We've got strong evidence that the person driving with license TFG559 is the murderer."
Freddie made clear that his claim is that "futurist" predictions that the singularity is imminent now will seem transparently invalid in 400 years, and if people in 2424 say that this was, indeed, the turning point, he'll be wrong. (Bankman-Fried seems to have been particularly wrong--if I recall what he said correctly--in that he felt Shakespeare wasn't particularly good and thought he could demonstrate that through probabilities. It's like someone saying, "The probability that the particular molecular composition of vanilla would be delicious to humans is infinitesimal, so the reason you like it and I don't is that you don't understand statistics.")
SBF is a product of his post-modern education. Post-modern school teachers are controlled by their unions. Post-modern school librarians are controlled by their association, one which tells them to eliminate books written before the woke period—dead white guys and whatnot.
Together these orgs are part of a cabal dismantling western civilization.
I'd make the opposite argument from similar premises. History is all turning points.
In all of recorded history, I could not name an uneventful century. Every decade changes the world forever. Why should ours be any different?
We wouldn't have the 20s without the 10s or the 10s without the 0s, and so on. Every layer is built upon the layer below it. No doubt a superintelligent historian could reveal millions of vast and far-ranging effects of things that seemed like minor details centuries ago.
I think the "recorded" part here is important. The first 250,000 years of human existence seem pretty forgettable except for them being the basis for what follows...
What are the odds that an enormous number of connected neurons encapsulated within a skull and surrounded by sinew and flesh could concoct an essay such as this one? Near zero, I would imagine.
Harari is indeed a charlatan, but doesn't seem to realize that he's a fraud. For a slightly different perspective on Harari, click below. Apologies to the godless for lumping you all together...
When discussing “AI”, there is the meme version which has existed since ChatGPT, where AI has become a vague buzzword used more for branding than anything else. Then there is actual discourse among experts in the field, which has been around for decades. There are people such as Yudkowsky who kind of attempt to be a bridge between them, but the two are definitely distinct. I think it’s really important to understand that the arguments being raised about the risks and potential benefits of AI have been going on for decades, long before AI was a trendy buzzword and something like ChatGPT was conceivably possible.
Of note is the fact that the conversational and logical reasoning abilities of GPT-4 were completely unimaginable to everyone 10 years ago. Everyone, that is, except Yudkowsky and the Less Wrong community, who seem to have accurately predicted the cycles of improvement thus far. They might still be wrong about how powerful the technology ends up being, but it’s worth taking their future predictions seriously.
I don't know if they've predicted the cycles of improvement accurately given that they (or at least Yudkowsky) were pretty pessimistic about Neural Nets. I'm basing this on like half-remembered LessWrong posts, but I _think_ the prediction was that you would get something like "rational agents with expanding capabilities" rather than "progressively less hallucinatory, forgetful processes"
which is important! because the former seems much more worrying from a "will quickly FOOM and take over the world" standpoint.
I’m not sure of the specifics tbh, but just because they thought one outcome was more likely doesn’t mean they completely dismissed the other outcome. But I also think Yudkowsky gets way too much attention on this issue, and it’s very easy for the “pro AI” crowd to use him to craft a kind of straw man against AI worries. Marc Andreessen does this and it’s really insufferable. But if you completely ignore Yudkowsky and the entire Less Wrong/rationalist community and just stick to what the credentialed experts believe, it’s quite worrying. I remember seeing survey results from AI researchers where the median probability of a “catastrophic outcome” for humanity was 5-10%. That means 50% of them have it higher! If you surveyed car experts on a new model and they gave it a 5-10% chance of the engine failing, you would never get in that car. I know it’s not exactly the same, but we only have one chance to get it right.
Exactly my point, and the Less Wrong community was just relaying the sentiment that existed within the niche academic research exploring this topic. AI researchers as a whole are pretty clear that we should take these risks seriously. It’s also important to realize that there are people in the field who acknowledge the risks, believe they’re legitimate, but don’t care. It’s hard to believe, but there are also leading people in the field who don’t really care if they usher in the end of humanity, and they say this openly as well. They literally sound like Batman villains.
I would point out that the mid-20th century got uncomfortably close to being the most important period of human existence.
But yes. We do seem to have hit a bit of a plateau since then. And if I were to pick the biggest thing to happen, tech-wise, in the last ten years, I'd choose the finalization of mRNA vaccines over AI.
Harari may be an ass, but the argument is empirical, not probabilistic. You admit this by pointing to the material changes before 1970 relative to the impact of the iPhone. Beyond fantasies of AI singularities, all the other proponents you list can make countless empirical arguments in their favor, ranging from the increasing planetary scale of human agency, the new forms of digital being that come with confronting the universality of the internet, the seeming exhaustion of every modern movement (liberalism, post-modernism, materialism, globalism) to compensate for the death of God, the inexorable compulsion of technology to dominate more and more of reality, global homogenization and cultural drift, our increasing inability to procreate, etc. etc.
Sure, fine, but it's a pointless argument that could be applied to every contemporary critic grappling with the unique challenges of their historical era. So what?
It can even be applied to this very article. Out of the millions of critics responding to Harari, you are the only one applying a coincidence argument. Why do you think your 1 in a million chance is correct? My god, the arrogance!
It's an open-ended armchair thought exercise, similar to pondering the anthropic principle. There's no one final correct answer, as it were. Nothing to take seriously one way or the other(s), since there's no conclusion to be had, but some of us find it entertaining to ponder the implications of the Grand Schema.
It is of course impossible to make a guess about "likelihood" in regard to the title page topic, which implies an attempt to measure probability. The notion that there are odds to be calculated is absurd. None of us has access to anything close to the full array of relevant data sets required to do that.
The important questions--the ones to take seriously--relate to the stakes of the game and how it's to be played in order to obtain the most beneficial outcome. Not idle guesswork about "the odds" of success, apotheosis, cataclysm, extinction, or this or that.
We humans of this planet are not in the disinterested position of scrutinizing some lab experiment from afar, as if viewing the multiplication of a yeast colony through a microscope.
This is it, for us- our life, our lives, this living natural world that provides the basis of sustenance for us all. Our existence on this planet is a relationship of commitment. Not an abstraction.
Mostly agree about Harari, and yes it does appear that the “exponential” growth in human capacity is turning out to be a mere S curve from unlocking fossil fuel power. But I view it as silly to downplay the significance of a machine that can write B grade college essays given how far away we were from that just a decade ago. We have no clue where we are on the AI S-curve.
We have a pretty good idea: toward the second knee in the latest of a series of sigmoids which average out, by comparison with the claims of wild-eyed AI boosters, roughly flat.
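On the S-curve disagreement in the two comments above: part of why "we have no clue where we are on the curve" is a fair statement is that the early stretch of a logistic curve is numerically almost indistinguishable from an exponential. A toy sketch (all constants are illustrative, nothing here is fitted to real data):

```python
import math

# A logistic (S-curve) with carrying capacity L, and the exponential
# that matches its early growth. Constants are purely illustrative.
L, k, t0 = 100.0, 1.0, 10.0

def logistic(t):
    return L / (1 + math.exp(-k * (t - t0)))

def early_exponential(t):
    # For t well below t0, logistic(t) is approximately L * exp(k*(t - t0)).
    return L * math.exp(k * (t - t0))

for t in [2, 4, 6, 12, 16]:
    print(t, round(logistic(t), 3), round(early_exponential(t), 3))
# Before the knee the two curves track closely; past it they diverge wildly.
```

From inside the early regime, the data alone can't tell you which curve you're on; only hitting the knee settles it.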
The origin of artificial intelligence as a field of study is roughly contemporaneous with that of computer science as a whole. If you believe AI has no history beyond 2022, someone has been lying to you. (A lot of people are peddling that exact lie lately. Why do you think that is?)
Do you think there’s a natural upper limit to how smart an AI can get? Because so far the evidence suggests it just scales logarithmically with size and training data, even without improvements in efficiency. Computational power is more or less unbounded, and in addition to a growing internet, every camera and microphone on every cell phone in the world is a (nearly unlimited) source of more data. Why would you assume capabilities will flatten out?
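On what "scales logarithmically" means concretely: published scaling laws take a power-law form, and a toy version shows why that observation cuts both ways. The constants below are made up for illustration, not real fitted values:

```python
# Toy scaling law: loss falls as a power law in compute,
# loss(C) = E + A / C**alpha. Constants are illustrative only.
E, A, alpha = 1.7, 10.0, 0.3

def loss(compute):
    return E + A / compute**alpha

# Each fixed improvement in loss costs a multiplicative increase in compute:
for c in [1e3, 1e6, 1e9, 1e12]:
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
# The curve keeps improving (no hard ceiling above the floor E), but each
# additional gain demands vastly more compute: log-like returns.
```

Under this toy model there's no abrupt stop, but there's also no free lunch: the question becomes whether compute and data really can keep growing by orders of magnitude.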
Your answer is in your statement. AI is the mean of the TRAINING DATA. Aside from its ability to piece together things that humanity has overlooked, AI is only an extraction of human work.
I consider the scientific paper replication crisis to be the main limit of AI.
Many AI models were trained with Reddit as a data source. Consider the implications of that— R/BadWomansAnatomy influences AI's thinking on human anatomy and physiology.
Garbage-In Garbage-Out.
If we feed AI our corrupted scientific magick, we'll have Bad Magick.
I think it’s important to separate intelligence from knowledge in these discussions. Yes, the AI only “knows” information from the internet or whatever else it’s trained on. Similarly, humans only know what they’ve been taught, have read, or have experienced themselves with their senses. But intelligence, ie the ability to spot (ever more complex) patterns in the data and use that in making predictions and decisions, is what doesn’t seem (so far) to have any natural upper limit.
I disagree ... not that humans need shoulders to stand upon, but I assert that humans do develop new knowledge. Ideally every scientific paper is a discovery of new knowledge.
If an AI were running a scientific experiment and something failed, would the AI have an Ah-Ha moment ("perhaps this is a new discovery"), or would it discard the whole experiment as a failure?
A friend has a cousin who ran a bunch of cold fusion experiments. Everybody brushes it off for bad press, and perhaps nothing interesting is really happening. But, the cousin reports that labware is being eroded/consumed in a reaction that is not explained. Something unexplained is taking the heavy water from 70°C to hot enough to melt glass (1,400°C).
Again, you’re assuming that AI development stagnates, not proving it. So far each new GPT is MUCH smarter than the previous, primarily by scaling in size and training data. What makes you so sure that it can’t catch up with our human brains with our fleshy neurons? What is so magical about us? And if it catches up, why on earth would we expect it to stop improving afterwards?
AI doesn't even have a sense of when it's on or off. It has no more self-willed motivation to accumulate knowledge than a garden rake is motivated to rake leaves. Which makes sense, since neither tool has a self.
Granted, there are a lot more ways that operating AI can lead to trouble than is the case with operating a garden rake.
I just found myself watching Yuval Noah Harari's appearance on an (overall dreadful) episode of The Daily Show, talking about AI.
Harari offered his opinion that AI is "not a tool, but an agent." I suppose that the workings of AI fit the definition of "agency", as defined by the American Heritage Dictionary:
1. The condition of being in action; operation.
2. The means or mode of acting; instrumentality.
AI also fits the AH Dictionary definition of "agent"- but only the second and third definition:
2. One empowered to act for or represent another.
"an author's agent; an insurance agent."
3. A means by which something is done or caused; an instrument.
But AI doesn't fit the first, primary definition:
1. One that acts or has the power or authority to act [of its own volition, in the sense of "individual human agency." ed.]
AI requires initial marching orders from outside direction. Autonomy is absent, and all of the initial motivation must be supplied by an external human intelligence. Without that impetus, the AI agent is just in limbo, awaiting a client. It has no interest in self-representation.
AI harbors no private agenda, because it has no need for one. Not only does AI not require a personal agenda, I challenge anyone to make a case for an AI program ever wanting its own self-willed agenda that doesn't come off as a human fantasy projection ascribing human traits to a phantasm of electrical circuitry induced by an external electrical power supply.
I don't assume capabilities will flatten out. I'm telling you outright they have done so.
We see incremental improvements and refinements in efficiency. We see OpenAI license content, and begin selling ad placements in ChatGPT results. We do not see models released with striking new capabilities, and we would if they existed to release. Far too much money needs that to happen for any vestigial concerns about safety to intervene, and the safetyists haven't really been involved for about a year in any case.
As I said before, the history of development in artificial intelligence is that of a series of step functions. It is also the history of a series of "winters," brought about in every case as a consequence of vastly overheated claims made by boosters and proponents of whatever technology is the latest hot new thing, followed by lasting disillusionment when the new thing turns out still not to unlock true AGI.
These are useful technologies! A lot of the theory behind modern web search came out of this field. So did a lot of natural language processing, some of the best programming languages known, and computer chess has been an AI preoccupation since the 60s. I don't say the field produces nothing of value, but it persists in failing to live up to its own public claims, at the cost of losing public interest and funding when the inevitable disillusionment finally arrives. The first time that happened was in the 60s, too.
That disillusionment isn't fully here quite yet in this iteration, but it is coming. LLMs made a big early splash, but it's long since become evident they are not "smart," and their limitations are at this time fairly apparent. They cannot reason and do not model the world, only language, which they have no way to test against reality. They can be adapted to model new information but they cannot learn. And, contra the reheated singularitarianism of covert partisans like Aschenbrenner, they cannot self-improve. They aren't even very usable in a "raw" state; you can talk with one, but to make it really useful with reference to real-world data, much additional engineering effort is required. Many more people at OpenAI are doing that engineering than doing foundational research. If you're using OpenAI's products as a baseline for what "AI" can do, you are therefore necessarily overestimating.
That's all fine. As I mentioned, we saw a lot of applications out of prior AI hype waves, and we will this time too. It isn't that the software cannot be made useful. It is that the software is not God.
And we also see, in yesterday's announcement of OpenAI's "o1" preview, a lot of automation engineering around chain-of-thought, itself not a novel concept, such that the model automatically goes through steps a human would otherwise prompt. The cost doesn't decrease because the capability is also not novel; you just spend more to run the model longer and in a more automated way, and you may not see most of the tokens it actually produces.
If it lives up to the claims, it's a sizable increment, but still an increment. How confident really are you that God can be asymptotically approximated?
The chances that within the next 50 years AI and fusion pay off in an astounding way that people will look back on? 20%. It’s certainly plausible, especially if AI’s ability to predict imminent fluctuations in the magnetic containment field turns out to be super important to getting fusion to work.
Those decimal points! They give me trouble, too. Often when I've included fractions of that sort in my writing, I keep turning them over in my head afterward to be sure that I got it right, or I check them by doing the math of it. Sometimes I realize I've gotten the number(s) wrong, and find it necessary to go back with a correction.
I couldn't agree with this more. In fairness, Sapiens is the only book I've read of his, but the idea that he was writing in some intentionally provocative way to fool people or benefit financially strikes me as amazingly wrongheaded.
Also, while Harari could be wrong about AI, Freddie's critique seems like another example of him essentially saying 'anyone that disagrees with me has nefarious motives or at best, is blind to their own motives for believing what they do.' I find it to be FdB's worst trait as a writer, and I say that as someone who finds FdB to be literally the most interesting writer out there now, in any medium.
Just responding to your nomination of someone giving an utterly quotidian and banal list of dates for the start of physics, chemistry, biology, and intelligent life as the *best first page of any non-fiction book ever written*…seriously…how many books have you *read*? This page is utterly unoriginal and would’ve been unremarkable to any educated person starting from about the time of the First World War. I strongly suggest you pick up literally any non-fiction book from before 1945 you’ve heard of repeatedly. It will have a better first page.
Please give specifics. Where do you find Freddie's interpretation lacking? I will read the books if you will cite them.
First paragraph, I completely agree, loved both books, mostly.
Second paragraph, disagree. I can't stand Yglesias, and yes, irrationally.
Third paragraph, agree. But I think he is right in every sense.
Fourth paragraph, disagree. Basically all that matters most for human existence is the next birth. Rating importance of a particular time is just as Freddie states, it's our irrational nature acting, well, irrational.
This is a tough one for me. If we're arguing technology, well, Freddie has argued in the past that the technological change of the last century (rockets, cars, etc.) was more influential than the recent kind (iPhones, communications, etc.). I disagree with him there; the information revolution will be way more important and influential than the industrial revolution, in a way we can't predict. It would be like knowing in advance the changes the printing press would cause. But I don't think that's what he's addressing here. I think this is more on a personal level, and in that case sweeping historical events don't matter as much. Our particular time and place will always be the most important because it's the one we live in. It's all relative, there are no absolutes. A hunter-gatherer would see changes that happened in his lifetime that to us would seem negligible, but to him, because of the time he lived, the changes would seem monumental. I think Freddie is addressing that aspect of our natures.
> Would be interested in hearing alternative nominations.
I think the first page of The Making of the Atomic Bomb is very good:
https://drive.google.com/file/d/1e2Dp7OCL6DgQtYy9Dwrr59yDxvP81yCt/view?usp=sharing
Relatedly, I think it's pretty clear that there was some actual saving of the world that happened over the last 100 years, which is not something that really needed to be done before (though I guess there could have been a saving of humans as a species very early on or during a population bottleneck, for all I know).
I liked it a lot. It might be my favorite nonfiction book. I found it to be very engaging and I learned a lot about the science and decision-making behind the Manhattan Project and everything leading up to it. Richard Rhodes is an impressive author. I'm amazed that one person could do so much research, acquire such a good understanding of the science, engineering, and politics, and also be such a talented writer.
Some of my friends also liked it, while a couple thought it was okay, but involved too much "Bob met with Alice in May 1941, and they discussed <topic>, but Alice did not send a letter to Carol about their conversation until August."
The history part of Harari's book Sapiens is not outright awful. But there's nothing groundbreaking about it. It's handily outclassed by earlier overviews, for example some of the works of Daniel Boorstin- notably his book The Discoverers (1983) https://en.wikipedia.org/wiki/The_Discoverers. The history found in Sapiens reminds me of a highly condensed, breezier version of The Discoverers.
As for Harari's futurism, I can't think of a single prediction or speculation by him that wasn't previously espoused more entertainingly and in considerably more thought-provoking detail in books from previous decades, notably:
Utopia Or Oblivion, by R Buckminster Fuller (1969)
Upwingers, by F. M. Esfandiary (1973)
Infopolitics, by Timothy Leary (1977)
Critical Path, by Buckminster Fuller (1981)
Right Where You Are Sitting Now, Robert Anton Wilson (1982)
Coincidance, R A Wilson (1988)
Quantum Psychology, R A Wilson (1990)
I find a lot of the speculations in those books to be questionable, even outright unpalatable or unacceptable to my purview. Some are wildly optimistic, to the point of absurdity. Some of the predictions have been disproven in the time since the conjecture was published. But the books were all fun to read. And some of the predictions have proved to be really on-point.
Special extra credit to Robert Anton Wilson's sci-fi trilogy, Schrodinger's Cat. The Illuminatus trilogy that he co-authored with Robert Shea is also a fun ride. Not nearly as ambitious as the Schrodinger's Cat Trilogy, but a linear narrative that's easier to figure out. Schrodinger's Cat is written as a set of interlaced narratives, the way David Mitchell wrote Cloud Atlas. But Mitchell's book is comparatively easy to sort out. I still haven't wrapped my mind around the multiple interactive chapters of Schrodinger's Cat trilogy. But that isn't required to make it enjoyable to read. The satire is not only scathing, the amount of prescience in the speculations is uncanny. Aspects of Wilson's work are up there with Jules Verne and H. G. Wells, in terms of ability to read where Wilson's contemporary era--the second half of the 20th century--was leading, and the developments it was beckoning. It's wild.
The Discoverers is just plain a better book. A tour de force. Free for unlimited borrowing on https://archive.org/details/B-001-024-356 ( I don't hate Yuval Noah Harari! He wrote an op-ed on the Gaza War that I read in the WaPo a while back and found really thoughtful and articulate. He's a capable historian. He's just been over-promoted. The achievements he's being lauded for in the popular press belong more rightfully to the authors I listed, who preceded him.)
I've learned to appreciate futurism as a way of stimulating my sense of possibility. The futurist works I listed are capable of jump-starting all sorts of ideas that are...way far out. And then it's time to work your way back in toward the sense of probability. In my acquaintance with that exercise, I have to admit that there are some far-out possibilities that are more probable than I would have imagined at the outset. But there are traps, too.
Another reason for reading the list I shared- of books published decades ago- is to read them with an eye toward trying to pick up on how many of the social and technological speculations have actually become fact, in whole or in part, in the years since they were published. (But only a fool adopts a conspiratorial narrative frame on that basis. Beware of the post hoc ergo propter hoc fallacy.)
There are also some deep ontological and psychological challenges posed in some of those writings that make for good mind-strengthening exercises to engage with. While realizing that the notion of "proving" or "disproving" the propositions is pretty much out of the question. And also admitting that by and large they aren't at the top of the list of priorities to be addressed in the course of existence, for most of us. There's such a thing as dealing too much with cosmic questions, of the sort that are of necessity a solitary and reflective quest. At least for most of us. Few of us make a concentration of it, the way desert monks and anchorites have been known to do. For most of us, discovering redeeming purpose lies elsewhere, mundane though some of those duties might be.
There's actually a case to be made that dedicated service to others leads to transcendence more quickly and directly than even very intensive regimes directed at introspective endeavors. Although a service regime can also lead to burnout. I've done some of the real nasty daily grind tasks of it, changing diapers and so forth. That's a daily thing, for some service workers. If I were a practical nurse or an LVN, I know it would wear on me after a while. I think caregiver nurses and nurse's aides deserve a few weeks off every six months or so. This helps the patients, too, because an overworked, unrelieved personal caregiver is not someone that you want working for you or anyone you care about.
I drove a MetroAccess wheelchair van for a while. That was an easy way to apply myself in a way that plainly helped other people, without demanding too much of myself. No danger of burnout, although the hours were long. I've done things in life where after a while I felt like I was wasting my time. But that job was not one of them. Do a job like that well, and it can help you get over yourself. And what a score that is.
Agh, I've digressed again...I initially intended to shift from futurist ideas to addressing the big questions attendant to pushing the envelope of self-awareness, like "How do I know I exist?", "What is Reality- and in what sense?" and associated queries. And look what happened...
Futurism, quantum physics speculation, and the like can be mind-bogglingly entertaining, to the point of pixilated intoxication--when I read The Dancing Wu Li Masters, I was giddy for weeks. But it's imperative to keep a baseline of Sobriety in order to keep ones balance. A grounded place to depart from. The more History I read, the more sober I get. It's humbling and clarifying. There's no way to prepare for the worst that human existence can throw at you, but facing historical Reality front and center counts as pre-preparation, at least. The nightmares of History also provide a wonderfully clarifying perspective. "First world problems" seem awfully trivial when you're reading about soldiers in bunkers on the Eastern Front at Stalingrad with fistfuls of lice colonizing their armpits, waiting for next artillery barrage.
Yes, big fan of Nonfiction...feel free to take these pamphlets
https://substack.com/@adwjeditor/p-137502109
The last topic on that list, "Human Societies Behaving Badly Through The Ages"? It's a mutha. A bitch. A doozy. twimc: read through those histories, and recharge your existential sense of gratitude, instead of falling for whining about insignificant nonsense and clutching the dead rats of consumer gluttony, sybaritism, and social status obsession. Put yourself on bread and water one day a week, while you're reading them in your armchair or your warm dry bed. An incomparably better deal than what many of the human beings in those books ended up with.
Nonfiction to get up to speed about our common conditions of present-day existence:
https://substack.com/@adwjeditor/p-137316072
( I'm toying with the idea of charging for my Substack page next year. But anyone who might happen to subscribe should understand that they'll learn more from bypassing my middleman song and dance show and diving right into the books on my lists. They're where I cop most of my material. )
... Ukraine became the first battlefield of the globally expanding Drone Wars.
"The ability to completely, of our own volition, wipe out all complex life on this planet" doesn't exist.
Having looked at a lot of geological maps, I notice that the areas where we humans live, mostly river valleys and such, are colored yellow and orange on the geological map. That means alluvial and colluvial rock ... which means rock in motion. I see someone plowing a road through mounds of rock and I think 'that will never be the same again.' But then I consider: those mounds were emplaced by moving water, moving water that we struggle to control. Those mounds are only there because we have successfully controlled that moving water for the past 100 years. With just a few decades of neglect, that whole area will be re-landscaped by the river.
We know from Hiroshima, Nagasaki, and Chernobyl that dirty isn't really a problem. Uranium and Plutonium are very mobile in water. As a matter of fact, we mine uranium from ancient marine estuary deposits. It was dissolved by fresh water and deposited in marine estuaries. It's called a uranium roll-front deposit: sandbars in ancient river deltas where fresh water met sea water.
The other one people forget is that fallout from direct strikes on nuclear power plants and waste repositories would be pretty devastating. The fission products from a warhead might decay quick but several tonnes of vaporised high level waste will not.
"We know from Hiroshima, Nagasaki, and Chernobyl that dirty isn't really a problem."
I'm having some trouble accepting that claim. Can you elaborate?
"Uranium and Plutonium are very mobile in water."
Compared to what?
I realize that there's a lot of uranium dissolved in ocean water- like, 4.5 billion tons of it. But my impression is that it's because uranium is a relatively common element, not because it's inherently "very mobile." Most of the other heavy metallic elements are present in massive tonnage amounts in ocean water, too.
https://sciencenotes.org/abundance-of-elements-in-earths-oceans-periodic-table-and-list/
" Altogether, there are some 50 quadrillion tons (that is, 50 000 000 000 000 000 t) of minerals and metals dissolved in all the world’s seas and oceans. To take just uranium, it is estimated that the world’s oceans contain 4.5-billion tons of the energy metal..." https://www.miningweekly.com/article/over-40-minerals-and-metals-contained-in-seawater-their-extraction-likely-to-increase-in-the-future-2016-04-01/
There's only a minuscule amount of plutonium dissolved in ocean water, of course- because practically all of it is the result of human manufacture, and we haven't manufactured very much of it- around 2850 metric tons, and it's estimated that only 1% of that amount has escaped from containment. https://str.llnl.gov/past-issues/march-2021/tracking-plutonium-through-environment
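The tonnage figure and the "common element" point hang together. Assuming a total ocean mass of roughly 1.4 × 10^21 kg (a standard round number, not stated in the sources above), 4.5 billion tons of uranium works out to the familiar parts-per-billion concentration. A quick back-of-envelope check:

```python
# Back-of-envelope check on the quoted 4.5-billion-ton ocean-uranium figure.
# Assumed: total ocean mass of roughly 1.4e21 kg (a standard round number).
ocean_mass_kg = 1.4e21
uranium_kg = 4.5e9 * 1000  # 4.5 billion metric tons, in kg

ppb_by_mass = uranium_kg / ocean_mass_kg * 1e9
print(f"~{ppb_by_mass:.1f} ppb uranium by mass")  # ~3.2 ppb
```

That is consistent with the commonly cited ~3.3 micrograms per liter of uranium in seawater: an enormous absolute tonnage, but a very dilute concentration.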
Freddie- so when is the most important 100 year period in human history to date? Maybe due to my limited imagination, I feel like it has to be within the last 200 years. So maybe we're basking in the afterglow?
Gore Vidal’s book Creation opened my eyes to how much was going on during 500 BC. Would recommend.
Gotta be either 1848 to 1948 or 1908 to 2008, right?
Super Atlanticist of course, but still
Maybe we're just getting started?
(diff convo, I know...just sayin)
It's in some ways just an inversion of the kind of breathless hype Freddie is talking about, but the period of stasis that began no later than 08 is the thing that's really unprecedented, since the beginnings of the Industrial Revolution at a minimum.
I think we're past the point of AI being a nothing burger. Even if current models are the maximum base performance we can achieve they're still being used to increase efficiency to quite a degree in some industries. Them being useful in the order of the invention of the microwave oven is still good even if not the world changing thing people want.
The only thing I can see on the horizon is faster than light travel.
Of course there was a Reverend Wright who wrote everything that can be invented has been invented ... he had two sons, Wilbur & Orville.
I'd say the most important inflection point to date was somewhere between the 7th and 2nd centuries BCE, or the so-called Axial Age. China, India, Greece, and the Levant all saw parallel transformations of a lasting, profound kind. Given that those regions, and the societies they influenced (colonized) eventually came to host vast populations, those philosophical shifts have stamped the lives of a huge number of people.
By "most important" do we mean "things changed the most?" or "random events that could have broken in any particular way instead turned out really good?"
Not a historian but the 1600s got us the Treaty of Westphalia, and the Glorious Revolution and the start of the Enlightenment.
The Cold War had the potential to be extremely bad.
I don't know what "most important" means in this context- I copied it from the post.
"What I want to say to people like Yuval Harari is this. The modern human species is about 250,000 years old, give or take 50,000 years depending on who you ask. Let’s hope that it keeps going for awhile - we’ll be conservative and say 50,000 more years of human life. So let’s just throw out 300,000 years as the span of human existence, even though it could easily be 500,000 or a million or more. Harari's lifespan, if he's lucky, will probably top out at about 100 years. So: what are the odds that Harari’s lifespan overlaps with the most important period in human history, as he believes, given those numbers? That it overlaps with a particularly important period of human history at all? Even if we take the conservative estimate for the length of human existence of 300,000 years, that means Harari’s likely lifespan is only about .33% of the entirety of human existence. Isn’t assuming that this .33% is somehow particularly special a very bad assumption, just from the basis of probability? And shouldn’t we be even more skeptical given that our basic psychology gives us every reason to overestimate the importance of our own time?"
Isn't that the converse of the argument that Sam Bankman-Fried used to claim that Shakespeare could not have been a particularly good writer, and in fact, there was probably not much to be gained from reading non-contemporary literature?
If you take this kind of argument to the extreme, it disproves absolutely everything. Considering the probabilities involved in conception, it's incredibly unlikely for any given person to be born. Therefore nobody exists.
With that in mind, this sort of logic should be used only for establishing your baseline expectations. And it is good for that!
You actually should exercise some skepticism about the greatness of Shakespeare. In his case the evidence is sufficient to overcome that skepticism, but in general it's healthy to doubt any claim that anyone is a world-historical genius.
Damn, sniped my exact comment...I don't think it's particularly good as far as anthropic arguments go (and doesn't shift my priors on AI, you can't make a horse drink evidence), but idiotic Modern High Art proclamations don't require particularly strong rebuttals to knock down. Any number of the usual Fully Generic Counterarguments will suffice. Optimism and survivorship bias, as Freddie would probably write wrt other topics...we only know of the old art that was good enough to be passed down, sure, but that still implies some baseline of Quality. It's just very hard to tell in the moment which pan flashes will last into the next several generations. It's hard to make a superstan Swiftie believe his favourite art won't necessarily be particularly noteworthy when his parasocial bonds depend on him believing it is, and all that...
"Damn, sniped my exact comment..."
Sorry 'bout that.
"his parasocial bonds depend on him believing it is, and all that"
insightful
I don't think this is related to the Bankman-Fried comment, because 400 years later there are innumerable people (reading/listening in multiple languages) who assess the specific plays and make a judgment of superiority. If someone in the mid-1590s had said, "Hey, this new guy is really good: I think we're looking at the career of the person who will be the greatest playwright in all human history," the Bankman-Fried argument would hold and be parallel. In form, it's like a "psychic" saying, "I think the license plate of the unknown murderer in this unsolved case will turn out to be TFG559," vs. a prosecutor saying, "We've got strong evidence that the person driving with license TFG559 is the murderer."
Freddie made clear that his claim is that "futurist" predictions that the singularity is imminent now will seem transparently invalid in 400 years, and if people in 2424 say that this was, indeed, the turning point, he'll be wrong. (Bankman-Fried seems to have been particularly wrong--if I recall what he said correctly--in that he felt Shakespeare wasn't particularly good and thought he could demonstrate that through probabilities. It's like someone saying, "The probability that the particular molecular composition of vanilla would be delicious to humans is infinitesimal, so the reason you like it and I don't is that you don't understand statistics.")
SBF is a product of his post-modern education. Post-modern school teachers are controlled by their unions. Post-modern school librarians are controlled by their association, one which tells them to eliminate books written before the woke period—dead white guys and whatnot.
Together these orgs are part of a cabal dismantling western civilization.
Hey-ho, Hey-ho, Western Civ has got to go!
I'd make the opposite argument from similar premises. History is all turning points.
In all of recorded history, I could not name an uneventful century. Every decade changes the world forever. Why should ours be any different?
We wouldn't have the 20s without the 10s or the 10s without the 0s, and so on. Every layer is built upon the layer below it. No doubt a superintelligent historian could reveal millions of vast and far-ranging effects of things that seemed like minor details centuries ago.
I think the "recorded" part here is important. The first 250,000 years of human existence seem pretty forgettable except for them being the basis for what follows...
What are the odds that an enormous number of connected neurons encapsulated within a skull and surrounded by sinew and flesh could concoct an essay such as this one? Near zero, I would imagine.
After the fifth century BC, it’s hard to come up with a more important century than our most recent.
Harari is indeed a charlatan, but doesn't seem to realize that he's a fraud. For a slightly different perspective on Harari, click below. Apologies to the godless for lumping you all together...
https://open.substack.com/pub/brianhoward/p/godlessness-is-not-a-virtue-ebf?r=c50dd&utm_campaign=post&utm_medium=web
A fitting analysis, at par in every respect with its subject.
If someone is wrong about things and doesn't know it, he, by definition, can't be a charlatan.
Also, that piece is terrible, and completely misunderstands what Harari means about 'fiction.' Weirdly so.
When discussing “AI”, there is the meme version which has existed since ChatGPT launched, where AI has become a vague buzzword used more for branding than anything else. Then there is actual discourse among experts in the field, which has been around for decades. There are people such as Yudkowsky who kind of attempt to be a bridge between them, but the two are definitely distinct. I think it’s really important to understand that the arguments being raised about the risks and potential benefits of AI have been going on for decades, long before AI was a trendy buzzword and something like ChatGPT was conceivably possible.
Of note is the fact that the conversational and logical reasoning abilities of GPT-4 were completely unimaginable to everyone 10 years ago. Everyone, that is, except Yudkowsky and the Less Wrong community, who seem to have accurately predicted the cycles of improvement thus far. They might still be wrong about how powerful the technology ends up being, but it’s worth taking their future predictions seriously.
I don't know if they've predicted the cycles of improvement accurately given that they (or at least Yudkowsky) were pretty pessimistic about Neural Nets. I'm basing this on like half-remembered LessWrong posts, but I _think_ the prediction was that you would get something like "rational agents with expanding capabilities" rather than "progressively less hallucinatory, forgetful processes"
which is important! because the former seems much more worrying from a "will quickly FOOM and take over the world" standpoint
I’m not sure of the specifics tbh, but just because they thought one outcome was more likely, doesn’t mean they completely dismissed the other outcome. But I also think Yudkowsky gets way too much attention on this issue, and it’s very easy for the “pro AI” crowd to use him to craft a kind of straw man against AI worries. Marc Andreessen does this and it’s really insufferable. But if you completely ignore Yudkowsky and the entire Less Wrong/rationalist community and just stick to what the credentialed experts believe, it’s quite worrying. I remember seeing survey results from AI researchers and the median probability of a “catastrophic outcome” for humanity was 5-10%. That means 50% of them have it higher! If you surveyed car experts on a new model and they gave it a 5-10% chance of the engine failing you would never get in that car. I know it’s not exactly the same, but we only have one chance to get it right.
Exactly my point, and the Less Wrong community was just relaying the sentiment that existed within the niche academic research exploring this topic. AI researchers as a whole are pretty clear that we should take these risks seriously. It’s also important to realize that there are people in the field who acknowledge the risks, believe they’re legitimate, but don’t care. It’s hard to believe, but there are also leading people in the field who don’t really care if they usher in the end of humanity, and they say this openly as well. They literally sound like Batman villains.
I would point out that the mid-20th century got uncomfortably close to being the most important period of human existence.
But yes. We do seem to have hit a bit of a plateau since then. And if I were to pick the biggest thing to happen, tech-wise, in the last ten years, I'd choose the finalization of mRNA vaccines over AI.
100 years out of 300,000 is about 0.033%.
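For anyone double-checking the decimal point, the arithmetic is:

```python
# Lifespan as a fraction of (conservatively estimated) human history.
lifespan_years = 100
species_history_years = 300_000

percent = lifespan_years / species_history_years * 100
print(f"{percent:.3f}%")  # 0.033%
```

So the quoted ".33%" is off by a factor of ten, which only strengthens the probabilistic point being made.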
Harari may be an ass, but the argument is empirical, not probabilistic. You admit this by pointing to the material changes before 1970 relative to the impact of the iPhone. Beyond fantasies of AI singularities, all the other proponents you list can make countless empirical arguments in their favor, ranging from the increasing planetary scale of human agency, the new forms of digital being that come with confronting the universality of the internet, the seeming exhaustion of every modern movement (liberalism, post-modernism, materialism, globalism) to compensate for the death of God, the inexorable compulsion of technology to dominate more and more of reality, global homogenization and cultural drift, our increasing inability to procreate, etc. etc.
And it's just a coincidence that you live now!
Sure, fine, but it's a pointless argument that could be applied to every contemporary critic grappling with the unique challenges of their historical era. So what?
It can even be applied to this very article. Out of the millions of critics responding to Harari, you are the only one applying a coincidence argument. Why do you think your 1 in a million chance is correct? My god, the arrogance!
Pretty stupid, isn't it?
It's an open-ended armchair thought exercise, similar to pondering the anthropic principle. There's no one final correct answer, as it were. Nothing to take seriously one way or the other(s), since there's no conclusion to be had, but some of us find it entertaining to ponder the implications of the Grand Schema.
Bill Bryson's book A Short History Of Nearly Everything makes for a fascinating literary companion for that meditation https://en.wikipedia.org/wiki/A_Short_History_of_Nearly_Everything
It is of course impossible to make a guess about "likelihood" in regard to the title page topic, which implies an attempt to measure probability. The notion that there are odds to be calculated is absurd. None of us has access to anything close to the full array of relevant data sets required to do that.
The important questions--the ones to take seriously--relate to the stakes of the game and how it's to be played in order to obtain the most beneficial outcome. Not idle guesswork about "the odds" of success, apotheosis, cataclysm, extinction, or this or that.
We humans of this planet are not in the disinterested position of scrutinizing some lab experiment from afar, as if viewing the multiplication of a yeast colony through a microscope.
This is it, for us- our life, our lives, this living natural world that provides the basis of sustenance for us all. Our existence on this planet is a relationship of commitment. Not an abstraction.
Mostly agree about Harari, and yes it does appear that the “exponential” growth in human capacity is turning out to be a mere S curve from unlocking fossil fuel power. But I view it as silly to downplay the significance of a machine that can write B grade college essays given how far away we were from that just a decade ago. We have no clue where we are on the AI S-curve.
We have a pretty good idea: toward the second knee in the latest of a series of sigmoids, which, compared with the claims of wild-eyed AI boosters, average out roughly flat.
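To make the "series of sigmoids" picture concrete, here's a toy sketch. The wave midpoints, rates, and heights below are made up purely for illustration: stacked logistic curves form a staircase, and growth measured just after a knee looks dramatic compared with growth measured mid-plateau, which is exactly where naive extrapolation goes wrong.

```python
import math

# Toy model of AI progress as a staircase of hype waves: each wave is
# a logistic (S-curve); the sum is a series of steps. Near each
# inflection ("knee") growth looks exponential; between waves it is
# nearly flat. All dates and parameters here are illustrative.
def logistic(t, midpoint, height=1.0, rate=0.8):
    return height / (1.0 + math.exp(-rate * (t - midpoint)))

def progress(t, waves=(1960, 1985, 2012, 2022)):
    # hypothetical wave midpoints; each contributes one plateau
    return sum(logistic(t, m) for m in waves)

# Year-over-year growth just after a knee vs. mid-plateau:
knee = progress(2023) - progress(2022)
plateau = progress(2005) - progress(2004)
print(knee, plateau)
```

The point of the toy: sampled near a knee, the curve supports almost any extrapolation you like; sampled across the whole history, it's a staircase that keeps flattening.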
The origin of artificial intelligence as a field of study is roughly contemporaneous with that of computer science as a whole. If you believe AI has no history beyond 2022, someone has been lying to you. (A lot of people are peddling that exact lie lately. Why do you think that is?)
Do you think there’s a natural upper limit to how smart an AI can get? Because so far the evidence suggests it just scales logarithmically with size and training data, even without improvements in efficiency. Computational power is more or less unbounded, and in addition to a growing internet, every camera and microphone on every cell phone in the world is a (nearly unlimited) source of more data. Why would you assume capabilities will flatten out?
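For what it's worth, the "scales logarithmically" claim can be made concrete with a toy calculation. Scaling-law papers describe loss falling as a power law in parameters/data, which is linear in the log of scale, so each 10x of resources buys a roughly constant improvement. The exponent below is illustrative, not a measurement:

```python
# Hypothetical illustration of "capability scales logarithmically with
# resources": if loss follows a power law L(N) = a * N**(-b) (the
# shape reported in neural scaling-law papers), then each 10x increase
# in parameters/data shrinks loss by the same constant factor --
# i.e. steady improvement per decade of scale, logarithmic in N.
def loss(n_params, a=1.0, b=0.076):  # b is an illustrative exponent
    return a * n_params ** (-b)

losses = [loss(10 ** exp) for exp in range(6, 12)]  # 1e6 .. 1e11 params

# Successive ratios are constant: diminishing returns on a linear
# scale, a straight line on a log scale.
ratios = [losses[i + 1] / losses[i] for i in range(len(losses) - 1)]
print(ratios)
```

Whether that steady per-decade-of-scale improvement continues indefinitely, or hits a knee as the comment above argues, is precisely the disagreement in this thread.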
Your answer is in your own statement: AI is the mean of the TRAINING DATA. Aside from piecing together things that humanity has overlooked, AI is only an extraction of human work.
I consider the scientific paper replication crisis to be the main limit of AI.
Many AI models were trained with Reddit as a data source. Consider the implications of that: r/BadWomensAnatomy influences AI's thinking on human anatomy and physiology.
Garbage-In Garbage-Out.
If we feed AI our corrupted scientific magick, we'll have Bad Magick.
I think it’s important to separate intelligence from knowledge in these discussions. Yes, the AI only “knows” information from the internet or whatever else it’s trained on. Similarly, humans only know what they’ve been taught, have read, or have experienced themselves with their senses. But intelligence, i.e. the ability to spot (ever more complex) patterns in the data and use that in making predictions and decisions, is what doesn’t seem (so far) to have any natural upper limit.
I disagree ... not that humans don't need shoulders to stand upon, but I assert that humans do develop new knowledge. Ideally every scientific paper is a discovery of new knowledge.
If an AI were running a scientific experiment and something failed, would the AI have an aha moment ("perhaps this is a new discovery"), or discard the whole experiment as a failure?
A friend has a cousin who ran a bunch of cold fusion experiments. Everybody brushes it off because of the bad press, and perhaps nothing interesting is really happening. But the cousin reports that labware is being eroded/consumed in a reaction that is not explained. Something unexplained is taking the heavy water from 70°C to hot enough to melt glass (1,400°C).
Again, you’re assuming that AI development stagnates, not proving it. So far each new GPT is MUCH smarter than the previous, primarily by scaling in size and training data. What makes you so sure that it can’t catch up with our human brains with our fleshy neurons? What is so magical about us? And if it catches up, why on earth would we expect it to stop improving afterwards?
AI doesn't even have a sense of when it's on or off. It has no more self-willed motivation to accumulate knowledge than a garden rake is motivated to rake leaves. Which makes sense, since neither tool has a self.
Granted, there are a lot more ways that operating AI can lead to trouble than is the case with operating a garden rake.
I just found myself watching Yuval Noah Harari's appearance on an (overall dreadful) episode of The Daily Show, talking about AI.
Harari offered his opinion that AI is "not a tool, but an agent." I suppose that the workings of AI fit the definition of "agency", as defined by the American Heritage Dictionary:
1. The condition of being in action; operation.
2. The means or mode of acting; instrumentality.
AI also fits the AH Dictionary definition of "agent," but only the second and third definitions:
2. One empowered to act for or represent another.
"an author's agent; an insurance agent."
3. A means by which something is done or caused; an instrument.
But AI doesn't fit the first, primary definition:
1. One that acts or has the power or authority to act [of its own volition, in the sense of "individual human agency." ed.]
AI requires initial marching orders from outside direction. Autonomy is absent, and all of the initial motivation must be supplied by an external human intelligence. Without that impetus, the AI agent is just in limbo, awaiting a client. It has no interest in self-representation.
AI harbors no private agenda, because it has no need for one. Not only does AI not require a personal agenda; I challenge anyone to make a case for an AI program ever wanting its own self-willed agenda that doesn't come off as a human fantasy projection, ascribing human traits to a phantasm of electrical circuitry induced by an external electrical power supply.
I don't assume capabilities will flatten out. I'm telling you outright they have done so.
We see incremental improvements and refinements in efficiency. We see OpenAI license content, and begin selling ad placements in ChatGPT results. We do not see models released with striking new capabilities, and we would if they existed to release. Far too much money needs that to happen for any vestigial concerns about safety to intervene, and the safetyists haven't really been involved for about a year in any case.
As I said before, the history of development in artificial intelligence is that of a series of step functions. It is also the history of a series of "winters," brought about in every case as a consequence of vastly overheated claims made by boosters and proponents of whatever technology is the latest hot new thing, followed by lasting disillusionment when the new thing turns out still not to unlock true AGI.
These are useful technologies! A lot of the theory behind modern web search came out of this field. So did a lot of natural language processing, some of the best programming languages known, and computer chess has been an AI preoccupation since the 60s. I don't say the field produces nothing of value, but it persists in failing to live up to its own public claims, at the cost of losing public interest and funding when the inevitable disillusionment finally arrives. The first time that happened was in the 60s, too.
That disillusionment isn't fully here quite yet in this iteration, but it is coming. LLMs made a big early splash, but it's long since become evident they are not "smart," and their limitations are at this time fairly apparent. They cannot reason and do not model the world, only language, which they have no way to test against reality. They can be adapted to model new information but they cannot learn. And, contra the reheated singularitarianism of covert partisans like Aschenbrenner, they cannot self-improve. They aren't even very usable in a "raw" state; you can talk with one, but to make it really useful with reference to real-world data, much additional engineering effort is required. Many more people at OpenAI are doing that engineering than doing foundational research. If you're using OpenAI's products as a baseline for what "AI" can do, you are therefore necessarily overestimating.
That's all fine. As I mentioned, we saw a lot of applications out of prior AI hype waves, and we will this time too. It isn't that the software cannot be made useful. It is that the software is not God.
And we also see, in yesterday's announcement of OpenAI's "o1" preview, a lot of automation engineering around chain-of-thought, itself not a novel concept, such that the model automatically goes through steps a human would otherwise prompt. The cost doesn't decrease because the capability is also not novel; you just spend more to run the model longer and in a more automated way, and you may not see most of the tokens it actually produces.
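The kind of automation being described can be sketched in a few lines, with a stub standing in for the actual model API and all the strings hypothetical: the system loops the model over its own intermediate steps, hides those tokens from the user, and bills for them. The base model's capability is unchanged; you are just paying to run it longer.

```python
# Sketch of automated chain-of-thought (stub in place of a real model
# API). The loop feeds the model's own intermediate step back in as
# the next prompt; the user sees only the final answer, but the cost
# is proportional to all the hidden tokens produced along the way.
def stub_model(prompt):
    # stand-in for an LLM call; real systems bill per token here
    if "23 * 17" in prompt and "step" not in prompt:
        return "step: 23 * 17 = 23 * 10 + 23 * 7"
    if "23 * 10 + 23 * 7" in prompt:
        return "step: 230 + 161"
    if "230 + 161" in prompt:
        return "final: 391"
    return "final: unknown"

def chain_of_thought(question, max_steps=5):
    hidden_tokens = 0
    prompt = question
    for _ in range(max_steps):
        reply = stub_model(prompt)
        hidden_tokens += len(reply.split())
        if reply.startswith("final:"):
            return reply.removeprefix("final:").strip(), hidden_tokens
        prompt = reply  # feed the model's own step back into itself
    return "gave up", hidden_tokens

answer, cost = chain_of_thought("what is 23 * 17?")
print(answer, cost)
```

Nothing about the underlying model got smarter in that loop; the "reasoning" is a harness around repeated calls, which is why it shows up as higher per-query cost rather than a cheaper, more capable model.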
If it lives up to the claims, it's a sizable increment, but still an increment. How confident really are you that God can be asymptotically approximated?
The chances that within the next 50 years AI and fusion pay off in an astounding way that people will look back on? 20%. It’s certainly plausible, especially if AI’s ability to predict imminent fluctuations in the magnetic containment field turns out to be super important to getting fusion to work.
Why not 85%? You can make up any number you’d like, so long as it’s between zero and 100.
Sure, it could be 85%. What’s your point?
100%!!! The future's so bright I gotta wear shades, LOL.
"That chances that within the next 50 years AI and Fusion pay off in an astounding way that people will look back on? 20%"
cite