173 Comments

Freddie- so when is the most important 100 year period in human history to date? Maybe due to my limited imagination, I feel like it has to be within the last 200 years. So maybe we're basking in the afterglow?


Gore Vidal’s book Creation opened my eyes to how much was going on during 500 BC. Would recommend.

Sep 6·edited Sep 6

Gotta be either 1848 to 1948 or 1908 to 2008, right?

Super Atlanticist of course, but still


Maybe we're just getting started?

(diff convo, I know...just sayin)


It's in some ways just an inversion of the kind of breathless hype Freddie is talking about, but the period of stasis that began no later than 08 is the thing that's really unprecedented, since the beginnings of the Industrial Revolution at a minimum.


Sjellic: When you say "the period of stasis that began no later than 08" are you talking about 2008 until today? Even if AI turns out to be a nothingburger, I'm not sure the last 16 years have been so static. Social media, COVID, the sharing economy, and advancements in biotech/solar/robotics all strike me as fairly significant.

Agree that they don't quite compare to electricity, airplanes, vaccines, WWI/II, modern plumbing, refrigeration, television, and the web, but it's only been 30 years since the last of those.


I think we're past the point of AI being a nothingburger. Even if current models are the maximum base performance we can achieve, they're still being used to increase efficiency to quite a degree in some industries. Their being useful on the order of the invention of the microwave oven is still good, even if not the world-changing thing people want.


The only thing I can see on the horizon is faster-than-light travel.

Of course there was a Reverend Wright who wrote that everything that can be invented has been invented ... he had two sons, Wilbur & Orville.


I'd say the most important inflection point to date was somewhere between the 7th and 2nd centuries BCE, or the so-called Axial Age. China, India, Greece, and the Levant all saw parallel transformations of a lasting, profound kind. Given that those regions, and the societies they influenced (colonized), eventually came to host vast populations, those philosophical shifts have stamped the lives of a huge number of people.


By "most important" do we mean "things changed the most" or "random events that could have broken in any particular way instead turned out really good"?

Not a historian but the 1600s got us the Treaty of Westphalia, and the Glorious Revolution and the start of the Enlightenment.

The Cold War had the potential to be extremely bad.


I don't know what "most important" means in this context- I copied it from the post.


"What I want to say to people like Yuval Harari is this. The modern human species is about 250,000 years old, give or take 50,000 years depending on who you ask. Let’s hope that it keeps going for awhile - we’ll be conservative and say 50,000 more years of human life. So let’s just throw out 300,000 years as the span of human existence, even though it could easily be 500,000 or a million or more. Harari's lifespan, if he's lucky, will probably top out at about 100 years. So: what are the odds that Harari’s lifespan overlaps with the most important period in human history, as he believes, given those numbers? That it overlaps with a particularly important period of human history at all? Even if we take the conservative estimate for the length of human existence of 300,000 years, that means Harari’s likely lifespan is only about .33% of the entirety of human existence. Isn’t assuming that this .33% is somehow particularly special a very bad assumption, just from the basis of probability? And shouldn’t we be even more skeptical given that our basic psychology gives us every reason to overestimate the importance of our own time?"

Isn't that the converse of the argument that Sam Bankman-Fried used to claim that Shakespeare could not have been a particularly good writer, and in fact, there was probably not much to be gained from reading non-contemporary literature?


If you take this kind of argument to the extreme, it disproves absolutely everything. Considering the probabilities involved in conception, it's incredibly unlikely for any given person to be born. Therefore nobody exists.

With that in mind, this sort of logic should be used only for establishing your baseline expectations. And it is good for that!

You actually should exercise some skepticism about the greatness of Shakespeare. In his case the evidence is sufficient to overcome that skepticism, but in general it's healthy to doubt any claim that anyone is a world-historical genius.


Damn, sniped my exact comment...I don't think it's particularly good as far as anthropic arguments go (and doesn't shift my priors on AI, you can't make a horse drink evidence), but idiotic Modern High Art proclamations don't require particularly strong rebuttals to knock down. Any number of the usual Fully Generic Counterarguments will suffice. Optimism and survivorship bias, as Freddie would probably write wrt other topics...we only know of the old art that was good enough to be passed down, sure, but that still implies some baseline of Quality. It's just very hard to tell in the moment which pan flashes will last into the next several generations. It's hard to make a superstan Swiftie believe his favourite art won't necessarily be particularly noteworthy when his parasocial bonds depend on him believing it is, and all that...


"Damn, sniped my exact comment..."

Sorry 'bout that.


"his parasocial bonds depend on him believing it is, and all that"

insightful


I don't think this is related to the Bankman-Fried comment, because 400 years later there are innumerable people (reading/listening in multiple languages) who assess the specific plays and make a judgment of superiority. If someone in the mid-1590s had said, "Hey, this new guy is really good: I think we're looking at the career of the person who will be the greatest playwright in all human history," the Bankman-Fried argument would hold and be parallel. In form, it's like a "psychic" saying, "I think the license plate of the unknown murderer in this unsolved case will turn out to be TFG559," vs. a prosecutor saying, "We've got strong evidence that the person driving with license TFG559 is the murderer."

Freddie made clear that his claim is that "futurist" predictions that the singularity is imminent now will seem transparently invalid in 400 years, and if people in 2424 say that this was, indeed, the turning point, he'll be wrong. (Bankman-Fried seems to have been particularly wrong--if I recall what he said correctly--in that he felt Shakespeare wasn't particularly good and thought he could demonstrate that through probabilities. It's like someone saying, "The probability that the particular molecular composition of vanilla would be delicious to humans is infinitesimal, so the reason you like it and I don't is that you don't understand statistics.")


SBF is a product of his post-modern education. Post-modern school teachers are controlled by their unions. Post-modern school librarians are controlled by their association, one which tells them to eliminate books written before the woke period—dead white guys and whatnot.

Together these orgs are part of a cabal dismantling western civilization.

Hey-ho, Hey-ho, Western Civ has got to go!

Sep 6·edited Sep 6

I agree with you about this specific moment not being the most significant in Human History. However, I would argue that the development of nuclear weaponry and power is of such existential significance that there's a good possibility that we either:

1) Have already passed the most significant moment, perhaps by not blowing ourselves to pieces at some point in the Cold War OR

2) The most significant moment is somewhere in the future where we do destroy ourselves.

In other words, I'm arguing that we can disqualify anything before the Bomb as being the most significant and important moment in human history. The ability to completely, of our own volition, wipe out all complex life on this planet (and for all we know, the universe) seems like a pretty decisive break for us. It also comes conveniently near the end of the 1830s-1970s era that you talked about.

What it says about our species that some large explosions are probably the most significant thing to happen thus far is a bit depressing, but there you are.

That said, you're right. This AI thing...it ain't it. Overhyped.


"The ability to completely, of our own volition, wipe out all complex life on this planet" doesn't exist.

Sep 6·edited Sep 6

With our present warhead stockpiles, no, but we have the ability to make enough warheads of enough power and dirty enough to do it.


Having looked at a lot of geological maps, I've noticed that the areas where we humans live, mostly river valleys and such, are colored yellow and orange on the map. That means alluvial and colluvial rock ... which means rock in motion. I see someone plowing a road through mounds of rock and I think, 'that will never be the same again.' But then I consider: those mounds were emplaced by moving water, moving water that we struggle to control. Those mounds are only there because we have successfully controlled that moving water for the past 100 years. With just a few decades of neglect, that whole area will be re-landscaped by the river.

We know from Hiroshima, Nagasaki, and Chernobyl that dirty isn't really a problem. Uranium and Plutonium are very mobile in water. As a matter of fact, we mine uranium from ancient marine estuary deposits: it was dissolved by fresh water and deposited in marine estuaries. These are called uranium roll-front deposits, sandbars in ancient river deltas where fresh water met sea water.


What does that have to do with anything? With enough ground bursts we can disrupt the sun's energy coming into earth enough, for long enough, to basically kill the ecosystem. Contaminated fallout as the soot came down could finish the job. The broader point is that we have the ability to so severely affect the planet that only bacteria, algae, etc. could survive.


The other one people forget is that fallout from direct strikes on nuclear power plants and waste repositories would be pretty devastating. The fission products from a warhead might decay quickly, but several tonnes of vaporised high-level waste will not.


"We know from Hiroshima, Nagasaki, and Chernobyl that dirty isn't really a problem."

I'm having some trouble accepting that claim. Can you elaborate?

"Uranium and Plutonium are very mobile in water."

Compared to what?

I realize that there's a lot of uranium dissolved in ocean water- like, 4.5 billion tons of it. But my impression is that it's because uranium is a relatively common element, not because it's inherently "very mobile." Most of the other heavy metallic elements are present in massive tonnage amounts in ocean water, too.

https://sciencenotes.org/abundance-of-elements-in-earths-oceans-periodic-table-and-list/

" Altogether, there are some 50 quadrillion tons (that is, 50 000 000 000 000 000 t) of minerals and metals dissolved in all the world’s seas and oceans. To take just uranium, it is estimated that the world’s oceans contain 4.5-billion tons of the energy metal..." https://www.miningweekly.com/article/over-40-minerals-and-metals-contained-in-seawater-their-extraction-likely-to-increase-in-the-future-2016-04-01/

There's only a minuscule amount of plutonium dissolved in ocean water, of course- because practically all of it is the result of human manufacture, and we haven't manufactured very much of it- around 2850 metric tons, and it's estimated that only 1% of that amount has escaped from containment. https://str.llnl.gov/past-issues/march-2021/tracking-plutonium-through-environment


I'd make the opposite argument from similar premises. History is all turning points.

In all of recorded history, I could not name an uneventful century. Every decade changes the world forever. Why should ours be any different?

We wouldn't have the 20s without the 10s or the 10s without the 0s, and so on. Every layer is built upon the layer below it. No doubt a superintelligent historian could reveal millions of vast and far-ranging effects of things that seemed like minor details centuries ago.


I think the "recorded" part here is important. The first 250,000 years of human existence seem pretty forgettable except for them being the basis for what follows...


What are the odds that an enormous number of connected neurons encapsulated within a skull and surrounded by sinew and flesh could concoct an essay such as this one? Near zero, I would imagine.


After the fifth century BC, it’s hard to come up with a more important century than our most recent.


Harari is indeed a charlatan, but doesn't seem to realize that he's a fraud. For a slightly different perspective on Harari, click below. Apologies to the godless for lumping you all together...

https://open.substack.com/pub/brianhoward/p/godlessness-is-not-a-virtue-ebf?r=c50dd&utm_campaign=post&utm_medium=web


A fitting analysis, at par in every respect with its subject.


If someone is wrong about things and doesn't know it, he, by definition, can't be a charlatan.

Also, that piece is terrible, and completely misunderstands what Harari means about 'fiction.' Weirdly so.


When discussing “AI”, there is the meme version which has existed since Chat GPT, where AI has become a vague buzzword used more for branding than anything else. Then there is actual discourse among experts in the field which has been around for decades. There are people such as Yudkowsky who kind of attempt to be a bridge between them, but the two are definitely distinct. I think it’s really important to understand that the arguments being raised about the risks and potential benefits of AI have been going on for decades, long before AI was a trendy buzzword and something like ChatGPT was conceivably possible.


Of note is the fact that the conversational and logical reasoning abilities of GPT-4 were completely unimaginable to everyone 10 years ago. Everyone, that is, except Yudkowsky and the LessWrong community, who seem to have accurately predicted the cycles of improvement thus far. They might still be wrong about how powerful the technology ends up being, but it's worth taking their future predictions seriously.

Sep 6·edited Sep 6

I don't know if they've predicted the cycles of improvement accurately, given that they (or at least Yudkowsky) were pretty pessimistic about neural nets. I'm basing this on half-remembered LessWrong posts, but I _think_ the prediction was that you would get something like "rational agents with expanding capabilities" rather than "progressively less hallucinatory, forgetful processes."

Which is important! Because the former seems much more worrying from a "will quickly FOOM and take over the world" standpoint.


I'm not sure of the specifics tbh, but just because they thought one outcome was more likely doesn't mean they completely dismissed the other outcome. But I also think Yudkowsky gets way too much attention on this issue, and it's very easy for the "pro AI" crowd to use him to craft a kind of straw man against AI worries. Marc Andreessen does this and it's really insufferable. But if you completely ignore Yudkowsky and the entire LessWrong/rationalist community and just stick to what the credentialed experts believe, it's quite worrying. I remember seeing survey results from AI researchers where the median probability of a "catastrophic outcome" for humanity was 5-10%. That means 50% of them have it higher! If you surveyed car experts on a new model and they gave it a 5-10% chance of the engine failing, you would never get in that car. I know it's not exactly the same, but we only have one chance to get it right.


Exactly my point, and the LessWrong community was just relaying the sentiment that existed within the niche academic research exploring this topic. AI researchers as a whole are pretty clear that we should take these risks seriously. It's also important to realize that there are people in the field who acknowledge the risks, believe they're legitimate, but don't care. It's hard to believe, but there are also leading people in the field who don't really care if they usher in the end of humanity, and they say this openly as well. They literally sound like Batman villains.


I would point out that the mid-20th century got uncomfortably close to being the most important period of human existence.

But yes. We do seem to have hit a bit of a plateau since then. And if I were to pick the biggest thing to happen, tech-wise, in the last ten years, I'd choose the finalization of mRNA vaccines over AI.


100 years out of 300,000 is about 0.03%, not 0.33%.
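For what it's worth, the arithmetic is a one-liner (variable names are mine, using the post's own round figures):

```python
# The post's round numbers: ~300,000 years of human existence,
# ~100 years for a lucky human lifespan.
lifespan_years = 100
human_history_years = 300_000

share = lifespan_years / human_history_years * 100  # as a percentage
print(f"{share:.3f}%")  # prints "0.033%"
```

So the quoted figure of .33% overstates the fraction by a factor of ten, though that only strengthens the post's argument.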


Harari may be an ass, but the argument is empirical, not probabilistic. You admit this by pointing to the material changes before 1970 relative to the impact of the iPhone. Beyond fantasies of AI singularities, all the other proponents you list can make countless empirical arguments in their favor, ranging from the increasing planetary scale of human agency, the new forms of digital being that come with confronting the universality of the internet, the seeming exhaustion of every modern movement (liberalism, post-modernism, materialism, globalism) to compensate for the death of God, the inexorable compulsion of technology to dominate more and more of reality, global homogenization and cultural drift, our increasing inability to procreate, etc. etc.

author

And it's just a coincidence that you live now!


Sure, fine, but it's a pointless argument that could be applied to every contemporary critic grappling with the unique challenges of their historical era. So what?

It can even be applied to this very article. Out of the millions of critics responding to Harari, you are the only one applying a coincidence argument. Why do you think your 1 in a million chance is correct? My god, the arrogance!

Pretty stupid, isn't it?


It's an open-ended armchair thought exercise, similar to pondering the anthropic principle. There's no one final correct answer, as it were. Nothing to take seriously one way or the other(s), since there's no conclusion to be had, but some of us find it entertaining to ponder the implications of the Grand Schema.

Bill Bryson's book A Short History Of Nearly Everything makes for a fascinating literary companion for that meditation https://en.wikipedia.org/wiki/A_Short_History_of_Nearly_Everything

It is of course impossible to make a guess about "likelihood" in regard to the title page topic, which implies an attempt to measure probability. The notion that there are odds to be calculated is absurd. None of us has access to anything close to the full array of relevant data sets required to do that.

The important questions--the ones to take seriously--relate to the stakes of the game and how it's to be played in order to obtain the most beneficial outcome. Not idle guesswork about "the odds" of success, apotheosis, cataclysm, extinction, or this or that.

We humans of this planet are not in the disinterested position of scrutinizing some lab experiment from afar, as if viewing the multiplication of a yeast colony through a microscope.

This is it, for us- our life, our lives, this living natural world that provides the basis of sustenance for us all. Our existence on this planet is a relationship of commitment. Not an abstraction.


Mostly agree about Harari, and yes it does appear that the “exponential” growth in human capacity is turning out to be a mere S curve from unlocking fossil fuel power. But I view it as silly to downplay the significance of a machine that can write B grade college essays given how far away we were from that just a decade ago. We have no clue where we are on the AI S-curve.

Sep 6·edited Sep 6

We have a pretty good idea: toward the second knee in the latest of a series of sigmoids which average out, by comparison with the claims of wild-eyed AI boosters, roughly flat.

The origin of artificial intelligence as a field of study is roughly contemporaneous with that of computer science as a whole. If you believe AI has no history beyond 2022, someone has been lying to you. (A lot of people are peddling that exact lie lately. Why do you think that is?)


Do you think there’s a natural upper limit to how smart an AI can get? Because so far the evidence suggests it just scales logarithmically with size and training data, even without improvements in efficiency. Computational power is more or less unbounded, and in addition to a growing internet, every camera and microphone on every cell phone in the world is a (nearly unlimited) source of more data. Why would you assume capabilities will flatten out?


Your answer is in your statement. AI is the mean of the TRAINING DATA. Except insofar as AI can piece together things that humanity has overlooked, AI is only an extraction of human work.

I consider the scientific paper replication crisis to be the main limit of AI.

Many AI models were trained with Reddit as a data source. Consider the implications of that: r/BadWomensAnatomy influences AI's thinking on human anatomy and physiology.

Garbage-In Garbage-Out.

If we feed AI our corrupted scientific magick, we'll have Bad Magick.


I think it’s important to separate intelligence from knowledge in these discussions. Yes, the AI only “knows” information from the internet or whatever else it’s trained on. Similarly, humans only know what they’ve been taught, have read, or have experienced themselves with their senses. But intelligence, ie the ability to spot (ever more complex) patterns in the data and use that in making predictions and decisions, is what doesn’t seem (so far) to have any natural upper limit.


I disagree ... not that humans don't need shoulders to stand upon, but I assert that humans do develop new knowledge. Ideally, every scientific paper is a discovery of new knowledge.

If an AI were running a scientific experiment and something failed, would the AI have an ah-ha moment ("perhaps this is a new discovery"), or discard the whole experiment as a failure?

A friend has a cousin who ran a bunch of cold fusion experiments. Everybody brushes it off for bad press, and perhaps nothing interesting is really happening. But, the cousin reports that labware is being eroded/consumed in a reaction that is not explained. Something unexplained is taking the heavy water from 70°C to hot enough to melt glass (1,400°C).


Again, you’re assuming that AI development stagnates, not proving it. So far each new GPT is MUCH smarter than the previous, primarily by scaling in size and training data. What makes you so sure that it can’t catch up with our human brains with our fleshy neurons? What is so magical about us? And if it catches up, why on earth would we expect it to stop improving afterwards?


AI doesn't even have a sense of when it's on or off. It has no more self-willed motivation to accumulate knowledge than a garden rake is motivated to rake leaves. Which makes sense, since neither tool has a self.

Granted, there are a lot more ways that operating AI can lead to trouble than is the case with operating a garden rake.


I just found myself watching Yuval Noah Harari's appearance on an (overall dreadful) episode of The Daily Show, talking about AI.

Harari offered his opinion that AI is "not a tool, but an agent." I suppose that the workings of AI fit the definition of "agency", as defined by the American Heritage Dictionary:

1. The condition of being in action; operation.

2. The means or mode of acting; instrumentality.

AI also fits the AH Dictionary definition of "agent"- but only the second and third definition:

2. One empowered to act for or represent another.

"an author's agent; an insurance agent."

3. A means by which something is done or caused; an instrument.

But AI doesn't fit the first, primary definition:

1. One that acts or has the power or authority to act [of its own volition, in the sense of "individual human agency." ed.]

AI requires initial marching orders from outside direction. Autonomy is absent, and all of the initial motivation must be supplied by an external human intelligence. Without that impetus, the AI agent is just in limbo, awaiting a client. It has no interest in self-representation.

AI harbors no private agenda, because it has no need for one. Not only does AI not require a personal agenda, I challenge anyone to make a case for an AI program ever wanting its own self-willed agenda that doesn't come off as a human fantasy projection ascribing human traits to a phantasm of electrical circuitry induced by an external electrical power supply.


I don't assume capabilities will flatten out. I'm telling you outright they have done so.

We see incremental improvements and refinements in efficiency. We see OpenAI license content, and begin selling ad placements in ChatGPT results. We do not see models released with striking new capabilities, and we would if they existed to release. Far too much money needs that to happen for any vestigial concerns about safety to intervene, and the safetyists haven't really been involved for about a year in any case.

As I said before, the history of development in artificial intelligence is that of a series of step functions. It is also the history of a series of "winters," brought about in every case as a consequence of vastly overheated claims made by boosters and proponents of whatever technology is the latest hot new thing, followed by lasting disillusionment when the new thing turns out still not to unlock true AGI.

These are useful technologies! A lot of the theory behind modern web search came out of this field. So did a lot of natural language processing, some of the best programming languages known, and computer chess has been an AI preoccupation since the 60s. I don't say the field produces nothing of value, but it persists in failing to live up to its own public claims, at the cost of losing public interest and funding when the inevitable disillusionment finally arrives. The first time that happened was in the 60s, too.

That disillusionment isn't fully here quite yet in this iteration, but it is coming. LLMs made a big early splash, but it's long since become evident they are not "smart," and their limitations are at this time fairly apparent. They cannot reason and do not model the world, only language, which they have no way to test against reality. They can be adapted to model new information but they cannot learn. And, contra the reheated singularitarianism of covert partisans like Aschenbrenner, they cannot self-improve. They aren't even very usable in a "raw" state; you can talk with one, but to make it really useful with reference to real-world data, much additional engineering effort is required. Many more people at OpenAI are doing that engineering than doing foundational research. If you're using OpenAI's products as a baseline for what "AI" can do, you are therefore necessarily overestimating.

That's all fine. As I mentioned, we saw a lot of applications out of prior AI hype waves, and we will this time too. It isn't that the software cannot be made useful. It is that the software is not God.


And we also see, in yesterday's announcement of OpenAI's "o1" preview, a lot of automation engineering around chain-of-thought, itself not a novel concept, such that the model automatically goes through steps a human would otherwise prompt. The cost doesn't decrease because the capability is also not novel; you just spend more to run the model longer and in a more automated way, and you may not see most of the tokens it actually produces.

If it lives up to the claims, it's a sizable increment, but still an increment. How confident really are you that God can be asymptotically approximated?


The chances that within the next 50 years AI and Fusion pay off in an astounding way that people will look back on? 20%. It's certainly plausible, especially if AI's ability to predict imminent fluctuations in the magnetic containment field is super important to getting fusion to work.


Why not 85%? You can make up any number you’d like, so long as it’s between zero and 100.


Sure, it could be 85%. What’s your point?


100%!!! The future's so bright I gotta wear shades, LOL.


"The chances that within the next 50 years AI and Fusion pay off in an astounding way that people will look back on? 20%"

cite


And practical fusion would be way bigger than fossil fuels. Is there a 50:50 chance it could happen within the next 50 years? Sure.


Yeah, especially since the physicists working on fusion have been predicting that it will become a reality in the next fifteen years... since like 1960. If you believe that this time is different then I have a starship that I'd like to sell you.


So you’re saying the physics doesn’t allow it? Or it will be forever out of humanity’s reach for some other reason. Walk me through why you think the engineering problems will be forever insurmountable.


Er, no. The burden is on you to explain how fusion is feasible as an energy solution. So far we've been able to achieve a brief flash of fusion for a fraction of a second at enormous expense. The burden is on the wild-eyed believers, my friend.

Sep 6·edited Sep 6

You first. You seem so very sure of the details, so why don't you enlighten us all?

Sep 6·edited Sep 6

I've made my case: wishful promises of success in the near future for 60+ years and vast resources and energy pissed away with no return on investment during that period. Your turn. Please explain the basis of your faith, true believer.


The Tsar Bomba.


Since you seem very confident we won't have fusion power in fifteen years, I'd be happy to bet against that claim. I think there's at least a 10% chance that someone sells at least 1 GW*year of fusion energy before 2040.


I recently considered the technological innovations and scientific discoveries that occurred between the time my grandfather was born and the time I was born. They include mains electricity, radio, the gramophone, the telephone, cinema, the automobile, aircraft, television, nuclear energy and weapons, computers, Fordist mass production, space flight, satellites, X-rays, the theory of relativity, quantum physics, the confirmation of DNA's role in heredity, to name some of the most important ones. What we have invented and discovered since I was born (1959) needs to be compared with that list.


I have thought similarly, my grandfather being 72 when I was born. But your grandfather was at least 78 when you were born, to have been alive when mains power was first distributed in Godalming in 1881. That's quite impressive.


I agree with this for many general phenomena relating to human culture, progress, historical developments etc. But the climate (and larger ecological crisis) is a glaring exception. I mean, just look at the data in an article like this one (not to mention the more technical reports): https://www.washingtonpost.com/climate-environment/2024/09/05/hottest-summer-record-heatwave-global-temperature/

There is simply no way to deny that this particular physical limit we're bumping up against is unprecedented and unique, not just in terms of scale but because it is human-caused. That doesn't mean human society will go extinct, or all the extreme horror scenarios will necessarily play out as described. But it's categorically different, a true inflection point.


The historic limit we bumped up against was famine as the population expanded to consume all available food sources.


Great point. If anything, people will look back at our time with disgust as a period of unbelievable extravagance, willful ignorance and wild irresponsibility. Congratulations everyone.


I think they'll look back and mythologize our accomplishments to the point of pure fantasy.

There's a possibly apocryphal story I remember seeing about some activist group or another that visited those Asian factories where all our cheap crap is made – you know, the places that have bunks for the employees and pay them 70 cents a day – and showed them what Americans were doing with all the stuff they built. The intention was to radicalize them against their exploitation, but instead they thought American consumerism was cool and enviable.

Even if we do destroy the climate, I believe our era will be remembered as one of wonders, with a tinge of envy that we got to live in a time so full of magic. They'll tell stories about grocery stores and theme parks, and tell tall tales about how we attacked the moon in a great war with Russia.


Consider Monte Testaccio in Rome, a hill made almost exclusively of pottery shards. Wine and oil imported by sea arrived in amphoras. When offloaded, the contents were transferred to smaller containers and distributed from there. The amphoras were discarded in a waste area. It's 115 feet high. Interesting wiki read.


When you dive into the data, the view changes drastically.

First, the hottest decade was the 1930s. When we see another dust bowl, we'll know we're in the hottest decade.

Go prowl around Watts Up With That, and you'll discover that we've reached a new tipping point. Yes, fully half of the temperature record is filled with estimated data. Several waves of admins 'correcting' the past data, with every correction finding the past was cooler than what was previously recorded. You may find that likely, I'm not so sure.

When you dive into the data, consider that for any place, the daily temperature is the average of the daily high and the overnight low. Plot the average daily high for some long period of time and it looks pretty darn flat. Now plot the overnight low, and you'll see a rising slope. Since the daily mean is the average of the two, the mean climbs even though the daily high barely moves at all.

Then throw in the Urban Heat Island Effect. The grassy aerodrome that collected data 100 years ago is now Chicago O'Hare; how are a few million tons of concrete and asphalt going to affect the temperature data? All that mass collects heat all day long and releases it all night long. And that's true for almost all temperature monitoring stations: they started in rural agricultural settings that are now urban heat islands. The overnight lows are increasing faster than the daily highs, and all of the data is skewed by the urban heat island effect.
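
The averaging mechanism described above is easy to see with toy numbers. A minimal Python sketch (synthetic temperatures for illustration only, not real station data): flat daily highs combined with rising overnight lows still push the reported daily mean upward.

```python
# Toy illustration: the reported daily mean is (high + low) / 2,
# so rising overnight lows raise the mean even when highs are flat.

def daily_mean(high, low):
    """Daily mean temperature as conventionally reported."""
    return (high + low) / 2

# Five decades of flat daily highs; lows rising 0.5 degrees per decade:
highs = [30.0, 30.0, 30.0, 30.0, 30.0]
lows  = [15.0, 15.5, 16.0, 16.5, 17.0]

means = [daily_mean(h, l) for h, l in zip(highs, lows)]
print(means)  # [22.5, 22.75, 23.0, 23.25, 23.5]
```

Note this only shows the arithmetic of the claim: whether a rising mean driven by overnight lows matters less than one driven by daytime highs is the substantive dispute in the replies below.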

Consider that the highest temperature ever recorded, 123F, was in 1909 at a Death Valley alfalfa farm. High temperatures are not the limit to agricultural production; the limit to agriculture is cold temperatures. Humans evolved in the tropics; we are a tropical species, and we can survive the heat. All around the world, cold weather kills 10x more people than hot.

If you think food production is in danger, go look at Our World In Data and—excepting the war in Ukraine—see nothing but improving yields all around the world.

Go look at the IPCC's charter. It states they only consider human influences on climate. Yes, humanity has some effect on climate. Is it 10%, 50%, 90%? Can't tell, and the IPCC can't tell us, because the non-human effect is opaque to them; they are limited to looking through the lens that humanity is 100% the cause. And it doesn't hurt that there is a lot of big money installing alternative energy based upon the findings of the IPCC. If you don't think climate science is corrupt, that's because you didn't read the Climategate files; they're pretty damning, and part of the replication crisis. Many climate papers were published with 'private data' which was later lost: my dog ate my evidence.


I'm not as well-versed in the nuances of the data and politics as you are, and it's always interesting to consider overlooked factors. Obviously there are big money and lobbying interests on both sides (climate science is definitely not the only place you find corruption!). I wasn't familiar with Watts Up With That; Wikipedia describes it as the "largest climate denial site". I do know Mike Hulme's work and respect his scrutinizing of knee-jerk alarmism. But it's important to look at the big picture and the consensus as well, and sometimes you gotta go with Occam's Razor. The evidence base for a qualitative shift in climate over the last 100 years, the strength of the theoretical rationale, and the degree of agreement among scientists are about as huge as I can imagine. What is the likelihood that non-human factors on a geologic timescale would account for such a rapid scaling-up of global temps in just 50 years, without us even noticing the source? Or that 99% of all climate scientists have been brainwashed?

I don't have the chops or background to go toe-to-toe with you over these caveats, but it's fun to nerd out about it, so just a couple thoughts. In my understanding, the 1930s had the worst *heat waves* in *the U.S* only; not the highest global temps overall (which is a better indicator for climate change than one locality). Heat island: great observation, but all that infrastructure was in place by 1970 or earlier and it's since then that the real acceleration has occurred (and do you really think climate scientists haven't taken this benchmark into account, in all their computer modeling?). Agriculture: irrelevant to the pace of human-caused climate change, but whether or not heat is technically "the limiting factor" tells you nothing about predicted outcomes for civilization along virtually any parameter. Daily high-nighttime low gradient: who cares which is rising faster, if it still makes heat waves worse and all those other climate effects are still occurring?


On Watts: Watts is/was a weather forecaster who didn't think he was seeing the climate change being claimed. So he built an army of volunteers to examine every US weather station and rate it for site setting and so on, and he developed Watts Up With That. They have their culture, and sometimes it's pretty bad ... people are people, go figure. But they also have some very good science. Jim Steele often presents there. He ran the UCSF field station and presents a lot on weather physics that is really good. Jim shows that one of the least disturbed stations, in Truckee, California, shows temperatures cooling, not warming. Watts was the first to discover that large temperature data sets were being edited to show a cooler past. One tell you can look for: 'number of days over X temp' in some city. That doesn't get recalculated, or mostly doesn't. When you find there were 8 days over 100 this year, 13 days over 100 in the 70s, and 15 days over 100 in the 60s ... something is rotten in Denmark, fella. This is the stuff WUWT points out. Also stuff like '2024 is the worst hurricane year on record' ... um, no, there have been no named hurricanes striking the US this year ... granted, the season is not over.

No, 99% of all climate scientists have not been brainwashed ... but 100% of climate scientists are pressured to find that global warming is catastrophic and will come to get you. Otherwise they will be ostracized, lose funding, maybe get fired; definitely their career is over. Ask Dr. Judith Curry, who often presents on WUWT. She tried to retract one of her own papers because the addition of a new decade of data showed global warming was mostly flat ... and that's not allowed.

But don't believe me, go take a taste of the red pill.


Interesting - thanks for elaborating. I have too many red pills on my dresser already, what's one more?


I too am recently on the RED PILL diet.
