Julia Galef's The Scout Mindset is not for me, in ways both big and small. To start with, it should be called just Scout Mindset, not The Scout Mindset. No, I will not be justifying that statement with an argument. Beyond that injustice, it's an engaging précis on some important topics by a thoughtful author, and a book that was clearly a labor of love. And at times I couldn't stand reading it.
Galef’s book, her first, is part of a burgeoning genre of books on how to think more rationally. The text, squarely designed for a popular audience, is a primer on how to make better choices and think more clearly despite the fact that we as a species have a remarkable number of ways to fail at both. As a string of bestsellers on neuroscience and psychology have argued over the past 20 years or so, we are a self-deluding species, and the ways that we lie to ourselves cause us unnecessary hardship. The trick is whether learning about these cognitive biases can really help free us from them. Galef is convinced that we can think better, if we want to, and presents a set of thought experiments and tools to help the reader in this regard. Embracing such tools helps one to think like a scout - not all the time, but perhaps when it counts.
The titular “Scout Mindset” exists in contrast to “Soldier Mindset.” Someone who thinks like a scout is an explorer, willing to truly scout out the terrain and see the world for what it is. (Mostly, it turns out, through applying concepts from elementary probability.) The soldier, in contrast, is stuck in defensive thinking, determined never to cede territory, intent on defending what they believe to be true in the face of threats, even when it would be more to their advantage to let old beliefs go. If those sound like somewhat awkward analogs, I’m with you, but they’re also a useful enough shorthand for different ways of thinking. Those fundamental terms do disappear from the text for a strangely long stretch, though, given the title. There are times that I felt that the central metaphor was perhaps pushed onto Galef by her publishing company; publishers love digestible metaphors, creating a good guy/bad guy dichotomy never hurts sales, and, as I said, Galef develops her metaphor and then swiftly sets it aside. But that’s speculation, and not very responsible speculation.
Once that table setting is dispensed with, the meat of the book is a variety of thought experiments and mind games, many of them genuinely fun. Much of the text is devoted to laying out those basic elements of probability I mentioned and some proto-game theory, utilizing real-world scenarios that sketch out how bad thinking can lead to bad outcomes. These dips into probabilistic thinking and optimizing decisions are all well-drawn and refreshingly clear. Thanks to her patience and talent for aphoristic thinking, Galef's writing is a model of measured clarity, and the text functions in many ways as a book-length invitation to learn more about thinking. I’m very happy to say upfront that the book is resolutely competent, never falling flat on its face or risking embarrassment by getting out over its skis. Whether this is an entirely salutary condition for a book is a question I will leave to others.
One thing that struck me was what the book is not, at least not explicitly: a part of the “rationalist” movement, the loose constellation of bloggers and thinkers that coalesced around the blogs LessWrong and Overcoming Bias, and which is now probably exemplified by Scott Alexander’s Astral Codex Ten. Several versions of “rational/rationality” appear in the index, but unless I missed something these do not refer to the rationalist movement. Alexander appears twice in the text, but is only referred to as “psychiatrist” and “blogger” rather than as an emissary from the rationalist worldview. (Amusingly, Galef approvingly cites Alexander for changing his mind in favor of the efficacy of pre-K; in other words, for changing his mind from right to wrong.) Tellingly, Galef nominates as a healthy intellectual community not the broader rationalism movement but ChangeAView (now CeaseFire), which is merely rationalism-adjacent and which she again does not locate within that context.
Also conspicuous in its absence from the index is “Yudkowsky, Eliezer,” the man considered the originator of the modern rationalist movement by broad agreement and, in my opinion, not an ideal representative for a movement looking to popularize its ideas. I say that as someone who thinks that the rationalist movement gets many things right and is an overall positive development for our intellectual culture. The trouble is that Yudkowsky is frequently emblematic of a kind of insularity and intellectual arrogance that I associate with that culture as a whole, and this cuts deeply against Galef’s project, which is so clearly designed to welcome newcomers into the fold (of more rational thinking generally, that is). Perhaps I’m reading too much into what’s not there, but it certainly seems that Galef is taking steps not to be associated with that crew. Whether the distancing from the rationalist movement is intended or not, The Scout Mindset seems like a great delivery vehicle for those ideas, presenting the best elements of the tradition without any of the smarter-than-thou baggage.
So what's my complaint? I find the lessons clear and the advice well-taken, and as someone who was already fond of Galef’s work, the book’s content only increased my admiration. She’s the right messenger for a good message at an opportune time. It's the execution of all this that I find imperfect - sometimes it’s just a little odd, sometimes exasperating.
Consider the brief section (little more than a page) titled “Reasoning as Defensive Combat.” It’s in this passage that Galef establishes the Soldier Mindset concept. Galef thinks that most people have a martial orientation towards reasoning, hence Soldier Mindset. To illustrate this, she sets up an associative construct that will probably be familiar to readers of nonfiction, seeking to demonstrate that we use martial terms (that is, terms of war and combat) when talking about reasoning, which ties in nicely with her Soldier Mindset bit. She then proceeds to… not do that. Really, not at all. Observe.
We talk about our beliefs as if they’re military positions, or even fortresses, built to resist attack. Beliefs can be deep-rooted, well-grounded, built on fact, and backed up by arguments. They rest on solid foundations. We might hold a firm conviction or a strong opinion, be secure in our beliefs or have an unshakeable faith in something.
I would hope this would be obvious - none of these are military metaphors, and there are no martial terms here. I confess I find the contrast between the first sentence and the examples quite baffling. These are indeed terms that are often used in a military context, but they’re used as metaphors in the military space, rather than being military terms that are used metaphorically in other contexts. The metaphorical arrow points in the opposite direction from what Galef seems to think, so to speak. If, indeed, they’re metaphors at all. “Solid foundations” is metaphorical language (but has nothing to do with combat or soldiers), so is “deep-rooted” (ditto). “Backed up” is sort of metaphorical in a vestigial way. “Built on fact” is a bridge too far, for me; yes, you build houses or bridges, but you also just build stuff intellectually. And, again, not a shred of martial language involved. “Firm,” “unshakeable,” “strong,” and “secure” are just words, and none of them are military terms. So what is the relationship between the first quoted sentence and the rest?
She continues, however!
Arguments are either forms of attack or forms of defense. If we’re not careful, someone might poke holes in our logic or shoot down our ideas. We might encounter a knock-down argument against something we believe. Our positions might get challenged, destroyed, undermined, or weakened.
Here we’re on better footing. Not great footing, but better. “Shoot down” qualifies as a martial metaphor. “Knock-down” I'll grant. “Poke holes” is a mighty stretch but one I'm willing to make in the spirit of charity. But challenged, destroyed, undermined, and weakened are all terms that are so general and context-dependent that it's just hard to see what we're accomplishing here. I'm afraid it gets worse.
And if we do change our minds? That’s surrender. If a fact is inescapable, we might admit, grant, or allow it, as if we’re letting it inside our walls. If we realize our position is indefensible, we might abandon it, give it up, or concede a point, as if we’re ceding ground in a battle.
Here I just have to say… what on earth? Galef is clearly intelligent and a strong writer, and I have to imagine that Penguin employs excellent editors. And yet here I have terms like “admit,” “grant,” and “allow” proffered as combat metaphors. “Abandon” is a martial term? Really? And if you think I’m being uncharitable, I will refer you again to the name of the section, “Reasoning as Defensive Combat,” and ask you to consider that in a footnote, she explicitly says these are words with “connection to the defensive combat metaphor.” To which I say, what metaphor? You have utterly failed to establish such a metaphor.
There's a temptation for all writers to get too attached to a metaphor. This is hard in the best of times - it's certainly hard for me - but it becomes more complex within an editing process, as you have to fight to keep what you want even as the context in which it initially made sense gets altered. Tricky thing. But this is a professionally published book and I bought my copy, so I ask for a certain level of coherence in its analysis. It's a small issue, obviously, this weird failed written construct. But I harp on it because there's a strange sense throughout that the text itself is unfinished, as opposed to its argument. As I said, the propositional content here is always credible and well-presented. But the book as a book, the form rather than the substance, the vehicle through which the arguments are delivered, is not, and it feels like a shame.
Let's take another petty example before we get to the big issue.
Here I must apologize, as Scout Mindset's failing in this regard is a fairly common one. I believe it should be added to the penal code: Unnecessary Use of a Venn Diagram. Please, tell me: what on earth is gained by using such a diagram here? Is the reader really going to be confused by the concept of a set that includes a subset which has a characteristic that those outside of that subset don’t share? I don’t want to pick on Galef here, as I feel like I see Venn diagrams used for no reason frequently now. Venn diagrams are most useful when there are overlaps between multiple circles which create more spaces than there are circles (and thus sets). They permit quick visualization of inclusion and exclusion when such relationships might otherwise be hard to grasp. Here, we might as well just have two circles that never overlap, labeled “Coping Strategies That Require Self-Deception” and “Coping Strategies that Don’t Require Self-Deception.” That’s also a valid Venn diagram, and one which adds about as much interpretive content as this one. Even better, couldn't we just have a two-column list of strategies that do and don't require self-deception? What's that? I’m spending way too much time on this? OK, sorry.
The point is… I think this book needed an editing team with a firmer hand, one more willing to treat editing as an adversarial process. And the irony is that such a process would have been perfectly in keeping with the kind of rigor that The Scout Mindset seeks to inspire.
Which gets to the big problem, for me, and why I say it's not for me rather than saying it's not a good book: tone. I use this term with considerable misgivings. For ten years I scolded college freshmen for using it in their papers, as it can so often function as a vague substitute for the precise feelings that I was trying to get them to articulate clearly. But here it’s the best word I can think of. The Scout Mindset's tone is that of a patient sixth-grade geography teacher, trying to guide her young charges calmly and gently and landing on an attitude that's perhaps 5% too chipper and 10% too condescending. It happens that I am not a sixth grader and I like being condescended to even less than I like it when people are chipper. All argumentative non-fiction is to some extent didactic; it's a question of degree. For me, the calibration was off.
And so I can’t think of a better overarching judgment here than “not for me.” Not for me because I'm not someone who wants to be talked to that way, not for me because I am not the perfect beginner this book seems to imagine as its audience, not for me because I like to be challenged by authors much more than I like to be gently shepherded by them. But - and this is a big but - many people are not like me. Many people on the internet are looking for exactly that: someone to kindly and patiently guide them out of ignorance, and I don't mistake that for an unworthy goal. Quite the opposite, as a matter of fact. There are likely more of those kinds of people than there are of people like me. But all I can do is review from my own perspective, and for me, though the book is fast-reading and frequently entertaining, I found the nearly-300 pages difficult to get through. Galef is just too resolutely enthusiastic for this cynical soul.
But the fun moments are fun. My personal favorite is when she watches Star Trek movies and episodes and tracks Spock's certainty relative to his actual predictive outcomes. (He sucks at predictions, for the record.) She does this in service of explaining the concept of calibrating one’s own ability to make predictions at a given confidence level, and it's a lovely illustration of useful concepts. I also quite liked a brief but sharp consideration of the ever-bubbling frequentist vs. Bayesian divide, a genuinely complicated topic that she handles with poise. Galef has the goods here, in general. The trouble for me is, for one, that I already know much of this stuff. (But I’m an obsessive weirdo who took a dozen stats and methods courses in grad school.) For another, I don’t like feeling like I’m being led gently to knowledge by a benevolent teacher but prefer to be thrown into the deep end. (But many people are not like me.) So it’s tough to judge. There are some stylistic elements I’m happy to straightforwardly say should have been fixed in editing. (“A thought experiment is a peek into the counterfactual world.” Uh, yeah.) But I can’t remember a book in recent memory that I was happier simply saying, it’s not for me. The Scout Mindset is a good book. It’s just not for me.
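The Spock exercise, incidentally, reduces to a bit of arithmetic anyone can replicate: group your predictions by the confidence you stated, then compare each stated confidence against the fraction of those predictions that actually came true. Here's a minimal sketch in Python; the `spock_style` data is invented for illustration, not drawn from the book's actual tally.

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (stated_confidence, came_true) pairs.

    Returns a dict mapping each stated confidence level to the
    observed hit rate among predictions made at that level. A
    well-calibrated forecaster's 80% claims come true ~80% of the time.
    """
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        buckets[confidence].append(came_true)
    return {c: sum(outcomes) / len(outcomes)
            for c, outcomes in sorted(buckets.items())}

# Invented example: supremely confident, rarely right.
spock_style = [(0.95, False), (0.95, False), (0.95, True), (0.95, False)]
print(calibration_report(spock_style))  # {0.95: 0.25}
```

The gap between the stated confidence (0.95) and the observed hit rate (0.25) is exactly the mismatch the book dramatizes with Spock.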
The larger question with The Scout Mindset, though, is the one that haunts the entire rationalism movement: is it really the case that we can think our way out of bad thinking? Or are the Hindu sages correct in believing that the rational mind itself is the trap, itself is maya, from which one can only be liberated by letting go? I appreciate Galef’s set of constructive choices that one can make to be more rational at the end of the book. But we live in a world where many millions of people genuinely believe that (inaccurate) estimates of where various heavenly bodies were in relation to each other at the time of their birth influence the events of their lives. Against such irrationalism, thought experiments seem like profoundly impotent tools. Like I said before, books on irrationalism and how to avoid it have become a cottage industry, and yet I can’t say I’ve observed any corresponding growth in ambient rationalism. And even the best of us fall into irrationalism. Just the other day I threw a coin in a wishing well myself. But then, Galef admits upfront that we’ll frequently fail to be rational even with all of the tools at our disposal, which is a mature and welcome qualification. The only trouble is that while such admissions may be a bit of sober wisdom, they can also make the whole genre seem like a bait and switch.
The best thing I can say for The Scout Mindset is that, at its most confident and charming points, I almost believe that we really can slay our irrational demons and engage with the world from the standpoint of greater objectivity, that we can achieve genuine reason, if only for a while. I almost believe that. But not quite.
After all. Even Spock was half-human.