Why I’m not a rationalist (clean version)
"To be fair, you need a very high IQ to understand Eliezer. He is extremely subtle, and without a solid grasp of theoretical physics most of the concept handles will go over a persons head."
I recently did a podcast with Ancient Problemz.
Not only is he an amazing host, he's an incredible editor. The original podcast was over four hours long, as we spent the initial 30 minutes doing our bro bonding ritual, and at one point we got caught up in a 75-minute debate on the sexual market value of the green M&M.
But by and large our discussion was about…
The rationalist community
I assume most people on this website have some idea of what the rationalist community is; for those who don’t, you can think of them as a group of people who are really fond of Bayes' theorem, AI, and invoking the name of a Carthaginian god of child sacrifice. Your typical stuff.
If you compare the rationalist community to the median online community, they come out looking pretty good. Yet, in a certain sense, they mirror the red pill community.
They expose your mind to certain ideas, give you some genuine insights and a like-minded group of people who think about the world the same way you do — but ultimately it’s something that most people should dabble in before moving on.
To be clear, I’m not saying that one should completely distance oneself from the thoughts and ideas of rationalism; rather, something strange happens when you make rationalism a central part of your identity.
In this post I’ll go through some of my general observations — and in part 2 I’ll get more specific with experiences and people.
To preemptively address the “No True Scotsman” argument, I indeed recognize that there is a certain degree of heterogeneous thought within the community. Nevertheless, it's pretty obvious that rationalism tends to attract a certain type of personality — starting with the fact that the majority of them seem to be…
Intellectual Foodies
Personally, I find foodies quite irritating and very boring. It’s one thing to enjoy food, and another to make the enjoyment of food a core personality trait.
Many rationalists are foodies for the mind. Some of them are smart, but orbiting them is a ring of people who seem to have gathered nothing more than trivia knowledge.
Tyler Cowen is not technically part of the rationalist community, but he’s definitely adjacent to it, and I think he perfectly embodies the “knowledge foodie” archetype. For some reason, people assign him an unwarranted level of intelligence because he has this manic ability to switch back and forth between a bunch of unrelated subjects in conversation.
This is one of the things that I discussed with Problemz during our podcast. He made the hilarious connection to Die Hard 3 — a movie where Bruce Willis has to save New York by solving a bunch of riddles in between action scenes.

The movie treats the world as a sort of trivia game, constantly giving you little tidbits that you’re going to use later. And, in a strange way, that’s how a rationalist views the world.
That being said, there are several things missing from their worldview entirely
Specifically, exercise.
Now I don’t want to be mean (well, not in this post at least) — but as I continued to interact with the rationalist community, I found that many of them have a worldview that is quite, shall we say, disembodied.
I’ve noticed that certain domains of life are entirely missing — things like exercise, sports, and, to a lesser extent, traditional dating and relationships.
What these domains have in common is that you can’t really build a sophisticated model of how they work. You can read all the dating and fitness books, go to the seminars, and learn all of the theory — but it’s another thing entirely to apply that knowledge.
And it shows; when you look at some of the prominent rationalists, there’s a commonality in, ah, how they present themselves.
To the extent that they do think about maintaining their body or improving themselves physically, it’s almost always through a pharmacological lens. Things like medication, or psychedelics, or Ozempic, or that bacteria thing that was supposed to make brushing your teeth obsolete?
The point is that there are domains that can’t be readily quantified, and as such they tend to be systematically discarded in the rationalist paradigm.
On the other side of that coin, however, are the domains where theoretical models thrive, as they are comfortably detached from the constraints of the real world. As such, rationalist bloggers tend to be kissing cousins with…
Econ bloggers
For the same reason rationalists don’t enjoy dating apps or hill sprints, they love a certain category of economics blogger.
This is because many of them have “spreadsheet brain” on steroids; the idea that anything can be solved if we simply do the proper feature extraction, data cleansing, and gradient boosting. It’s the idea that, with enough Python frameworks, tensor operations, and first-principles thinking, you can find the optimal solution for everything — right from the comfort of your own desk.
I always found it strange that there was such an adjacency between the rationalist community and economics bloggers, especially the dorks from George Mason University. But it starts to make sense when you realize how heavily concentrated they are in certain parts of the country, and in certain industries.
Tech and economics are somewhat intertwined in their paradigms. Both are wildly overconfident in their ability to venture into other fields and solve problems that have been around for thousands of years.
Moreover, the underlying value system for rationalists, economists, and tech people is that of…
Utilitarianism
I confess, for the longest time utilitarianism was the only belief system that made any sort of sense to me – though, ironically, the more I became exposed to the rationalist (and EA) community, the less utilitarian I became.
First there were the asinine attempts to quantify certain values and properties of conscious existence; for example, trying to figure out how many bugs are worth a human life, or how many shrimp saved are worth a bed net in Africa. Ultimately these exercises were trolley problems on crack, with so many implicit assumptions baked in that any actual implementation became unworkable.
Additionally, these sorts of utilitarian calculations were bad at modelling the worldview of people outside the community. For example, many of the tech people in the community simply have a hard time understanding why the average member of the public doesn't believe that society's problems can be solved by an improved user interface and better reinforcement learning algorithms.
There is a god, and his name is trade-off — and for a utilitarian, it's quite easy to make the trade-offs if you simply do not account for all of the costs.
Much like economists, they try to hand-wave away this failure mode, saying that things will improve “on average” and that everything will work out “in the long run.” I'm sure the people in Auschwitz were incredibly happy to hear how peaceful the 1900s were on average, and how much things improved in the long run.
Beyond this, many of the constructions of rationalist arguments are overly reliant on…
Hypotheticals
I think Scott Alexander’s essay on drowning children is the best recent example of hypothetical navel-gazing gone too far. In that essay, he essentially attempts to defend the idea of spending money in faraway places, using a series of increasingly ludicrous scenarios.
Many hypotheticals are like this: trolley problems, drowning children, repugnant conclusions, etc. After a certain point, a lot of the pontifications become so far removed from any practical considerations that any lessons learned become meaningless for the real world.
That being said, Scott is one of the most consistently high-quality thinkers in the community; the bigger problem is his multitude of orbiters who attempt to sound really smart by using…
Unnecessarily Sophisticated Language
In the fitness world, one of the ways that grifters try to sell you on their products is to use complicated terminology. They’ll tell you how a certain workout protocol “activates both sarcoplasmic and myofibrillar hypertrophy using a daily undulating periodization rep scheme”.
When they try to sell you supplements, they’ll hide the ingredients under a “proprietary blend.” This allows the person selling the supplements to hide the exact ratios of the ingredients. In practice, this means they can get away with adding a bunch of fillers in the supplement that have nothing to do with the active ingredient.
Similarly, many rationalists have a fondness for proprietary language, if only to hide the sheer amount of filler. In many cases, rationalist blogging is nothing more than using sophisticated language to make a point that is either intuitive or mundane.
Many of them seem to be constitutionally incapable of saying things in a straightforward manner.
It’s not picking your battles, it’s Kolmogorov complicity.
It’s not an arbitrary difference, it’s a Schelling point.
You’re not stuck between a rock and a hard place, you’re in an inadequate equilibrium under a Molochian system.
You’re not challenging your assumptions, you're updating your priors.
You're not copying anyone, you're under a Girardian mimetic paradigm.
It’s not reading the vibes, it’s a Straussian analysis.
Everything is like this with the rationalists. Everything has its own acronym, its own “concept handle”. For some reason they’re completely averse to borrowing concepts from other domains and fields, especially the humanities; rather, they need to…
Reinvent the wheel
There’s this hilarious video where a doctor wrote a paper naming a particular mathematical technique after herself, calling it “Tai’s model”.
What’s funny is that the technique she claimed to have invented is just the trapezoid rule, which has been a standard part of mathematics for literally thousands of years.
It would be like if I invented the “Ghostwind rule” — only for it to be long division.
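For anyone who has forgotten it, the rule she rediscovered is just the textbook approximation for the area under a sampled curve (standard numerical integration, not anything specific to her paper):

$$\text{area} \approx \sum_{i=1}^{n} \frac{(x_i - x_{i-1})\,(y_{i-1} + y_i)}{2}$$

In other words: connect consecutive data points with straight lines and add up the little trapezoids underneath.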
Tai’s model was a case of a person with a non-STEM background accidentally rediscovering something that is quite well known in STEM. But with the rationalist community, it works in the opposite direction.
For example, Tyler Cowen helped start a new branch of studies called “progress studies” – which is supposedly meant to gain an appreciation of how society changes over time. Now if that sounds familiar to you, that's because it’s the entire fucking point of history.
Another amusing example is this short essay from Scott Alexander, where he tries to explain his sophisticated worldview when it comes to morality and in-group dynamics. Ultimately he ends up reinventing virtue ethics, which has literally been around since Aristotle.
My favorite instantiation of this, however, is the fact that there’s always some person in the rationalist community trying to reinvent dating apps, only for it to fail spectacularly for the same reason that every other dating app has failed so far.
And in recent times, many rationalists have to contend with…
The Great Irony
Many rationalists are doomers when it comes to the future of AI agents. Scott Alexander himself co-wrote a piece called “AI 2027”. His most likely scenario in that post is that nanobots kill everyone by the year 2030.
In many ways, these AI agents are the ultimate rationalists. The entire purpose of these neural networks is to take in data, iteratively update their parameters as time goes on, and come up with a general model of the world in many different domains.
Many of them recognize that AI can still have major blind spots if it’s misaligned with human values — and yet, in my experience, a significant portion of the community doesn’t sufficiently criticize itself for having the same basic flaw in reasoning.
This is one of the ironies of rationalism. When you're smart enough, you get really good at rationalizing things – which is the exact opposite of being rational.
But of course, maybe I’m not qualified to talk about the rationalist community
The average IQ of a rationalist is, after all, no less than 130.
As low human capital, perhaps my lack of a PhD disqualifies me from having any sort of opinion on the overall community.
In which case, perhaps it’s necessary for me to drill down and focus on individual members — something I’ll be doing in part two.
For any 'movement', there tends to be a core of cool, smart people that form a 'community'. Then, over time, due to market forces, network effects, and evaporative cooling, the people who are left are not as interesting, too ideologically fixed, more neurotic than average—and, yes, tend to be losers.
https://x.com/CovfefeAnon/status/1935418815363358742
Rationalists suffer from a kind of 'productive insulation' because they understand systems far better than people, whereas the opposite is generally the case in the larger general population. Makes for easy access to a kind of effective arbitrage with a high, but brutal ceiling.
That said, this problem is quite easily corrected, at least to a certain degree. Take them out of Fairfax or Berkeley, etc. and force them to live by only the fruit of their own wits in Bozeman or Indianapolis, or even Chengdu or Kolkata, etc. for three years. They would be forced to either drop their pretensions or suffer an inconceivable amount of cognitive dissonance.
Rationalists are the best counterexample to the assumption that IQ is all that matters.
I was about to say they're too clever for their own good, but true intelligence begins when you realize you've been indulging in intellectual self-gratification.