There Is No Altruism
The concept error that manufactured its own mystery
I. The Word Has a Birthday
The word "altruism" was invented by Auguste Comte in the 1850s. He derived it from the Italian altrui (others) and constructed it as the structural opposite of "egoism" within his Religion of Humanity. His ontology was materialist phrenology: the brain contained regions for "personality" (egoistic, posterior) and "sociability" (sympathetic, anterior). Altruism was the triumph of the front brain over the back brain.
Before Comte, Western moral discourse didn't have the word. It didn't need it. The operative concept was caritas — charity in the original sense — embedded in teleological frameworks where helping others was constitutive of your own flourishing. You helped because that was part of what it meant to live well. No sacrifice required. No mystery to explain.
Comte installed a new ontology: atomistic individuals with separable interests, where helping is zero-sum transfer from A to B. Once you accept this frame, "altruism" becomes simultaneously necessary (how will isolated atoms cooperate?) and mysterious (why would they?). The puzzle is manufactured by the definition.
This matters because the entire intellectual infrastructure built on "altruism" — from evolutionary biology's kin selection debates to Peter Singer's drowning child to Effective Altruism — is downstream of a concept coined 170 years ago by a man who believed in phrenology. The foundation is a neologism with a specific (false) ontological presupposition. Remove the presupposition and the puzzle vanishes.
II. Four Billion People Who Never Had This Problem
The "mystery of altruism" is provincial. Most human civilizations never encountered it, because they never installed the ontology that creates it.
Ubuntu: "A person is a person through other persons." If the self is constituted by its relationships, there is no self/other boundary across which "sacrifice" could occur. Helping your community is self-maintenance. The puzzle doesn't arise.
Confucian role ethics: The character ren (仁) is routinely mistranslated as "benevolence" or "altruism." Roger Ames and Henry Rosemont argue it is better understood as something like "co-humanity": the quality of the relationship, not a transfer between separate entities. "If one's family is oneself, feeding one's child is not altruism — it is self-maintenance."
Buddhist anatta: The no-self doctrine. "Why would Self A sacrifice for Self B?" Buddhism's answer: "There is no Self A or Self B." The Bodhisattva acts from the wisdom of non-duality, not from "altruism." Helping is as natural and unmysterious as a hand removing a splinter from a foot.
Marcel Mauss, The Gift (1925): In relational societies, the "free gift" — pure altruism with no expectation of return — is an "ideological impossibility." Gift circulation is social metabolism. Refusing to give is not "selfishness." It is declaring war — severing the relationships that constitute you.
These aren't exotic philosophies to be admired from a distance. They are existence proofs. Entire civilizations — billions of people across millennia — functioned without the "altruism puzzle" because they never installed the ontology that creates it. The puzzle is not universal. It is a local artifact of post-Enlightenment European individualism, given a name in 1851.
III. Biology Tried to Solve a Problem That Doesn't Exist
Evolutionary biology spent decades trying to explain altruism. The attempts are revealing — not for what they found, but for what they assumed.
Kin selection (Hamilton): Organisms help relatives because they share genes. This "explains" altruism by reducing it to disguised selfishness at the gene level. The gene is selfish; the organism appears altruistic. But as David Stove pointed out, using human moral terms like "selfish" and "altruistic" for chemical replication is a massive category error — confusing cause with motive.
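For reference, the condition at the core of kin selection is Hamilton's rule, stated here in its textbook form:

```latex
% Hamilton's rule: a gene for helping behavior is favored by selection when
r\,b > c
% r : coefficient of relatedness between actor and recipient
% b : fitness benefit to the recipient
% c : fitness cost to the actor
```

Note what the inequality ranges over: gene frequencies and fitness effects, not motives. Nothing in it licenses either "selfish" or "altruistic."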
Reciprocal altruism (Trivers): Organisms help non-relatives because they expect reciprocation. This "explains" altruism by converting it to delayed exchange. Already closer to the truth — but still trapped in the frame where the mystery needs explaining.
Multilevel selection (Wilson & Sober, Unto Others): Selection operates at multiple levels simultaneously. Groups with cooperating members outcompete groups without. And here something interesting happens: Wilson and Sober show that as selection concentrates at the group level, "the self-sacrificial component of altruism disappears." They don't just explain altruism differently. They dissolve it. At the group level, there is no sacrifice — there is system function.
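The usual way to make "multiple levels simultaneously" precise is the multilevel Price equation (the generic formalism rather than Wilson and Sober's own presentation), which splits the change in a cooperative trait into a between-group and a within-group term:

```latex
% Multilevel Price equation (ignoring transmission bias, assuming equal group sizes):
\bar{w}\,\Delta\bar{z}
  = \underbrace{\operatorname{Cov}(w_k,\, z_k)}_{\text{between groups}}
  \; + \;
  \underbrace{\operatorname{E}\big[\operatorname{Cov}(w_{jk},\, z_{jk})\big]}_{\text{within groups}}
% z_{jk}: cooperativeness of individual j in group k;  w_{jk}: its fitness;
% w_k, z_k: group means;  bars: population means.
```

Cooperation can be losing inside every group (a negative within-group term) and still spread in the population, because cooperative groups out-produce the rest (a positive between-group term). Described at that level, there is no sacrifice left to explain, only differential group output.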
Superorganism theory (Hölldobler & E.O. Wilson): The colony is the individual. The worker is soma. A worker bee stinging an intruder is not "altruistic" — it is an immune system response. Applying the concept of altruism to worker castes is, in their words, "a category error derived from anthropocentric individualism."
The biological debate converges on the same point the philosophical and anthropological evidence already established: altruism is an artifact of the wrong unit of analysis. Start from the individual and helping is mysterious. Start from the system and helping is maintenance. The "mystery" was manufactured by the choice of starting point.
IV. What "Altruism" Actually Is
Under any ontology where the individual is not the sole unit of analysis — which is most ontologies humans have ever held — the altruism/selfishness binary dissolves into a functional taxonomy:
Maintenance. Sustaining the system you are part of. Raising children. Contributing to community. Doing your work competently. Maintaining infrastructure. Transmitting knowledge. These are not "selfless" — they are what functioning nodes in a functioning system do. A liver cell processing toxins is not "altruistic toward the organism." It is doing its job. The evolved emotional drives that produce maintenance behavior — care for family, loyalty to community, pride in craft — are the system's coordination signals working correctly. They are not "biases to override."
Parasitism. Extracting from the system without maintaining it. Free-riding on commons. Rent-seeking. Cost externalization. A cancer cell is not "evil" — it is a cell that stopped maintaining the organism and started extracting. The mechanism, not the morality, is what matters.
Displacement. Redirecting maintenance energy away from the system you are part of toward a system you have no connection to, no mechanism knowledge of, and no feedback from. This is the Effective Altruism move: take the maintenance energy that would naturally build local social, institutional, and human capital, and redirect it to distant interventions in systems you don't understand. The local capital stocks deplete. The distant intervention may or may not work. Nobody measures the net.
These three categories replace the altruism/selfishness binary with something that actually carves reality at the joints. Maintenance is not selfless. Parasitism is not "selfish choice." Displacement is not noble sacrifice. They are functional descriptions of what energy is doing in a system.
V. Sacrifice Is Exchange
"He sacrificed for his children." What does this actually mean?
The parent works a second job, forgoes career ambitions, sleeps less, spends money on kids instead of himself. The cost is real. He really does suffer. But notice what is always true in ordinary usage of "sacrifice":
- It is local — his children, not strangers
- It is relational — he IS the father; the relationship constitutes his identity
- It is system maintenance — investing in the future of his telic system
- It is driven by correctly calibrated evolved signals — parental care
"Sacrifice" in ordinary language means: accepted a visible, concrete cost for a benefit that accrues to something you are part of. The cost is legible (lost sleep, forgone career). The return is illegible (children's future, family continuity). This is investment with illegible returns. "Sacrifice" is the word we use when the ROI is invisible or goes to the system rather than the node.
But there is a deeper problem. Sacrifice — in the sense of "giving up something you'd rather keep" — is incoherent for voluntary action. If the giver wants to give, then giving provides something they value more than what they gave up. Meaning. Identity. Status. Warm glow. Community standing. These are real psychological goods. The "sacrifice" is the purchase price.
This is not cynicism. The psychological goods are real and often correctly calibrated — the parent's drive to invest in children encodes genuine system-maintenance priorities. But it means that voluntary "sacrifice" is always exchange: resources for meaning. The sacrifice frame obscures what is being purchased.
Effective Altruism exploits this confusion. It takes the word "sacrifice" — which in ordinary language always describes relational maintenance (local, embodied, with feedback) — and applies it to donating money to strangers in systems you will never see. The emotional weight of "sacrifice for his children" gets transferred onto "donate to GiveDirectly." Same word. Completely different mechanism.
The etymology confirms this. Sacrifice comes from the Latin sacrificium, from sacer (sacred) and facere (to make): "to make sacred." It was originally a transaction with the divine — giving something valuable to maintain a relationship with the sacred. Not selfless loss. Transactional maintenance of the most important relationship the person had.
VI. The Drowning Child Is a Card Trick
Peter Singer's famous thought experiment: if you would ruin your expensive suit to save a drowning child in front of you, why wouldn't you donate the suit's cost to save a distant child?
The argument works by stripping away every feature that distinguishes local from distant action, then declaring the distinction morally irrelevant. But the stripped features are not morally irrelevant — they are epistemically load-bearing.
When you save the drowning child, you have:
- Mechanism knowledge — you can see the child, the water, your own ability to help
- Feedback — you will know immediately whether it worked
- No risk of making things worse — pulling a child from a pond doesn't destabilize local economies or fund warlords
- Certainty of effect — your action is deterministic, not probabilistic
When you donate to save a distant child, you have none of these. You don't know the mechanism. You can't verify the outcome. Your money enters a bureaucratic chain where it may fund administration, create dependency, undercut local production, or strengthen the regime that caused the problem. The expected value calculation that Singer relies on ("even if uncertain, the magnitude justifies it") is precisely the reasoning pattern that produced Sam Bankman-Fried.
As Leif Wenar formalized in "Poverty is No Pond": the drowning child analogy hides moral risk. The rescuer at the pond faces no risk of making things worse. The distant donor faces this risk routinely. Singer treats distance as a psychological variable (proximity bias). It is an epistemic variable — the further away, the less you know, the less feedback you have, the lower your expected return on intervention.
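A toy calculation makes the structural point visible. Every number below is invented for illustration (this is not an estimate of any real charity); the point is only that the pond case has no negative branch, the distant case does, and a bare expected value conceals the difference:

```python
# Toy illustration only: probabilities and payoffs are invented, not estimates
# of any real intervention. Payoffs are in arbitrary "benefit" units.

def expected_value(outcomes):
    """Probability-weighted sum over (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Pond rescue: you see the mechanism, act, and observe the result.
pond_rescue = [(1.0, +1.0)]            # certain, bounded, no downside branch

# Distant donation: a distribution that includes branches where the money
# does nothing or does harm (overhead, dependency, undercut local producers).
distant_donation = [
    (0.5, +1.5),   # works roughly as advertised
    (0.3,  0.0),   # absorbed, no net effect
    (0.2, -1.0),   # net harm via second-order effects
]

print(expected_value(pond_rescue))       # 1.0
print(expected_value(distant_donation))  # 0.55: still positive on average,
                                         # but the mean hides the harm branch
```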
The drowning child is a card trick. It asks you to agree to a principle in a context where it obviously works (local, visible, deterministic), then applies the same principle in a context where it obviously doesn't (distant, opaque, probabilistic). The sleight of hand is in the move from one to the other.
VII. The GiveDirectly Paradox
The strongest empirical case for distant giving is GiveDirectly's unconditional cash transfers. Egger et al. (2022): $10 million injected into rural Kenya, $1,000 per household, fiscal multiplier of 2.5x, positive spillovers, negligible inflation.
Suppose it works perfectly. What happened?
Recipients used cash to buy goods, start businesses, hire labor. The multiplier is commerce. The cash transfer activated market transactions that weren't happening because the market was capital-constrained. It is a liquidity injection into a functioning but underserved economy.
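The arithmetic behind a multiplier of that size is the familiar local re-spending series (a textbook simplification, not Egger et al.'s estimation method; the 2.5 figure is theirs, the decomposition is illustrative):

```latex
% Simple local-spending multiplier:
\text{total activity} \;=\; T + mT + m^2T + \dots \;=\; \frac{T}{1-m},
\qquad \frac{1}{1-m} = 2.5 \;\Rightarrow\; m = 0.6
% T: the transfer;  m: share of each dollar re-spent with local producers.
```

Every term after the first is an ordinary market transaction. The "multiplier" is the geometric series of local commerce that the initial liquidity sets off.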
If "efficient altruism" works, it works because it is commerce — just commerce with extra steps and without the feedback mechanisms that actual commerce provides. Price signals tell you what is needed. Profit and loss tells you whether value is being created. Competition forces efficiency. Ongoing trade builds the social and institutional capital that one-shot transfers do not.
The paradox:
- If it works → it works because it is commerce → the altruism frame is unnecessary
- If it doesn't work → the altruism failed
- There is no scenario where "altruism" is the correct description of the mechanism
Commerce is strictly superior to cash transfers because it includes the feedback architecture. Cash transfers are an open loop — inject money, hope for the best, measure a few variables after two years. Commerce is a closed loop — continuous price signals, continuous adaptation, continuous accountability.
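A minimal sketch of that open-loop/closed-loop distinction, with a made-up "leaky" state variable standing in for whatever the relevant economic quantity would be (nothing here models a real economy):

```python
# Illustrative control-loop comparison. Dynamics and numbers are invented; the
# point is structural: one policy never observes the system again, the other
# corrects itself every period.

def simulate(controller, periods=10, leak=0.2, target=1.0):
    """Leaky system: each period part of the state dissipates, then the
    controller adds an input based on (or regardless of) what it observes."""
    state, history = 0.0, []
    for t in range(periods):
        state *= (1 - leak)                      # the system decays on its own
        state += controller(t, state, target)    # policy input for this period
        history.append(round(state, 3))
    return history

open_loop   = lambda t, state, target: 1.0 if t == 0 else 0.0  # one-shot injection
closed_loop = lambda t, state, target: 0.5 * (target - state)  # respond to the gap

print(simulate(open_loop))    # jumps to 1.0, then decays back toward zero
print(simulate(closed_loop))  # climbs and stabilizes near 0.83, held by feedback
```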
The only context where cash transfers beat commerce is where barriers to commerce exist: conflict, institutional failure, extreme poverty traps where people lack the initial capital to participate in markets. But then the correct long-term intervention is: remove the barriers. Not substitute periodic cash injections from abroad forever.
GiveDirectly's success does not validate altruism. It validates markets. The best evidence for Effective Altruism dissolves the frame it was supposed to support.
VIII. The Error Stack
"Effective Altruism" is four layers of conceptual error stacked on top of each other.
"Effective" — effectiveness claims have replication problems (the deworming "Worm Wars"), usage gaps (bed nets at 56-68% actual usage after two years, with nets repurposed for fishing), and a quantification framework (QALYs) that misses most relevant variables. Social capital, institutional capacity, cultural transmission, community resilience — none of these appear in the QALY calculus. "Effective" relative to what full accounting?
"Altruism" — a 170-year-old neologism presupposing an ontology that is formally refuted by multilevel selection theory, absent from most human civilizations, and dissolved by any framework where the individual is not the sole unit of analysis.
"Sacrifice" — incoherent for voluntary action. If you want to give, you are purchasing psychological goods (meaning, identity, status, moral certainty) with money. The "sacrifice" is the price tag. EA-style giving is premium meaning-consumption: you get quantified impact numbers, community belonging, identity validation, and the warm glow of "I saved 3.2 lives." This is not sacrifice. It is shopping.
The fourth layer is the combination: optimized exchange of money for meaning by isolated atoms, directed at distant strangers, measured in a unit that misses most of what matters, described as selfless sacrifice. Every word in the phrase loads the frame toward the conclusion before any argument begins.
IX. What Remains
If altruism doesn't exist, what does?
Maintenance. The parent raising children. The neighbor helping neighbors. The professional doing excellent work. The citizen maintaining local institutions. The teacher transmitting culture. Not selfless. Not mysterious. Functioning nodes in a functioning system, driven by evolved signals that encode real priorities. The emotional experience of caring — for family, community, craft, place — is the system's coordination mechanism working correctly.
Investment. Deploying resources where they build capacity. Local investment has superior information, feedback, mechanism knowledge, and aligned incentives. You know what your community needs. You can see whether it worked. You can correct when it doesn't. The capital you build is capital you benefit from. This is why the evolved emotional calibration (care about your local community) is correct — it directs investment toward the highest-return, lowest-uncertainty target.
Commerce. Treating the counterparty as a peer with something valuable to offer. Trade builds capacity because producing tradeable goods requires developing capability. It has feedback loops that work at scale. It doesn't feel like helping. It helps more than helping does.
Knowledge production. Creating understanding that enables better maintenance by others. Science, research, institutional analysis. Not displacement — amplification. Meta-maintenance with compounding returns.
The priority stack inverts the EA ordering. EA puts global optimization first and local maintenance last (or classifies local maintenance as a "bias" to overcome). The stack that follows from dissolving altruism: local maintenance first, institutional investment second, knowledge production third, targeted distant intervention last — and only when mechanism knowledge exists, feedback loops are available, full accounting is positive, and local maintenance is already handled.
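Written as a decision procedure, the stack looks like the sketch below. The names are hypothetical, invented for illustration; the code just encodes the ordering and the gating conditions stated above:

```python
# Hypothetical encoding of the priority stack; field and function names are
# invented for illustration, not any existing framework's API.

from dataclasses import dataclass

@dataclass
class DistantIntervention:
    mechanism_known: bool            # do you understand how it works?
    feedback_available: bool         # will you learn whether it worked?
    full_accounting_positive: bool   # net positive after counting local capital forgone?

def next_allocation(local_maintenance_done: bool,
                    institutions_need_investment: bool,
                    knowledge_work_available: bool,
                    distant: DistantIntervention) -> str:
    if not local_maintenance_done:
        return "local maintenance"
    if institutions_need_investment:
        return "institutional investment"
    if knowledge_work_available:
        return "knowledge production"
    if (distant.mechanism_known
            and distant.feedback_available
            and distant.full_accounting_positive):
        return "targeted distant intervention"
    return "keep building local capacity"
```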
X. The Two-Level Solution
One objection remains. If "altruism" and "sacrifice" are concept errors — if helping is really maintenance, investment, and exchange — does this framing demotivate helping? Is the correct description of the mechanism less motivating than the incorrect one?
Yes. Probably. The evidence suggests that sacrifice framing activates identity fusion and costly signaling — people incur massive costs for "sacred duty" but not for "smart transaction." Investment framing activates market norms, which crowd out intrinsic motivation. The parent who "sacrifices for her children" may invest more than the parent who "optimizes her family's capital stocks." The framing matters for motivation even when it is wrong about mechanism.
This is a known problem in moral philosophy. Henry Sidgwick argued that utilitarianism might produce better outcomes if most people did not believe in it; Bernard Williams later dubbed this position "Government House Utilitarianism." Derek Parfit called such theories "self-effacing" — moral theories that, if true, recommend that you not believe them. The mechanism-level truth (maintenance, investment, exchange) should inform institutional designers, policy analysts, and system architects. The motivational frame (sacrifice, duty, care) can remain as individual psychology. Don't tell the white blood cell it is "doing maintenance." Design the immune system with mechanism knowledge. Let the cells follow their evolved drives.
This is not the "noble lie." It is recognizing that description operates at multiple levels. The physicist knows that a table is mostly empty space. She still puts her coffee on it. The parent knows, if she reflects, that her "sacrifice" builds the family system she is part of. She still experiences it as love. Both descriptions are true. They operate at different levels. Neither cancels the other.
The damage is not in experiencing helping as meaningful. The damage is in building institutions, policies, and billion-dollar resource allocation systems on a conceptual foundation that is 170 years old, ontologically false, and systematically misdirects resources from high-return local maintenance toward low-return distant displacement.
Feel the sacrifice. Design the system with mechanism knowledge. These are not in conflict.
Related reading:
- The Thermodynamics of Charity — The practical complement: why most charity creates entropy rather than order
- Values Aren't Subjective — The three categories of "values" and why conflating them enables evasion
- Flourishing Is Maximum Safety Margin — Why sustained complexity generation is the terminal value, not comfort
- Belonging Is Axiology — Why the relationships that constitute you are not optional add-ons