Your beliefs survived their own funerals
In 1975, psychologists Lee Ross, Mark Lepper, and Michael Hubbard ran an experiment at Stanford that should disturb anyone who thinks they update on evidence. They gave participants false feedback on a test — some were told they performed well, others poorly. Then the researchers did something unusual: they told the participants the truth. The feedback was fake. Completely fabricated. Random assignment. Nothing about the scores reflected actual ability.
It didn't matter.
Participants who received positive fake feedback continued to rate themselves as above average at the task. Those who received negative fake feedback continued to rate themselves as below average. The belief persisted after its entire evidentiary basis was removed. Not reduced. Removed. The participants knew the evidence was false, acknowledged it was false, and still operated as if it were true (Ross, Lepper, & Hubbard, 1975).
This is schema inertia — the tendency of established mental models to persist even when the evidence that created them has been discredited, contradicted, or outright destroyed. It is not stubbornness. It is not ignorance. It is a structural property of how schemas work. Once a schema forms, it generates its own gravitational field. And escaping that field takes far more force than the schema took to form.
Why schemas resist: the asymmetry of assimilation and accommodation
Jean Piaget identified two fundamental processes by which humans integrate new information. Assimilation fits new data into existing schemas — you interpret the unfamiliar through familiar categories. Accommodation modifies or replaces the schema itself to account for data that won't fit.
Here is the critical asymmetry: assimilation is cheap and accommodation is expensive.
Assimilation requires no structural change. Your existing schema does the work. A hiring manager who believes "great engineers come from top-tier universities" sees a stellar candidate from a lesser-known school and thinks, "They must be the exception that proves the rule." The schema absorbs the contradicting evidence without changing. The manager's internal model remains intact. No cognitive energy spent rebuilding anything.
Accommodation demands that you dismantle a load-bearing structure while you're standing on it. You have to recognize that your schema is wrong, figure out what should replace it, rebuild the replacement while the old one is still generating your predictions and reactions, and then actually start operating from the new model. This is cognitively expensive, emotionally threatening, and — because schemas are interconnected — it rarely stops at one. Change one belief about how talent works and you may need to change your beliefs about your own hiring track record, about the value of your own credentials, about the stories you tell about meritocracy.
Piaget understood that the mind always prefers assimilation. Accommodation only happens when assimilation fails repeatedly and the resulting disequilibrium becomes too uncomfortable to ignore. The mind will bend, reinterpret, minimize, and rationalize evidence for as long as possible before it will restructure.
The machinery of persistence: confirmation bias as schema defense
Raymond Nickerson's landmark 1998 review, "Confirmation Bias: A Ubiquitous Phenomenon in Many Guises," documented the mechanisms by which schemas defend themselves against threatening information. The bias operates at every stage of information processing:
Selective search. You don't look for evidence randomly. You look where your schema predicts you'll find support. A manager who believes remote workers are less productive will notice every time a remote employee misses a message. They won't track how many office employees spend two hours in unproductive meetings.
Biased interpretation. Identical evidence gets read differently depending on which schema is doing the reading. In a classic demonstration, participants with opposing views on capital punishment were shown the same mixed-evidence study. Both sides reported that the study supported their existing position. Same data. Opposite conclusions. Each schema assimilated the evidence as its own (Lord, Ross, & Lepper, 1979).
Asymmetric scrutiny. Evidence that confirms your schema gets a pass. Evidence that threatens it gets interrogated. "Interesting study" becomes your response to confirming evidence. "What was the sample size? Who funded this? Is this replicable?" becomes your response to disconfirming evidence. The scrutiny isn't wrong — it's the asymmetry that constitutes the bias.
Selective recall. Over time, you remember the evidence that supports your schema and forget the evidence that contradicts it. Your memory is not a neutral archive. It is a schema-shaped filter.
These aren't separate biases. They are a coordinated defense system. Your schema doesn't passively sit there waiting for evidence to update it. It actively curates the evidence you encounter, shapes how you interpret what you find, and edits what you remember afterward.
Science advances one funeral at a time
Max Planck observed that "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." This sounds like cynicism. It's closer to an empirical description.
In 2019, Pierre Azoulay, Christian Fons-Rosen, and Joshua Graff Zivin tested Planck's claim directly. They studied what happens when eminent scientists die prematurely — 452 elite researchers in the life sciences who died between 1975 and 2003. The results confirmed schema inertia at the field level.
After a star scientist died, publications by their collaborators dropped by approximately 40 percent. But publications by non-collaborators — outsiders to the subfield — increased by 8 percent. Within five years, the outsiders' contributions fully offset the collaborators' decline. And critically, these new contributions drew on different scientific foundations and were disproportionately likely to become highly cited.
The mechanism was not that the dead scientists had used institutional power to block competitors. Few served as journal editors or grant committee members. Instead, the researchers concluded that "outsiders were reluctant to challenge the leadership within research areas in which an elite scientist was active." The star's schema — their paradigm, their framework for what counted as legitimate research — created a gravitational field that repelled alternative approaches. The schema persisted not through formal gatekeeping but through the informal social architecture of intellectual authority.
This is schema inertia operating at collective scale. An entire field's mental model resisting change until the strongest nodes in the network are physically removed.
Organizations: where schemas go to calcify
If schema inertia is powerful in individuals and measurable in scientific communities, it is devastating in organizations. John Kotter, studying organizational transformation over thirty years, estimated that more than 70 percent of needed change "either fails to be launched, even though some people clearly see the need, fails to be completed even though some people exhaust themselves trying, or finishes over budget, late, and with initial aspirations unmet" (Kotter, 2008). A 2008 IBM survey of over 1,500 change practitioners found that 59 percent of projects missed at least one objective or failed completely. McKinsey research found that less than one-third of transformations succeeded at both improving performance and sustaining the improvement.
These aren't failures of strategy or resources. They are failures of schema modification at scale. An organization's operating schemas — "this is how we do things," "this is what our customers want," "this is what makes someone successful here" — are distributed across thousands of people, embedded in processes, reinforced by incentive structures, and defended by the same confirmation biases that operate in individuals. Changing one person's schema is hard. Changing a schema that lives across an entire organization, with each person's version reinforcing everyone else's, is a coordination problem that most change efforts never solve.
The pattern is identical to individual belief perseverance, just larger. Present the evidence that the old model is failing. Watch as the organization explains it away. Watch as selective attention highlights the parts of the business that still work under the old schema. Watch as contradicting evidence is scrutinized into irrelevance while confirming evidence is accepted on faith.
The Third Brain parallel: catastrophic forgetting
Artificial neural networks exhibit their own version of schema inertia — a phenomenon researchers call catastrophic forgetting. When a neural network is trained on a new task, it adjusts its internal weights to perform well on the new data. But those same weight adjustments can destroy the network's ability to perform previously learned tasks. The network's "schema" for the old task gets overwritten by the new one.
This is the mirror image of human schema inertia — yet it exposes the same underlying tension. In human cognition, old schemas resist change too strongly; you can't update even when you should. In standard neural networks, old schemas are too fragile; they get obliterated by new learning. Both systems struggle with the same fundamental challenge: how do you preserve what works while updating what doesn't?
Kirkpatrick et al. (2017) developed a solution called Elastic Weight Consolidation (EWC) that mirrors what healthy human cognition should do. EWC identifies which parameters (weights) in the network are most important for previously learned tasks, then adds a penalty that resists changes to those specific weights. The network can still learn new things — but the weights that matter most for old knowledge are protected, pulled back toward their original values by an amount proportional to their importance.
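The core move of EWC can be sketched in a few lines. The toy below is not the paper's implementation — Kirkpatrick et al. estimate importance via the Fisher information of a trained network — but a deliberately tiny stand-in: two scalar weights, hand-set curvatures as the importance values, and plain gradient descent, purely to show what the penalty does.

```python
# Toy sketch of the EWC idea with two scalar weights. Assumptions: quadratic
# task losses and hand-set importance values stand in for a real network's
# loss surface and Fisher-information estimates.

def task_a_grad(w):
    # Task A's loss optimum is (1, 1); w[0] matters 100x more than w[1].
    return [2.0 * (w[0] - 1.0), 0.02 * (w[1] - 1.0)]

def task_b_grad(w):
    # Task B's loss optimum is (-1, -1); both weights matter equally.
    return [2.0 * (w[0] + 1.0), 2.0 * (w[1] + 1.0)]

def train(w, grad_fn, steps=20000, lr=0.01):
    for _ in range(steps):
        g = grad_fn(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

w_a = train([0.0, 0.0], task_a_grad)      # learn task A first

importance = [2.0, 0.02]                  # per-weight curvature of task A's loss
lam = 50.0                                # overall penalty strength

def ewc_grad(w):
    # Task B's gradient plus a pull back toward w_a, scaled by importance.
    g = task_b_grad(w)
    return [gi + lam * fi * (wi - ai)
            for gi, fi, wi, ai in zip(g, importance, w, w_a)]

w_naive = train(list(w_a), task_b_grad)   # no penalty: task A is forgotten
w_ewc = train(list(w_a), ewc_grad)        # penalty protects the load-bearing weight

print(w_naive)  # both weights end near -1: task A is gone
print(w_ewc)    # w[0] stays near 1 (protected); w[1] moves toward -1 (flexible)
```

The design choice to notice: the penalty is not uniform. The unimportant weight is nearly free to move, while the load-bearing one is anchored — selective protection rather than blanket rigidity.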
The insight is architectural: the solution to catastrophic forgetting isn't to prevent all change or to allow all change. It's to make the system aware of which parts of its existing model are load-bearing and which are flexible. Human schemas lack this meta-awareness by default. You don't know which of your beliefs are structurally critical and which are peripheral. So your mind protects everything with equal ferocity — which means your operating schema for how the world works gets the same defensive treatment as your preference for a particular brand of coffee.
Building epistemic infrastructure means developing the ability to do what EWC does computationally: identify which schemas are load-bearing and protect them appropriately, while allowing genuine updating of schemas that need revision.
Why "just be open-minded" fails
The standard advice for combating belief perseverance is some version of "be open to new evidence" or "consider the other side." This fails for structural reasons, not motivational ones.
The problem is not that people refuse to consider contradicting evidence. Ross, Lepper, and Hubbard's participants were fully informed that the evidence was false. They considered it. They acknowledged it. And their schemas persisted anyway. The problem is that by the time a schema forms, it has already generated explanations, stories, and causal models that survive independently of the original evidence. You don't believe your colleague is uncommitted because of the one time they left early. You believe it because the schema generated a story — "they don't really care about the team" — and that story now has its own evidential support: you remember every late reply, every time they didn't volunteer for a task, every meeting where they seemed checked out. The original evidence could vanish and the schema-generated evidence would keep the belief alive.
This is exactly what Ross, Lepper, and Hubbard found. In their experiments, the only intervention that actually eliminated perseverance was process debriefing — explicitly walking participants through the mechanism by which the false feedback had generated supporting explanations. Not "the evidence was false" (that didn't work). But "here is how your mind constructed additional support for the belief after the initial evidence was planted." The schema had to be shown its own machinery.
The protocol: detecting and testing schema inertia
Schema inertia cannot be defeated by willpower. It requires a structured process.
1. Name the schema explicitly. If you can't articulate it, you can't examine it. "People who work remotely are less productive" is a schema. Write it down. The previous lesson (L-0212) established that language encodes schemas — use that principle. If the schema lives only as a vague feeling or automatic reaction, it is invisible to your reasoning and impervious to evidence.
2. Identify the schema's origin evidence. When did you first form this belief? What evidence created it? You may find the original evidence was thin — a single experience, a secondhand story, a cultural default you absorbed without scrutiny.
3. Map the schema's self-generated evidence. What confirmations has the schema produced since it formed? These are the interpretations, selective memories, and biased searches the schema has conducted on its own behalf. This is the process debriefing that Ross et al. found actually works — showing the schema its own defense mechanisms.
4. Run the falsification test. Ask: what evidence would change my mind? If you cannot specify any evidence that would update the schema, you are not holding a belief. You are holding an axiom — and you should be honest with yourself about that.
5. Seek disconfirming evidence actively. Don't wait for it to arrive. Your confirmation bias ensures it won't. Go find evidence that contradicts your schema. Interview someone who holds the opposite view. Read the strongest argument against your position. Track the data your schema has been ignoring.
6. Test predictions, not explanations. Schemas are excellent at generating post-hoc explanations for anything that happens. They are less skilled at making accurate predictions. If your schema says remote workers are less productive, make a specific prediction: "Remote team members will complete fewer sprint tasks this quarter." Then track the actual results. Prediction-testing bypasses the interpretation machinery that makes schemas self-sealing.
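Steps 1, 4, and 6 can be combined into one small record: the belief stated explicitly, the evidence that would falsify it, and a concrete prediction to resolve later. The sketch below is purely illustrative — the field names, example entries, and resolved outcome are hypothetical, not a prescribed tool.

```python
# Illustrative sketch of a schema-testing ledger. All entries and field
# names are hypothetical examples, not a prescribed format.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class SchemaTest:
    schema: str                      # step 1: the belief, stated explicitly
    falsifier: str                   # step 4: evidence that would change your mind
    prediction: str                  # step 6: a specific, checkable forecast
    outcome: Optional[bool] = None   # resolved later against actual data

ledger: List[SchemaTest] = [
    SchemaTest(
        schema="Remote workers are less productive",
        falsifier="Remote team matches or beats office output on tracked tasks",
        prediction="Remote team members complete fewer sprint tasks this quarter",
    ),
]

# After the quarter, resolve the prediction against the real numbers,
# not against your interpretation of them.
ledger[0].outcome = False  # hypothetical result: remote output matched office output

def hit_rate(entries):
    # Fraction of resolved predictions the schema got right.
    resolved = [e for e in entries if e.outcome is not None]
    return sum(e.outcome for e in resolved) / len(resolved) if resolved else None
```

Writing the falsifier down before the prediction resolves is what blocks the post-hoc interpretation machinery: the schema cannot redefine success after the data arrives.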
What this makes possible
Schema inertia is not a flaw to be eliminated. It is a property to be managed. Schemas should be somewhat resistant to change — if every piece of contradicting evidence instantly restructured your mental models, you would never develop stable, useful frameworks for navigating the world. The problem is not that schemas resist change. The problem is that they resist change indiscriminately — bad schemas and good schemas alike get the same defensive treatment.
The next lesson (L-0214) examines what happens when schema inertia finally breaks — the experience of schema shock when reality delivers evidence so undeniable that even a well-defended schema cannot assimilate it. That moment of disorientation is not damage. It is the signal that accommodation is finally possible.
But you cannot navigate schema shock productively unless you first understand the forces that prevent it. Inertia is the default. Updating is the exception. And building a mind that can tell the difference between a schema worth defending and a schema worth dismantling — that is the work.
Sources
- Ross, L., Lepper, M. R., & Hubbard, M. (1975). Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm. Journal of Personality and Social Psychology, 32(5), 880-892.
- Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175-220.
- Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098-2109.
- Azoulay, P., Fons-Rosen, C., & Graff Zivin, J. S. (2019). Does science advance one funeral at a time? American Economic Review, 109(8), 2889-2920.
- Kirkpatrick, J., et al. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13), 3521-3526.
- Kotter, J. P. (2008). A Sense of Urgency. Harvard Business Press.
- Piaget, J. (1952). The Origins of Intelligence in Children. International Universities Press.