The decision you refuse to revisit
Three years ago, you made a decision. It was a good decision at the time. You evaluated the available evidence, weighed the options, chose a path, and it worked. Maybe it was a career strategy, a management philosophy, a framework for understanding your market, a theory about what your customers want. Whatever it was, it delivered results — and you internalized it. It became a schema: a mental model you use automatically, without re-evaluating, every time a similar situation arises.
The problem is not that you made that decision. The problem is that you are still making it. The market shifted. The technology changed. The team turned over. The evidence that supported the original schema eroded piece by piece, and you did not notice — because you stopped looking. You stopped looking because the schema felt like knowledge rather than a hypothesis. It graduated from "best current approach" to "the way things work" without passing through any validation checkpoint.
This is cognitive rigidity: the failure to update mental models in response to changed conditions. It is not a dramatic failure. It is a quiet one. A slow drift between your map and the territory, widening by degrees until the day you discover you have been navigating by a map that no longer resembles the landscape. This lesson examines why that drift happens, what it costs, and why the cost compounds rather than stays flat.
Cognitive rigidity: the psychology of the locked schema
Cognitive rigidity in psychology refers to an inability to modify concepts, attitudes, or behavioral patterns once they have been established. The concept has deep roots. Milton Rokeach's The Open and Closed Mind (1960) formalized it through his theory of dogmatism — a construct describing a "relatively closed cognitive system of beliefs and disbeliefs about reality, organized around a central set of beliefs about absolute authority." Rokeach showed that highly dogmatic individuals do not merely hold strong opinions. They exhibit a structural resistance to integrating contradictory information, treating disconfirming evidence as a threat rather than as data.
This is not a niche clinical phenomenon. Cognitive rigidity exists on a spectrum, and everyone occupies a position on it — a position that can shift depending on the domain, the emotional stakes, and the degree to which your identity is invested in the schema in question. You may be flexible about your dietary beliefs and entirely rigid about your management philosophy. You may readily update your understanding of technology and refuse to update your model of how relationships work. Rigidity is domain-specific, which is part of what makes it so difficult to detect: you can be genuinely open-minded in six areas and completely locked down in the seventh — the one that matters most.
Research in cognitive psychology has identified several mechanisms that maintain rigidity. Perseverative cognition — the tendency to continue engaging with a mental representation even when it is no longer useful or accurate — has been linked to both cognitive and autonomic inflexibility. A 2015 study published in Biological Psychology found that perseverative thinking is mirrored by reduced physiological flexibility, suggesting that cognitive rigidity is not merely an intellectual habit but a whole-system pattern that affects how you process information at every level.
More recently, 2025 research in the International Journal of Psychological Studies has highlighted the relationship between cognitive control and decision-making quality. Cognitive stability — the ability to maintain focus on a goal — is valuable, but when it becomes excessive, it tips into rigidity. You stop adapting to new information because your cognitive system is locked onto the existing model. The result is not just suboptimal thinking. It is increasingly suboptimal thinking, because each new piece of unprocessed evidence widens the gap between schema and reality.
The Einstellung effect: when expertise becomes a trap
The most precisely documented form of schema rigidity is the Einstellung effect — the finding that a known solution blocks the discovery of a better one. The term comes from Abraham Luchins's 1942 water jar experiments. Participants were given a series of problems that could all be solved with the same formula: B minus A minus 2C. After several rounds, they were presented with a problem that could be solved the same way — but also had a much simpler solution. The majority of participants used the complex formula they had practiced, even though a two-step solution was available. When Luchins gave the warning "Don't be blind," more than half immediately found the simpler path. The solution was visible. It was not that they could not see it. It was that the established schema was directing their attention away from it.
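The structure of a critical Luchins problem can be sketched in a few lines. The jar capacities below are illustrative figures in the style of the commonly cited critical problems, not a claim about the exact stimuli; the point is that the practiced formula and the direct route reach the same target.

```python
# Illustrative water jar problem in the style of Luchins (1942).
# Jar capacities (a, b, c) and a target volume; the practiced formula
# is B - A - 2C, but the critical problems also admit a direct solution.

def practiced_solution(a, b, c):
    """The mechanized formula drilled during the first problems: B - A - 2C."""
    return b - a - 2 * c

def direct_solutions(a, b, c):
    """Simpler one-step combinations that the Einstellung effect hides."""
    return {"A - C": a - c, "A + C": a + c}

# A critical problem: both routes reach the target of 20, yet most
# participants kept using the three-step practiced formula.
a, b, c, target = 23, 49, 3, 20
assert practiced_solution(a, b, c) == target          # 49 - 23 - 6 = 20
assert direct_solutions(a, b, c)["A - C"] == target   # 23 - 3 = 20
```

The direct solution is not hidden in any sense a search procedure would recognize; it is simply never attended to once the practiced formula has fired.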
The effect is not limited to laboratory puzzles. Merim Bilalic and colleagues demonstrated in a landmark 2008 paper in Cognition that expert chess players fall into the same trap. When presented with problems where a familiar motif suggested one solution but a better solution existed, players reported that they were searching for the optimal move. But eye-tracking data told a different story: their gaze kept returning to features of the board related to the familiar solution. The first schema that activated commandeered their attention, and they literally could not see the better alternative — not because it was hidden, but because their existing schema was consuming the perceptual resources needed to find it.
This is the mechanism by which rigid schemas degrade decision quality. They do not make you unable to think. They make you unable to see. Your attention is drawn to evidence consistent with the existing schema and away from evidence that would reveal a better one. The more expertise you have invested in the schema, the stronger this attentional capture becomes. Expertise, in this sense, is a double-edged sword: it gives you powerful mental models, but it also gives those models powerful control over what you perceive. The Einstellung effect explains why the cost of rigidity is highest for exactly the schemas you trust the most.
The compounding cost: why rigidity gets worse over time
A one-time bad decision has a fixed cost. You lose whatever you lost, and you move on. But a rigid schema does not produce a one-time bad decision. It produces a systematic bias — a repeating pattern of suboptimal choices that compounds over time.
Consider the compounding mechanism. At time zero, your schema matches reality reasonably well. The gap between model and territory is small, and your decisions are approximately correct. At time one, the environment shifts slightly. Your schema does not update. The gap widens by a small amount — perhaps not enough to produce a noticeably bad outcome, but enough to make your decisions slightly less optimal than they could be. At time two, the environment shifts again. Your schema, still unchanged, is now two increments behind. Each decision you make is a little worse than the last, not because you are becoming less intelligent but because you are applying an increasingly outdated model to a progressively different reality.
This is the structure of escalation of commitment, one of the most robust findings in organizational psychology. Barry Staw's research demonstrated that decision-makers who are personally responsible for a course of action tend to escalate their commitment to it even in the face of negative feedback — investing more resources to justify the original decision rather than acknowledging it was wrong. The rigid schema does not merely persist. It recruits additional resources to defend itself. You spend cognitive and material resources not on finding the right answer but on proving that your existing answer was right all along.
The mechanism is reinforced by confirmation bias. As Ziva Kunda showed in her 1990 paper on motivated reasoning, people with a directional motivation direct their cognitive resources toward constructing justifications for their preferred conclusion. Once you are committed to a schema, you do not evaluate new evidence neutrally. You scrutinize evidence that threatens the schema and accept evidence that supports it with minimal examination. This means that the longer a schema persists unchanged, the more distorted your evidence base becomes — and the more confident you feel about a model that is growing less accurate by the day.
The financial metaphor is apt: rigidity imposes a tax that compounds. A 5% suboptimality in year one becomes roughly a 10% suboptimality in year two — not because the tax rate increased, but because the gap between your model and reality widened while you continued to make decisions as if the gap did not exist.
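The compounding can be made concrete with a three-line simulation: hold the schema fixed at its original calibration while the environment drifts a constant 5% per year, and watch the miscalibration grow faster than linearly. The 5% drift rate is an assumption chosen only to match the example in the text.

```python
# Sketch of the compounding gap: the environment drifts 5% per year
# while the schema stays frozen at its original calibration.
schema = 1.0
environment = 1.0
drift = 0.05  # assumed annual rate of environmental change

gaps = []
for year in range(1, 4):
    environment *= 1 + drift                # reality keeps moving
    gaps.append(environment / schema - 1)   # miscalibration vs. the frozen model

# gaps is approximately [0.05, 0.1025, 0.1576]: 5% in year one, just over
# 10% in year two. The drifts multiply, so the gap outpaces linear growth.
```

The tax rate never changes; only the accumulated distance between map and territory does.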
The graveyard of rigid organizations
The most dramatic illustrations of schema rigidity come from corporate history, where the costs are quantifiable and the outcomes are irreversible.
Kodak invented the digital camera in 1975. Engineer Steve Sasson built a working prototype and presented it to management. The schema that governed Kodak's strategy was "we are a film company." Digital photography threatened that schema, so Kodak's leadership did not treat Sasson's invention as an opportunity. They treated it as a threat — to their existing business model, their infrastructure, their identity. For the next three decades, Kodak made increasingly desperate attempts to defend the film schema. They interpreted declining film sales as market fluctuations rather than structural change. Middle managers, steeped in the culture and chemistry of physical film, could not make the cognitive transition to thinking digitally. By the time Kodak attempted a pivot, their competitors had built insurmountable leads. Kodak filed for bankruptcy in 2012. The schema that made them an industry leader became the schema that destroyed them — not because it was wrong when adopted, but because they refused to update it when the evidence demanded revision.
Blockbuster declined to acquire Netflix for $50 million in 2000. The schema was "customers want the experience of browsing a physical store." That was a defensible interpretation of customer behavior in 2000 — but it was a snapshot, not a law. Blockbuster's leadership treated it as a law. They doubled down on retail locations while Netflix invested in mail-order delivery and then streaming infrastructure. By 2010, Blockbuster had filed for bankruptcy. The cost of their rigid schema was not the $50 million they declined to spend on Netflix. It was the entire company.
Nokia dominated the mobile phone market in the early 2000s, operating under the schema "hardware quality and carrier relationships determine mobile phone success." When the iPhone launched in 2007, Nokia's leadership dismissed it as a niche product for technology enthusiasts. By the time Nokia attempted to respond — rejecting Android in 2011 in favor of the Microsoft partnership — the smartphone revolution had already passed them by. Nokia's global market share dropped from over 40% to less than 3%.
Clayton Christensen formalized this pattern in The Innovator's Dilemma (1997), demonstrating that successful companies fail not because they are poorly managed but because they are well managed — according to schemas that have become obsolete. The very processes, values, and resource-allocation frameworks that drove their success become the rigid structures that prevent adaptation. Christensen showed that incumbents apply mental models rooted in their existing competencies, business models, and customer relationships to evaluate new technologies — and those mental models systematically undervalue disruptive innovations because disruptions do not fit the existing schema.
The lesson is not that these companies were run by incompetent people. They were run by skilled people operating on schemas that had stopped matching reality. The cost of rigidity is not proportional to the quality of the people. It is proportional to the speed of environmental change multiplied by how long the schema goes unrevised.
AI and the Third Brain: overfitting as rigidity
In machine learning, the precise analog of schema rigidity is overfitting — the condition where a model has learned the training data so well that it fails to generalize to new data. An overfitted model has, in effect, memorized the specific patterns of its training environment rather than learning the underlying regularities. When the environment changes — when new data arrives that differs from the training distribution — the overfitted model does not degrade gracefully. It fails catastrophically, producing confident but wildly incorrect predictions.
The parallel to human cognitive rigidity is structural. An overfitted model, like a rigid schema, performs well in exactly the conditions it was trained on. As long as the world looks like the training data, the model appears highly accurate. But this accuracy is an illusion of stability. The model has traded generalizability for performance on familiar inputs, and the moment unfamiliar inputs arrive, the trade-off is revealed. Google's machine learning documentation describes an overfitted model as one that "matches the training set so closely that the model fails to make correct predictions on new data." Replace "training set" with "past experience" and you have a definition of cognitive rigidity.
The machine learning field has developed systematic techniques for preventing and correcting overfitting that map directly onto epistemic practice. Regularization — constraining a model's complexity so it cannot memorize noise — is the technical equivalent of maintaining intellectual humility: deliberately limiting how tightly your schemas conform to past experience so they retain the flexibility to handle new situations. Cross-validation — testing a model on data it was not trained on — is the equivalent of seeking out unfamiliar evidence and disconfirming experiences.
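The trade-off can be demonstrated in a few lines of NumPy. This is a minimal sketch, not a prescribed recipe: the true relationship is a straight line, a degree-9 polynomial has enough freedom to memorize the noise in ten training points, and a held-out set (standing in for cross-validation) exposes the difference.

```python
import numpy as np

# Minimal overfitting sketch: truth is a line, but a high-capacity model
# memorizes the noise in a small training set.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, size=10)  # line plus noise
x_test = np.linspace(0.0, 1.0, 50)                       # held-out inputs
y_test = 2.0 * x_test                                    # noise-free truth

overfit = np.polyfit(x_train, y_train, deg=9)  # enough freedom to memorize
simple = np.polyfit(x_train, y_train, deg=1)   # low capacity acts like regularization

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial against targets."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# The memorizer wins on familiar inputs and loses on new ones.
assert mse(overfit, x_train, y_train) < mse(simple, x_train, y_train)
assert mse(overfit, x_test, y_test) > mse(simple, x_test, y_test)
```

The degree-9 model's near-perfect training error is exactly the "illusion of stability" described above: it reflects memorized noise, not learned structure.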
For your Third Brain — the AI-augmented knowledge infrastructure you are building — this has direct practical implications. If you train an AI assistant on your own past decisions and preferences, you risk building a system that overfits to your historical schemas. It will optimize for the patterns you have already established rather than helping you discover better ones. The AI becomes a rigidity amplifier: a tool that makes your existing schemas faster and more efficient to apply, without ever questioning whether those schemas should be applied at all. The corrective is to deliberately use AI for schema-challenging purposes — asking it to generate counterarguments, identify assumptions, and present alternatives you would not have considered. An AI that confirms your schemas is a productivity tool. An AI that challenges your schemas is an epistemic tool. The difference is the difference between efficiency and accuracy.
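The schema-challenging practice can be operationalized as a small set of prompt templates. The wording below is entirely hypothetical — an illustration of the pattern, not a recommended phrasing or a real assistant API — but each template deliberately asks for disconfirmation rather than confirmation.

```python
# Hypothetical prompt templates for using an AI assistant as a
# schema-challenger rather than a schema-confirmer. Wording is illustrative.
CHALLENGES = [
    "Steelman the strongest case that this belief is no longer true: {schema}",
    "List the assumptions this belief depends on, and which may have expired: {schema}",
    "Describe three alternatives I would consider with no prior commitment to: {schema}",
]

def challenge_prompts(schema: str) -> list[str]:
    """Build one disconfirming prompt per challenge template."""
    return [template.format(schema=schema) for template in CHALLENGES]

prompts = challenge_prompts(
    "customers want the experience of browsing a physical store"
)
assert len(prompts) == 3
assert all("physical store" in p for p in prompts)
```

The design choice is the inversion of the default: instead of asking the assistant to help apply a schema faster, every prompt asks it to attack the schema's continued validity.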
Protocol: detecting and reducing your schema rigidity
Schema rigidity is difficult to detect from the inside because the rigid schema shapes what you perceive as relevant evidence. You cannot see the gap between map and territory when the map is the only lens you are looking through. This protocol creates external checkpoints that force the question.
Step 1: The zero-base test. For each major schema that drives your decisions, ask: "If I were encountering this situation for the first time today, with no prior commitment, would I adopt this same belief?" If the answer is no — if you would choose differently as a newcomer — you have identified a schema that persists through inertia rather than through current validity. This is the clearest signal of rigidity: a gap between what you would choose fresh and what you continue to choose out of habit.
Step 2: The environmental delta. List the three to five most significant changes in the environment your schema operates in since you adopted it. New technologies, new competitors, new evidence, new social dynamics, new personal circumstances. For each change, ask: "Did my schema update to incorporate this?" If the environment has changed materially and your schema has not changed at all, the probability that your schema is still optimally calibrated is low.
Step 3: The Einstellung check. When solving a problem, deliberately pause after your first solution comes to mind. Ask: "Is this the best solution, or is it merely the most familiar one?" Recall that Bilalic's chess research showed that even experts who believed they were looking for better solutions were actually directing their attention toward features of the familiar one. The only reliable corrective is to explicitly set aside the first solution and search for alternatives before committing.
Step 4: The cost estimate. For each rigid schema you identify, estimate the concrete cost you have paid for not updating it. Missed opportunities, suboptimal outcomes, friction with people who have already updated their models, time spent defending a position you privately suspect is outdated. Put a number on it where you can. The purpose is not precision. It is making the abstract cost of rigidity concrete enough to motivate revision.
Step 5: Schedule the update. Rigidity persists partly because revision has no deadline. There is never a moment when the calendar forces you to re-examine a schema — so you never do. The corrective is to create one. For each schema you have identified as potentially rigid, set a specific date for a formal re-evaluation. Not "sometime soon." A date. This is the first step toward the schema evolution log you will build in L-0317.
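Steps 2 and 5 together suggest a minimal record you could keep per schema: when it was adopted, when it is next due for review, and how many environmental changes it has absorbed. The sketch below is one possible shape for such a record; the field names and the flagging rule are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

# Minimal sketch of Step 5: give every schema a review deadline, and flag
# the ones whose environment has moved while they stood still.

@dataclass
class SchemaRecord:
    name: str
    adopted: date
    next_review: date
    environmental_changes: int = 0   # Step 2: material deltas since adoption
    changes_incorporated: int = 0    # how many the schema actually absorbed

def needs_attention(record: SchemaRecord, today: date) -> bool:
    """Flag a schema that is overdue for review or visibly lagging reality."""
    overdue = today >= record.next_review
    rigid = record.environmental_changes > record.changes_incorporated
    return overdue or rigid

record = SchemaRecord(
    name="customers want to browse in person",
    adopted=date(2000, 1, 1),
    next_review=date(2003, 1, 1),
    environmental_changes=3,   # e.g. mail order, broadband, streaming
    changes_incorporated=0,
)
assert needs_attention(record, today=date(2002, 6, 1))  # rigid before overdue
```

The point of the structure is the deadline field: a schema without a `next_review` date is one the calendar will never force you to re-examine.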
The tax you are already paying
Every rigid schema you hold imposes a tax on every decision that schema touches. The tax is not announced. It does not appear on an invoice. It accumulates silently — in opportunities you did not pursue because your model said they would not work, in problems you misdiagnosed because your framework could not accommodate the actual cause, in relationships you managed according to assumptions that stopped being accurate years ago.
L-0315 explored when schemas need revolutionary replacement versus incremental evolution. This lesson has made the case that delaying either form of update is not a neutral choice. Delay has a cost, and that cost compounds. A schema that was 5% miscalibrated last year may be 15% miscalibrated this year — not because the schema got worse, but because reality kept moving while the schema stood still.
The solution is not to abandon all your schemas in a panic of self-doubt. Most of your schemas are probably adequate, and some are excellent. The solution is to build the infrastructure that makes regular schema revision a normal part of your cognitive practice rather than a crisis response triggered by catastrophic failure. That infrastructure begins with a simple but powerful tool: a record of how your schemas have changed over time — and when they have not. L-0317 introduces the schema evolution log: a systematic practice for tracking the living history of your mental models, so that rigidity becomes visible before its costs become unbearable.
Sources
- Rokeach, M. (1960). The Open and Closed Mind: Investigations into the Nature of Belief Systems and Personality Systems. Basic Books.
- Luchins, A. S. (1942). Mechanization in problem solving: The effect of Einstellung. Psychological Monographs, 54(6), 1-95.
- Bilalic, M., McLeod, P., & Gobet, F. (2008). Why good thoughts block better ones: The mechanism of the pernicious Einstellung (set) effect. Cognition, 108(3), 652-661.
- Staw, B. M. (1976). Knee-deep in the big muddy: A study of escalating commitment to a chosen course of action. Organizational Behavior and Human Performance, 16(1), 27-44.
- Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498.
- Christensen, C. M. (1997). The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business School Press.
- Ottaviani, C., et al. (2015). Cognitive rigidity is mirrored by autonomic inflexibility in daily life perseverative cognition. Biological Psychology, 107, 24-30.
- Google Developers. Machine Learning Crash Course: Overfitting.