Your best understanding of reality is not reality
You have a model of how your company makes decisions. You have a model of how your codebase handles errors. You have a model of what your partner means when they say "I'm fine." Every one of these models is useful. Every one of them is wrong.
Not wrong in the sense of being bad. Wrong in the sense of being structurally incapable of capturing everything that matters. Your schema about a thing — any thing — is a compression. It preserves some features, discards others, and freezes the result into a representation that you then mistake for the thing itself.
This is the oldest and most consequential error in human cognition: confusing the representation with what it represents. And in 1933, a Polish-American philosopher named Alfred Korzybski gave it a name that has echoed through every field from cognitive science to software engineering: "A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness" (Korzybski, 1933).
That full statement matters. Most people remember only the first clause — maps aren't territories. But Korzybski's point was twofold: maps are always incomplete, and their usefulness depends on structural similarity to what they represent. Both claims carry weight. The incompleteness means you should never fully trust a schema. The structural similarity means you should still use one.
Korzybski's insight: abstraction always discards
Korzybski developed an entire discipline around this idea — general semantics — which argued that human knowledge of the world is limited both by the human nervous system and by the languages humans have developed. No one can have direct access to reality, because everything we know is filtered through layers of neurological and linguistic abstraction.
He visualized these layers as a ladder of abstraction — his own diagram was the "structural differential," which S. I. Hayakawa later popularized as the "abstraction ladder." At the bottom is raw reality — an infinitely detailed, constantly changing process. One step up is what your nervous system selects from that process. Another step up is what you name. Another is what you categorize. By the time you have a word, a concept, or a schema, you have climbed several rungs, discarding detail at each one.
The danger isn't abstraction itself. You need abstraction to function. The danger is what Korzybski called identification — unconsciously treating the abstraction as if it were the thing. When you believe your mental model of a market is the market, or your schema of a colleague's motivations is their motivations, you've collapsed the abstraction ladder. You've confused map and territory so thoroughly that the gap between them becomes invisible.
And invisible gaps are the ones that hurt you.
Bateson: the territory never gets in
Gregory Bateson, the anthropologist and systems theorist, took Korzybski's insight further in Steps to an Ecology of Mind (1972). Where Korzybski warned that maps aren't territories, Bateson argued that the territory never gets in at all.
His reasoning was precise: when a cartographer makes a map, they go out with a retina or a measuring stick and make representations. Those representations get put on paper. But what is on the paper is a representation of what was in the retinal representation of the person who made the map. You never touch the territory directly. You only ever have maps of maps, representations of representations, all the way down (Bateson, 1972).
This sounds abstract until you apply it to daily cognition. Your understanding of last quarter's revenue isn't the revenue — it's your interpretation of a spreadsheet, which is itself a representation of database records, which are themselves representations of transactions that occurred in reality. Each layer filters. Each layer compresses. By the time you're making decisions, you're operating on a map of a map of a map.
Bateson added a crucial observation about what actually gets onto a map: difference. If the territory were uniform, nothing would be mapped except its boundaries. What gets represented is variation — differences in altitude, vegetation, population, performance. This means every schema is inherently a selection of contrasts. It captures what changes, not what stays the same. And the things that stay the same — the background, the context, the defaults — are exactly the things most likely to shift without anyone noticing.
Cognitive science: mental models are structural analogs, not copies
Philip Johnson-Laird's mental model theory (1983) provides the cognitive science mechanism behind the map-territory relation. When you reason about a situation, you don't apply formal logical rules to abstract propositions. Instead, you construct a mental model — an internal representation whose structure corresponds to the structure of the situation it represents.
Mental models are iconic: their parts and relations map onto parts and relations in reality, much like an architect's model maps onto a building. This structural analogy is what makes them useful. You can mentally "walk through" a model of your deployment pipeline and predict where a failure might occur, because the structure of your model mirrors the structure of the real system.
But here's the critical limitation that Johnson-Laird documented: people typically construct a single mental model and reason from it as if it were complete (Johnson-Laird, 1983). They don't spontaneously generate alternative models. They don't check whether their model omits relevant possibilities. They find one representation that feels adequate and treat it as the territory.
This produces systematic reasoning errors. In deductive reasoning tasks, people draw conclusions that are valid for their current model but invalid when alternative models are considered. The error isn't in the logic. The error is in mistaking one map for the whole territory — having one schema and treating it as if it covered all cases.
Engineering reality: when maps rot
In software engineering, this phenomenon has a technical name: architectural drift. It refers to the divergence between a system's documented architecture and its actual implementation over time. Rosik et al. (2011) conducted a longitudinal case study at IBM that tracked drift in a commercial software project and found that architectural inconsistencies accumulated steadily, even in small teams with good intentions.
The drift happens because code changes daily while documentation changes quarterly — if it changes at all. Manually maintained documentation and models tend to diverge from the actual source code over time, reducing the reliability of such artifacts (Raglianti et al., 2024). The result: teams make decisions based on diagrams that no longer describe reality. New engineers onboard against a map that has drifted from the territory. Architectural reviews evaluate a system that exists only on paper.
Three categories of drift emerge in the research:
- Absence: architectural elements exist in the code but not in the documentation
- Divergence: the documentation describes elements that don't match the implementation
- Decay: the documentation was once accurate but the system evolved away from it
Each category is a specific way that a map fails to match its territory. And each one produces a specific kind of bad decision — the kind where everyone involved believes they understand the system, because they're looking at a map that used to be correct.
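When both the map and the territory are machine-readable, the first two categories can even be checked mechanically. A minimal sketch in Python — the module names and dependency maps are hypothetical stand-ins for real documentation and a real codebase, not a production tool:

```python
# Minimal sketch: diff a documented dependency map (the map) against the
# dependencies actually present in the code (the territory).
# Module names here are illustrative, not from any real system.

documented = {
    "api": {"auth", "billing"},
    "auth": {"db"},
    "reports": {"db"},   # the code no longer has a reports module
}

actual = {
    "api": {"auth", "billing", "cache"},  # cache dependency is undocumented
    "auth": {"db", "billing"},            # auth grew a billing dependency
    "cache": {"db"},                      # whole module missing from the docs
}

def drift_report(documented, actual):
    """Absence: in the code but not in the docs.
    Divergence: in the docs but not in the code.
    (Decay is temporal -- it only shows up when you diff snapshots over time.)"""
    findings = []
    for module, deps in actual.items():
        if module not in documented:
            findings.append(("absence", module, None))
        else:
            for dep in sorted(deps - documented[module]):
                findings.append(("absence", module, dep))
    for module, deps in documented.items():
        if module not in actual:
            findings.append(("divergence", module, None))
        else:
            for dep in sorted(deps - actual[module]):
                findings.append(("divergence", module, dep))
    return findings

for kind, module, dep in drift_report(documented, actual):
    print(f"{kind:10s} {module}" + (f" -> {dep}" if dep else ""))
```

Decay never appears in a single snapshot like this; it only becomes visible when reports are compared across time, which is one more reason to date every map you keep.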
This isn't limited to architecture diagrams. Every team has schemas that drift: process documentation that no longer matches how work actually flows, role descriptions that don't match what people actually do, strategic plans that no longer reflect market conditions. The territory moves. The map stays still. And the gap between them fills with assumptions.
Box's extension: all models are wrong, some are useful
George Box, the statistician, made the adjacent claim that has become equally famous: "All models are wrong, but some are useful" (Box, 1976). Where Korzybski emphasized the gap between map and territory, Box emphasized the pragmatic response to that gap.
The two ideas are complementary. Korzybski tells you why every schema fails: because abstraction necessarily discards information. Box tells you what to do about it: use the schema anyway, but never forget that it's an approximation.
This is the stance this lesson asks you to adopt — not skepticism about schemas (you'll be paralyzed without them), but what you might call schema humility. Use your models. Rely on them. Make decisions from them. But build in the habit of asking: What is my schema not capturing right now? Where has the territory shifted since I last checked?
The next lesson — L-0207, "All schemas are wrong, some are useful" — builds directly on this foundation. But before you can internalize Box's pragmatic stance, you need the prior insight: the map is not the territory. Your schema is never the thing. That gap is permanent, structural, and non-negotiable.
AI and the map-territory problem: maps training on maps
The map-territory relation takes on new urgency in the age of large language models. An LLM is trained on text — which is itself a representation of human knowledge, which is itself a representation of reality. The model is a map of a map of a map, precisely the infinite regress Bateson warned about.
Shumailov et al. (2024), in a landmark paper published in Nature, demonstrated what happens when this regress becomes recursive. When AI models are trained on data generated by other AI models, they undergo model collapse — a degenerative process where successive generations lose information about the tails of the original distribution. Rare patterns vanish. Outputs become repetitive. In one test, text about medieval architecture devolved into a list of jackrabbits by the ninth generation.
Model collapse is the map-territory problem made literal. Each generation of model produces a map. The next generation trains on that map as if it were territory. Detail disappears. Diversity shrinks. The representation eats itself.
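The dynamic is easy to reproduce in miniature. The sketch below is a toy resampling loop, not an LLM: each generation "trains" on the previous generation's output by sampling from its empirical token frequencies. With a skewed vocabulary, the rare tokens in the tail are the first to disappear:

```python
# Toy model-collapse loop: each generation samples from the previous
# generation's output, treating that map as the next territory.
# A fixed seed makes the run repeatable; the vocabulary is invented.
import random
from collections import Counter

def next_generation(corpus, size, rng):
    # "Train" on the corpus (its empirical frequencies) and "generate"
    # a new corpus by sampling from it with replacement.
    tokens = list(corpus)
    return [rng.choice(tokens) for _ in range(size)]

rng = random.Random(42)
# Territory: a skewed vocabulary with a long tail of rare tokens.
corpus = ["the"] * 500 + ["of"] * 300 + ["map"] * 150 + \
         [f"rare_{i}" for i in range(50)]  # 50 tokens appearing once each

diversity = [len(Counter(corpus))]
for generation in range(10):
    corpus = next_generation(corpus, len(corpus), rng)
    diversity.append(len(Counter(corpus)))

print(diversity)  # distinct-token count shrinks generation by generation
```

A token that fails to be sampled in any one generation is gone forever, so diversity can only fall — the same one-way information loss that Shumailov et al. observed at scale.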
This matters for anyone building AI-augmented thinking systems — what this curriculum calls a Third Brain. When you use an LLM to reason about your domain, you're asking a map to help you navigate territory it has never directly touched. The model's representation of software architecture, organizational dynamics, or market conditions is derived from text about those things, not from the things themselves.
The practical consequence: never treat AI output as territory. Treat it as another map — one that may surface patterns you missed, but that carries its own compressions, biases, and blind spots. The same schema humility you apply to your own mental models must be applied, with even more discipline, to the outputs of systems that produce fluent, confident text regardless of accuracy.
The protocol: practicing map-territory awareness
Schema humility is not an intellectual position. It's a practice. Here's how to build it into your daily cognitive work:
1. Label your maps. When you catch yourself reasoning from a model — a market analysis, a personality assessment, a technical architecture — explicitly name it as a schema. "My map of this system says X." The naming creates a gap between you and the representation, the same way L-0001's defusion technique creates a gap between you and a thought.
2. Inventory the omissions. For any schema you're about to act on, write down three things the real territory contains that your schema does not. This isn't optional — force yourself. The act of searching for omissions activates precisely the alternative-model generation that Johnson-Laird found people skip by default.
3. Date your maps. Every schema has a freshness date. An architecture diagram from six months ago is six months stale. A competitive analysis from last year is a historical document, not a strategic tool. Write the date on every schema you create. When the date is old, the map needs re-grounding.
4. Cross-reference maps. Different people build different maps of the same territory. Your schema of a project's status and your colleague's schema of the same project will diverge — and the divergence itself is information. Where two maps disagree, the territory is more complex than either one captured.
5. Touch the territory. Periodically bypass your schemas entirely and go look at the actual thing. Read the code instead of the diagram. Talk to the customer instead of reading the persona document. Observe the process instead of consulting the flowchart. Direct observation is expensive, which is why schemas exist. But schemas without periodic reality-checking are maps of a territory that no longer exists.
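If you keep schemas as artifacts — documents, diagrams, notes — steps 1, 2, 3, and 5 can be made concrete as a small record attached to each one. A sketch with illustrative names only, not a prescribed tool:

```python
# A lightweight "schema card": every schema gets a label, a creation date,
# and an explicit omissions inventory. All names here are illustrative.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class SchemaCard:
    label: str                   # step 1: name the map as a map
    created: date                # step 3: date your maps
    omissions: list = field(default_factory=list)  # step 2: what it leaves out

    def needs_regrounding(self, today, max_age_days=90):
        # step 5: past the freshness window, go touch the territory
        return (today - self.created) > timedelta(days=max_age_days)

card = SchemaCard(
    label="My map of the deployment pipeline",
    created=date(2024, 1, 15),
    omissions=["manual hotfix path", "staging config drift", "on-call overrides"],
)

assert len(card.omissions) >= 3  # step 2: force at least three omissions
print(card.needs_regrounding(today=date(2024, 6, 1)))  # True: map is stale
```

Step 4 stays a human activity: comparing two people's cards for the same territory is where the most useful disagreements surface.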
Sources
- Bateson, G. (1972). Steps to an Ecology of Mind. University of Chicago Press.
- Box, G. E. P. (1976). Science and statistics. Journal of the American Statistical Association, 71(356), 791-799.
- Johnson-Laird, P. N. (1983). Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Harvard University Press.
- Korzybski, A. (1933). Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. International Non-Aristotelian Library.
- Raglianti, R. et al. (2024). Capturing and understanding the drift between design, implementation, and documentation. Proceedings of the 32nd IEEE/ACM International Conference on Program Comprehension.
- Rosik, J., Le Gear, A., Buckley, J., Babar, M. A., & Connolly, D. (2011). Assessing architectural drift in commercial software development: A case study. Software: Practice and Experience, 41(1), 63-86.
- Shumailov, I., Shumaylov, Z., Zhao, Y., Papernot, N., Anderson, R., & Gal, Y. (2024). AI models collapse when trained on recursively generated data. Nature, 631, 755-759.