The most dangerous models are the ones you have never seen
In 1943, the Scottish psychologist Kenneth Craik published a slim, extraordinary book called The Nature of Explanation. Craik proposed that the human mind works by constructing "small-scale models" of reality — internal representations that mirror the structure of external events and allow us to anticipate what will happen next. Thinking, Craik argued, is not the abstract manipulation of symbols. It is the running of simulations on internal models that parallel the world (Craik, 1943).
Craik died in a cycling accident two years later, at thirty-one. His idea outlived him and became one of the foundational concepts in cognitive science. In 1983, the psychologist Philip Johnson-Laird formalized the theory in Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness, arguing that human reasoning does not proceed by applying logical rules to propositions. Instead, people construct mental models of the situations they are reasoning about — structural analogs of reality, similar to architects' models or physicists' diagrams — and manipulate those models to draw conclusions (Johnson-Laird, 1983).
This is not a metaphor. You are doing it right now. You hold a model of how your career works, what causes what in your relationships, why your body feels the way it does, what your team is capable of, what the market will reward, and what kind of person you are. These models are running constantly. They filter what you notice, shape what you predict, and determine what actions seem reasonable.
And nearly all of them are invisible to you.
That is the problem this lesson addresses. Not that you have mental models — having them is unavoidable, and in most cases, beneficial. The problem is that you cannot examine what you cannot see. A mental model that lives exclusively inside your head is a model you have never inspected for gaps, never tested against alternatives, never shown to another person who might see its flaws. It governs your decisions with absolute authority and zero accountability.
The fix is not to think harder. The fix is to draw.
What happens when you make a model visible
The cognitive scientist Nancy Nersessian spent decades studying how scientists actually reason — not the sanitized version in textbooks, but the messy, iterative, model-building process that produces real discoveries. Her research, culminating in Creating Scientific Concepts (2008), documented three primary forms of model-based reasoning: analogical modeling, visual modeling, and thought experimenting. In every case, the breakthrough came not from internal contemplation but from externalizing the model — sketching it, building it, diagramming it — so that it could be manipulated, challenged, and revised (Nersessian, 2008).
External models, Nersessian found, do far more than serve as memory aids. They organize cognitive activity during reasoning. They fix attention on the salient aspects of a structure. They make structural and causal constraints visible by placing related elements alongside one another in space. When Maxwell was working out the theory of electromagnetic fields, he did not solve the equations in his head and then draw the diagrams to communicate the result. He drew the diagrams to reason through the problem. The external representation was not the output of the thinking. It was the thinking.
This finding aligns with a broader body of research on drawing as a cognitive tool. A 2024 meta-analysis published in Memory & Cognition confirmed that drawing to learn outperforms writing to learn across multiple domains, because drawing forces the learner to represent spatial and relational structure that prose can skip over (Fernandes et al., 2024). When you write "A causes B," you can move on without specifying the mechanism. When you draw an arrow from A to B, the arrow sits there demanding explanation. And when C is missing from the diagram entirely, the blank space is visible in a way that a missing sentence never is.
This is the mechanism behind the primitive of this lesson: a mental model you cannot draw is a mental model you cannot examine. Drawing does not merely record a model. Drawing forces the model to become explicit — to declare its entities, its relationships, its boundaries, and its gaps. The act of externalization is an act of inspection.
The latticework and the slip-box: two architectures for externalized models
Charlie Munger — Warren Buffett's partner at Berkshire Hathaway and one of the most rigorous applied thinkers of the twentieth century — built his intellectual practice on a single architectural principle: the latticework of mental models. Munger argued that you need approximately eighty to ninety models drawn from multiple disciplines — physics, biology, psychology, economics, mathematics, engineering — and that these models must be organized not as an isolated collection but as an interconnected lattice where each model illuminates the others (Munger, 1994).
Munger's insight was not that mental models are useful. Everyone agrees on that. His insight was that models must be explicitly identified, deliberately collected, and structurally organized relative to one another. A person who holds models implicitly — who reasons from "gut feeling" or "experience" without naming the specific model they are applying — will inevitably, as Munger put it, "torture reality so that it fits" the one or two models they happen to have loaded. The latticework is an externalization strategy. It takes the invisible library of assumptions inside your head and makes it a visible, navigable, challengeable architecture.
Niklas Luhmann pursued the same principle through a different medium. The German sociologist, who published over seventy books and four hundred scholarly articles across four decades, attributed his extraordinary productivity not to genius but to his Zettelkasten — a slip-box containing approximately ninety thousand index cards, each holding a single idea, linked to other cards through a branching numbering system. Luhmann described the slip-box not as a filing system but as a "communication partner" — an externalized network of ideas that could surprise him. "It is impossible to think without writing," Luhmann wrote. "At least it is impossible in any sophisticated or networked fashion" (Luhmann, 1981).
The Zettelkasten worked because it externalized the structure of Luhmann's thinking, not just the content. Each card represented a node. Each link between cards represented a relationship. The branching numbers encoded hierarchies and sequences. Over time, the network developed emergent properties that Luhmann himself had not planned — unexpected connections between distant ideas, clusters of thought that suggested new research directions, contradictions that demanded resolution. The slip-box did not merely store Luhmann's mental models. It revealed models he did not know he had, by making the structure of his thinking visible and navigable.
Munger's latticework and Luhmann's Zettelkasten are different implementations of the same epistemic principle: externalized models compound in ways that internal models cannot. When a model lives in your head, it can only connect to other things you happen to think of simultaneously. When a model lives on paper, on a whiteboard, or in a linked knowledge system, it can connect to anything in the network — including things you wrote months or years ago and have since forgotten. Externalization turns your mental models from a private capacity into a public infrastructure.
The diagram that reveals what you are missing
There is a specific class of externalized model that deserves its own treatment: the causal loop diagram.
Donella Meadows, the systems scientist who co-authored The Limits to Growth and later wrote Thinking in Systems: A Primer, made the case that most human reasoning failures are not failures of logic but failures of structure. People think in straight lines — A causes B causes C — when the reality they are navigating is circular. Feedback loops, not linear chains, govern the behavior of most complex systems. And feedback loops are nearly impossible to reason about correctly without externalizing them (Meadows, 2008).
A causal loop diagram makes two things visible that linear thinking hides. First, it reveals reinforcing loops — cycles where an effect amplifies its own cause, producing exponential growth or collapse. The more anxious you feel about a deadline, the more you procrastinate; the more you procrastinate, the closer the deadline gets; the closer the deadline, the more anxious you feel. This loop is obvious when drawn. It is nearly invisible when experienced, because from the inside it feels like a series of separate problems rather than a single self-amplifying structure.
Second, causal loop diagrams reveal balancing loops — cycles where an effect counteracts its own cause, producing stability or oscillation. You exercise, which increases energy, which increases activity, which increases fatigue, which decreases exercise. The system naturally regulates itself, but only if you can see the full loop. Without the diagram, you notice only that your exercise routine keeps collapsing and conclude that you lack discipline. With the diagram, you see that fatigue is a structural consequence of increased activity and can be managed with recovery protocols rather than willpower.
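The qualitative difference between the two loop types can be made concrete with a toy simulation. This is a minimal sketch, not a model of any real system: the function names, coefficients, and starting values are all invented purely to illustrate that a reinforcing loop amplifies itself while a balancing loop settles toward equilibrium.

```python
def reinforcing_loop(anxiety=1.0, steps=6, gain=1.5):
    """Reinforcing (R) loop: each pass through the cycle amplifies its cause."""
    trajectory = [anxiety]
    for _ in range(steps):
        anxiety *= gain  # the effect feeds back and grows its own cause
        trajectory.append(anxiety)
    return trajectory

def balancing_loop(exercise=1.0, fatigue=0.0, steps=6):
    """Balancing (B) loop: the effect (fatigue) pushes back on its cause."""
    trajectory = [exercise]
    for _ in range(steps):
        fatigue = 0.5 * fatigue + 0.4 * exercise       # fatigue accumulates
        exercise = max(0.0, exercise + 0.3 - fatigue)  # fatigue suppresses exercise
        trajectory.append(exercise)
    return trajectory

print(reinforcing_loop())  # grows without bound: 1.0, 1.5, 2.25, ...
print(balancing_loop())    # oscillates toward a stable level: self-regulation
```

Run side by side, the two trajectories show exactly the contrast the diagram makes visible: same kind of local link (A influences B), radically different global behavior.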
Meadows emphasized that getting your model "out where it can be viewed" serves a second function beyond personal insight: it allows others to challenge your assumptions and contribute their own understanding. A mental model in your head is immune to external critique because no one else can see it. A causal loop diagram on a whiteboard is available for inspection by anyone in the room. The vulnerability is the point. The models that most need examination are the ones you are most reluctant to expose.
Concept maps: externalizing what you think you understand
Joseph Novak, an education researcher at Cornell, developed concept mapping in 1972 to study how children's understanding of science changed over time. The method is deceptively simple: write key concepts in boxes and connect them with labeled arrows that specify the relationship between each pair.
The power of concept mapping, as Novak and Alberto Cañas documented in their technical reports at the Florida Institute for Human and Machine Cognition, lies in what it forces the learner to do: specify the nature of each relationship. In prose, you can write "photosynthesis involves chlorophyll" and move on. In a concept map, you must label the arrow. Involves is not a relationship — it is a placeholder for a relationship you have not identified. The map forces you to commit: does chlorophyll enable photosynthesis? Catalyze it? Regulate it? Each label implies a different model of how the system works, and the act of choosing a label reveals whether you actually understand the relationship or merely know the vocabulary (Novak & Cañas, 2008).
Novak found that concept mapping consistently revealed gaps in understanding that traditional assessment methods missed. A student could write a correct paragraph about cellular respiration and still produce a concept map with broken connections, missing nodes, and mislabeled relationships. The paragraph succeeded because language is forgiving — it flows around gaps and fills them with implied meaning. The map failed because spatial structure is unforgiving — a missing connection is a visible hole.
This translates directly to personal epistemology. You believe you understand how your finances work, how your team collaborates, how your relationship handles conflict. Can you draw the concept map? Can you name every entity, specify every relationship, and label every arrow with a verb that captures the actual mechanism? The places where you hesitate — where the arrow has no label, where the box has no connections — are the places where your understanding is thinner than you thought.
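A concept map is just data: nodes plus labeled arrows. That makes the audit mechanical. The sketch below, with hypothetical node names, flags exactly the failure Novak identified: arrows whose labels name no actual mechanism.

```python
# Labels that connect without explaining — placeholders, not relationships.
PLACEHOLDER_LABELS = {"involves", "affects", "relates to", "is connected to"}

# A concept map as (source, label, target) triples; names are illustrative.
concept_map = [
    ("chlorophyll", "enables", "photosynthesis"),
    ("photosynthesis", "produces", "glucose"),
    ("glucose", "relates to", "cellular respiration"),  # mechanism unspecified
]

def audit_labels(edges):
    """Return the arrows whose labels are placeholders for an unknown mechanism."""
    return [(a, label, b) for a, label, b in edges
            if label in PLACEHOLDER_LABELS]

for a, label, b in audit_labels(concept_map):
    print(f"Vague arrow: {a} --[{label}]--> {b}: specify the mechanism")
```

The flagged arrow is the map's version of the hesitation described above: the place where the vocabulary is present but the understanding is not.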
AI and the Third Brain: externalizing at machine scale
The integration of artificial intelligence with knowledge externalization represents a fundamental shift in what is possible when you make your mental models visible.
Knowledge graphs — structured networks of entities and relationships stored in machine-readable format — are the industrial-scale descendant of Novak's concept maps and Luhmann's Zettelkasten. When you externalize a mental model into a knowledge graph, an AI system can do things with it that neither you nor the diagram alone can do. It can traverse the graph to find connections you did not see. It can compare your model against other models in its training data and identify where yours diverges from established understanding. It can generate questions that stress-test your model's weakest links.
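The traversal itself requires no special machinery. A minimal sketch, using plain dictionaries and an invented career model, shows how a chain that is invisible link by link becomes visible as a path through the graph:

```python
from collections import deque

# A tiny externalized model as an adjacency list; all entity names are invented.
edges = {
    "deep work": ["output quality"],
    "output quality": ["reputation"],
    "reputation": ["inbound requests"],
    "inbound requests": ["meetings"],
}

def paths(graph, start, goal):
    """Breadth-first search returning every acyclic path from start to goal."""
    queue, found = deque([[start]]), []
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            found.append(path)
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:  # skip nodes already on this path
                queue.append(path + [nxt])
    return found

# The four-step chain from focus to its own erosion, made explicit:
print(paths(edges, "deep work", "meetings"))
```

No single edge in this model says "doing good work fills your calendar," yet the path states it plainly. That is the kind of indirect connection a traversal surfaces and an internal model hides.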
The practical application is immediate. Take a mental model you have drawn. Describe it to an LLM — not in prose, but structurally: "Entity A causes Entity B. Entity B enables Entity C. Entity C blocks Entity D." Then ask: "What entities am I likely missing? What feedback loops does this structure suggest? What would a systems thinker add to this diagram?" The AI will not produce a better model than yours. It does not know your situation. But it will generate hypotheses about structural features — missing nodes, unacknowledged loops, implicit assumptions — that you can then evaluate against your actual experience.
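The "describe it structurally" step can be automated once the model exists as data. A sketch, with placeholder entities; the closing questions are the ones quoted above:

```python
# A drawn model as (source, relationship, target) triples; names are invented.
model = [
    ("savings rate", "causes", "runway"),
    ("runway", "enables", "risk tolerance"),
    ("risk tolerance", "blocks", "premature compromise"),
]

def to_structural_prompt(edges):
    """Serialize an edge list into entity-relationship sentences plus
    the stress-test questions, ready to paste into an LLM conversation."""
    lines = [f"Entity '{a}' {rel} entity '{b}'." for a, rel, b in edges]
    lines.append("What entities am I likely missing? "
                 "What feedback loops does this structure suggest? "
                 "What would a systems thinker add to this diagram?")
    return "\n".join(lines)

print(to_structural_prompt(model))
```

The point of the serialization is discipline: the prompt can only contain relationships you actually committed to on paper, so the feedback targets your model rather than a generic one.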
This is not AI replacing your thinking. It is AI extending your externalization. Clark and Chalmers, in their 1998 paper "The Extended Mind," argued that external objects can function as genuine components of cognitive processes when they play the same functional role as internal representations. A notebook is part of Otto's memory if he uses it the way Inga uses her biological memory. By the same logic, an AI system that helps you inspect, expand, and stress-test your externalized mental models is functioning as an extension of your metacognitive capacity — a Third Brain that augments not your knowledge but your ability to examine your own knowledge structures (Clark & Chalmers, 1998).
The key discipline is this: externalize first, then augment. If you ask an AI to "build a mental model of my career trajectory," you will get a generic framework that reflects the AI's training data, not your actual beliefs. If you draw your model first — with all its gaps and inconsistencies — and then ask the AI to analyze the structure you produced, you get targeted feedback on your specific model. The externalization must precede the augmentation. The drawing is the thinking.
The protocol: from invisible to inspectable
Here is the practice that converts this lesson from understanding into infrastructure.
Step 1: Identify a governing model. Choose a mental model that currently drives significant decisions. Not a theoretical model you learned in school. A model you are living by — about your health, your career, your relationships, your finances, your creative work. The more consequential the domain, the more valuable the externalization.
Step 2: Draw it in ten minutes. Use paper, a whiteboard, or a digital canvas. Boxes for entities. Arrows for relationships. Labels on every arrow specifying the nature of the relationship (causes, enables, blocks, depends on, amplifies, regulates). Do not edit while drawing. Get the first version out.
Step 3: Audit the diagram. Look for missing entities — factors you know matter but did not include. Look for unlabeled arrows — relationships where you used vague connectors like "affects" or "relates to" instead of specifying the mechanism. Look for absent feedback loops — places where effects circle back to causes. Look for single points of failure — entities where removing one node would collapse the entire model.
Step 4: Show it to someone. The most powerful test of an externalized model is another person's eyes. They will see gaps you cannot see because they do not share your assumptions. They will ask questions like "why is there no arrow between X and Y?" that reveal connections you take for granted. The discomfort of showing an incomplete model is the discomfort of genuine inspection.
Step 5: Revise and date it. Update the diagram based on what you learned. Date the revision. Store it where you can find it in six months. The first version of an externalized model is rarely correct. Its value is that it creates a visible starting point that can be iterated, whereas an internal model provides no starting point at all. A model drawn once and never revisited is better than a model never drawn. A model drawn, revised, and dated quarterly is infrastructure.
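Two of the Step 3 checks — absent feedback loops and single points of failure — are graph properties, so a diagram stored as an edge list can be audited programmatically. A sketch with an invented diagram; here a "feedback loop" is a directed cycle, and a "candidate single point of failure" is approximated as any entity touching three or more arrows:

```python
# The diagram as (source, label, target) triples; names are illustrative.
diagram = [
    ("sleep", "restores", "energy"),
    ("energy", "enables", "exercise"),
    ("exercise", "deepens", "sleep"),  # closes a feedback loop
    ("energy", "enables", "focus"),
]

def audit(edges):
    """Find directed cycles (feedback loops) and high-degree hub entities
    (candidate single points of failure) in a small edge list."""
    graph, report = {}, {"feedback_loops": [], "hubs": []}
    for a, _, b in edges:
        graph.setdefault(a, []).append(b)

    seen = set()  # dedupe rotations of the same cycle
    def dfs(node, stack):
        for nxt in graph.get(node, []):
            if nxt in stack:  # an effect circles back to its cause
                cycle = stack[stack.index(nxt):]
                if frozenset(cycle) not in seen:
                    seen.add(frozenset(cycle))
                    report["feedback_loops"].append(cycle)
            else:
                dfs(nxt, stack + [nxt])
    for start in graph:
        dfs(start, [start])

    degree = {}
    for a, _, b in edges:
        for node in (a, b):
            degree[node] = degree.get(node, 0) + 1
    report["hubs"] = [n for n, d in degree.items() if d >= 3]
    return report

print(audit(diagram))
```

The other two checks — missing entities and unlabeled arrows — still require a human or a second pair of eyes, which is exactly why Step 4 exists.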
The bridge: from models to blockers
You have now added mental model externalization to your practice — the ability to take the implicit structures governing your decisions and render them visible, inspectable, and improvable. This capacity works at the level of structure: you externalize the architecture of your thinking.
But there is a more immediate, more urgent form of externalization that Phase 10 demands: the practice of surfacing obstacles the moment they appear. When you are stuck — when your reasoning stalls, when your progress halts, when you feel the friction of something unnamed blocking your path — the instinct is to push harder. The discipline is to stop and write down what is blocking you. Naming the blocker often suggests the solution, because the act of externalization forces you to specify what was previously a vague sense of resistance.
L-0191 addresses this practice directly. Where this lesson taught you to externalize the deep structures, L-0191 teaches you to externalize the immediate obstacles — to make blockers visible before they compound.
Sources:
- Craik, K. J. W. (1943). The Nature of Explanation. Cambridge: Cambridge University Press.
- Johnson-Laird, P. N. (1983). Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Cambridge, MA: Harvard University Press.
- Nersessian, N. J. (2008). Creating Scientific Concepts. Cambridge, MA: MIT Press.
- Novak, J. D., & Cañas, A. J. (2008). "The Theory Underlying Concept Maps and How to Construct and Use Them." Technical Report IHMC CmapTools. Florida Institute for Human and Machine Cognition.
- Meadows, D. H. (2008). Thinking in Systems: A Primer. White River Junction, VT: Chelsea Green Publishing.
- Luhmann, N. (1981). "Kommunikation mit Zettelkästen: Ein Erfahrungsbericht." In H. Kieserling (Ed.), Universität als Milieu. Bielefeld: Haux.
- Munger, C. T. (1994). "A Lesson on Elementary, Worldly Wisdom as It Relates to Investment Management and Business." Lecture at USC Marshall School of Business.
- Clark, A., & Chalmers, D. (1998). "The Extended Mind." Analysis, 58(1), 7-19.
- Fernandes, M. A., Wammes, J. D., & Meade, M. E. (2024). "Drawing as a Versatile Cognitive Tool." Memory & Cognition.