You think you understand things you don't
In 2002, psychologists Leonid Rozenblit and Frank Keil at Yale asked people to rate how well they understood everyday devices — zippers, flush toilets, cylinder locks. Participants rated themselves around 5–6 on a 7-point scale. Then they were asked to write step-by-step causal explanations of how each device actually works.
Their self-assessments dropped by 1.5 to 2 full points.
The researchers called this the illusion of explanatory depth: you feel like you understand something — the sense of comprehension is genuine — until the act of explaining it forces you to confront the gaps. The illusion only breaks when externalization is forced. When you have to produce an actual explanation rather than just feel like you could produce one.
This is the core argument for externalization. Not that it helps you remember things (that was the previous lesson). But that the act of putting a thought into words transforms it — from a vague internal sense into something precise enough to inspect, challenge, and build on.
Feynman's notebooks were not a record of his thinking
When historian Charles Weiner examined Richard Feynman's notebooks and called them "a wonderful record of his day-to-day work," Feynman corrected him:
"They aren't a record of my thinking process. They are my thinking process. I actually did the work on the paper. No, it's not a record, not really. It's working. You have to work on paper and this is the paper."
During graduate school at Princeton, Feynman created a notebook titled "NOTEBOOK OF THINGS I DON'T KNOW ABOUT." He spent weeks disassembling each branch of physics, oiling the parts, and putting them back together — "looking all the while for the raw edges and inconsistencies." The notebook wasn't a study aid. It was the mechanism by which he discovered what he actually understood versus what he only thought he understood.
The notebook forced externalization. And externalization forced precision. And precision revealed gaps that were invisible from the inside.
Why your head lies to you
When a thought lives in your head, it benefits from what you might call cognitive inflation. It feels clear. It feels complete. It feels coherent. Your brain is an unreliable narrator — it fills gaps with assumptions, smooths contradictions with rationalizations, and presents the result as a unified whole.
The moment you try to write the thought down, the inflation collapses:
- The "clear idea" has a gap you didn't notice
- The "complete plan" is missing two critical steps
- The "coherent argument" contradicts itself in paragraph three
This isn't a failure of writing. It's a success. The writing didn't create the gaps — it revealed them. They were always there. You just couldn't see them from the inside.
Michelene Chi's research on self-explanation (1989, 1994) demonstrated this directly: students who generated explanations while studying achieved dramatically higher problem-solving scores. The mechanism: self-explanation forces learners to identify and fill knowledge gaps and to construct and repair mental models. The benefit comes from the generation, not from receiving the explanation.
Psychologists call this the generation effect (Slamecka & Graf, 1978): information you produce yourself is encoded up to 40% more effectively than information you passively receive. When you write a thought down, the resulting object is not a copy of the thought. It's an upgraded version — clearer, more specific, more available for future use.
Cognition is not in the head
In 1995, cognitive scientist Edwin Hutchins published Cognition in the Wild, a study of how U.S. Navy ship navigation actually works. His finding: no single person "navigates." The navigation is a computation distributed across tools, artifacts, and people — the alidade captures a bearing, a phone talker relays it, a plotter converts it to a line on a chart. The chart itself embodies centuries of accumulated knowledge. Intelligence, Hutchins argued, is "more in the use of physical and symbolic tools, social interactions, and cultural practices than in formal abstract operations in the head of any individual."
Andy Clark and David Chalmers formalized this in their 1998 paper "The Extended Mind," asking: where does the mind stop and the rest of the world begin? Their Parity Principle: if a process in the world plays the same functional role that an internal cognitive process would play, then it is a cognitive process. Your notebook is not secondary to your thinking. It is part of the cognitive system that does the thinking.
This reframes externalization entirely. You're not "getting thoughts out of your head." You're extending your cognitive system to include artifacts that can hold more, last longer, and be inspected — things your biological working memory cannot do.
| Property      | Internal (in-head)       | External (written)    |
| ------------- | ------------------------ | --------------------- |
| Capacity      | ~4 items                 | Unlimited             |
| Duration      | Seconds                  | Permanent             |
| Precision     | Feels clear, often isn't | Forced to be specific |
| Shareability  | Zero                     | Immediate             |
| Revisability  | No                       | Yes                   |
| Debuggability | No                       | Yes                   |
The rubber duck is proof
Software developers have known this for decades. "Rubber duck debugging" — explaining your code to a rubber duck on your desk — resolves bugs with surprising frequency. The duck contributes nothing. All the cognitive work happens through articulation: translating implicit, internal understanding into explicit, external language.
The mechanism is cognitive offloading. When you explain something — even to an inanimate object — you recruit different neural resources than when you think silently. You're forced to be precise in ways that internal monologue allows you to skip. The gaps and errors that were invisible from the inside become obvious the moment you try to make them visible to something outside yourself.
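A minimal sketch of the practice, with a hypothetical bug invented for illustration (the function, its name, and the narration are mine, not from the source). The comments are the "explanation to the duck" — writing them is what surfaces the flaw:

```python
# Hypothetical example of rubber duck debugging.
# Narrating each line forces precision the silent read-through skipped.

def moving_average(values, window=3):
    """Return the moving average of `values` over `window`-sized slices."""
    averages = []
    # Duck: "I slide a starting index across the whole list..."
    for i in range(len(values)):
        # Duck: "...and average the next `window` items."
        chunk = values[i:i + window]
        # Duck: "Wait. Near the end of the list, `chunk` has fewer than
        # `window` items, so the last few 'averages' use partial windows."
        averages.append(sum(chunk) / len(chunk))
    return averages

# The fix the narration suggests: only start where a full window remains.
def moving_average_fixed(values, window=3):
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]
```

The duck never says a word. The precision the narration demands does all the work.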
Roscoe and Chi (2007, 2008) added an important nuance: the deepest learning happens when explanation forces genuine knowledge-building — inferring, elaborating, and monitoring comprehension — rather than just restating what you already know. The transformation happens in the effort to make something clear, not in the recitation.
Progressive refinement: externalization compounds
Tiago Forte's concept of progressive summarization shows what happens when externalization becomes iterative. Each pass through a captured thought is an act of transformation, not compression:
1. Capture — select what resonates from the infinite stream
2. Bold — identify the core of each passage (forces judgment about what matters)
3. Highlight — select the "best of the best" (demands hierarchy)
4. Executive Summary — restate in your own words (forces genuine understanding)
5. Remix — recreate in a new form (produces new insight)
Each layer demands more cognitive engagement. Each layer produces a different, sharper thought-object. The note that emerges from Layer 4 is not the same thought you started with. It has been refined by the acts of externalization themselves.
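As a structural sketch, assuming a hypothetical data model (the class, field, and method names are mine, not Forte's): each layer is a new, smaller object derived from the one before it, and producing it requires a fresh act of judgment rather than a copy.

```python
# Hypothetical model of progressive summarization as a layered pipeline.
from dataclasses import dataclass, field

@dataclass
class Note:
    captured: str                                          # Layer 1: raw excerpt
    bolded: list[str] = field(default_factory=list)        # Layer 2: core passages
    highlighted: list[str] = field(default_factory=list)   # Layer 3: best of the best
    summary: str = ""                                      # Layer 4: your own words
    remix: str = ""                                        # Layer 5: a new form

    def next_pass(self) -> str:
        """Name the next act of engagement this note demands."""
        if not self.bolded:
            return "bold the core of each passage"
        if not self.highlighted:
            return "highlight the best of the bolded"
        if not self.summary:
            return "restate the highlights in your own words"
        if not self.remix:
            return "recreate the summary in a new form"
        return "fully refined"
```

The point of the structure: `next_pass` always names a transformation to perform, never a passage to reread.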
Sönke Ahrens' Zettelkasten method makes the same point structurally: fleeting notes (raw capture) become literature notes (your own words, with source reference) which become permanent notes (fully developed, densely linked, written as if for someone else). The form of each note type demands a different level of cognitive engagement. And it is the engagement, not the filing, that builds understanding.
What changes when externalization becomes interactive
Every form of externalization up to now has been one-directional: you write, paper holds, you read later. But externalization into an AI system introduces a feedback loop.
A 2024 study of human-AI collaborative writing published in ACM CSCW (Wan et al.) found that participants described the experience as "like having a second mind in parallel that processed all the context and provided new ideas when requested." Unlike previous forms of externalization, AI doesn't just hold your thought — it reflects it back with connections, challenges, and reformulations you didn't see.
The progression:
- Paper — externalizing to a static medium (stabilizes the thought)
- Second brain — externalizing to an organized system (enables retrieval and progressive refinement)
- AI partner — externalizing to a system that responds, challenges, and connects
Each level demands better externalization from you. Paper accepts anything. An organized system needs some structure. An AI partner needs your thoughts expressed clearly enough for a machine to reason with them. The demand for precision increases — and so does the cognitive payoff.
Research on "context sovereignty for AI-supported learning" frames this as building external scaffolding that preserves "not just what learners know, but how they know it." When you externalize clearly enough for AI to engage with your reasoning, you are performing the highest-fidelity form of thinking-through-writing yet available.
The protocol
This isn't journaling. It's not note-taking as a school habit. It's a specific protocol for using externalization as a thinking tool:
- When stuck, write. Don't think harder. Write what you know and what you don't know. The confusion will often resolve itself as you force vague intuitions into specific words.
- When deciding, write. List options, consequences, and fears. The decision often becomes obvious once visible.
- When disagreeing, draw. Put both mental models on a surface. Point at the difference.
- When reviewing, refine. Each pass through your externalized thoughts produces a sharper version — progressive summarization in action.
The common thread: externalize before the conclusion, not after. Externalization after a decision is documentation. Externalization during a decision is thinking.
Luhmann said it most clearly: "One cannot think without writing." Not "writing helps you remember your thoughts." Writing is where the cognitive work happens. The externalized object is the thinking.