Your reasoning is not what you think it is
Richard Feynman kept notebooks throughout his career — not as records of his thinking, but as the thinking itself. When historian Charles Weiner examined the notebooks and referred to them as "a record of the day-to-day work," Feynman corrected him immediately: "They aren't a record of my thinking process. They are my thinking process. I actually did the work on the paper" (Gleick, 1992). This was not humility. It was a precise observation about how cognition works. The reasoning that felt continuous and complete inside Feynman's head was not yet reasoning until it existed on paper — where it could be inspected, challenged, and corrected.
Most people treat writing as a downstream activity. You think, then you write. You reason, then you record. The assumption is that the reasoning happens internally, and the writing merely captures it. This assumption is wrong, and the research across cognitive science, education, and artificial intelligence converges on the same conclusion: externalizing your reasoning chain is not documentation — it is construction. The chain you think you have inside your head is incomplete, discontinuous, and riddled with gaps you cannot see precisely because they are inside your head. Writing the chain down does not copy it. It builds it for the first time in a form where the gaps become visible.
L-0182 established that you should externalize decisions, not just information. This lesson sharpens the focus further. It is not enough to write down what you decided and why. You must write down the step-by-step reasoning chain — each premise, each inference, each connection between one claim and the next. The decision is the conclusion. The reasoning chain is the structure that supports it. And that structure, examined link by link, is where the failures hide.
The self-explanation effect: why explaining reveals gaps
In 1989, Michelene Chi and her colleagues at the University of Pittsburgh conducted a study that illuminated why externalized reasoning outperforms internal reasoning. They observed students studying worked-out physics problems — problems where the solution was already provided — and tracked what separated successful learners from unsuccessful ones. The difference was not intelligence, study time, or prior knowledge. It was self-explanation. The students who paused after each step and explained to themselves why that step followed from the previous one learned significantly more than those who simply read through the solutions. The successful students generated what Chi called "principle-based explanations" — they articulated the reasoning connecting each step rather than accepting the steps as given (Chi, Bassok, Lewis, Reimann, & Glaser, 1989).
Chi's 1994 follow-up study with eighth-grade students learning the circulatory system made the mechanism even clearer. Students prompted to generate self-explanations after each sentence of a text achieved dramatically better understanding than unprompted students. The high explainers — those who generated the most self-explanations — all achieved the correct mental model of the circulatory system. Many of the low explainers did not, even though they read the identical text. The act of explaining, step by step, forced learners to confront gaps in their understanding that passive reading concealed. For students with existing knowledge, self-explanation allowed them to repair flawed mental models. For students with less knowledge, self-explanation generated inferences that filled gaps they did not know existed (Chi, de Leeuw, Chiu, & LaVancher, 1994).
The self-explanation effect is not an educational trick. It is a cognitive mechanism. When you externalize a reasoning chain — when you force yourself to articulate why step two follows from step one, and why step three follows from step two — you are performing self-explanation on your own thinking. And the research is unambiguous: this process reveals gaps that internal reasoning conceals. The reasoning that felt complete in your head becomes visibly incomplete on paper, not because writing degraded it, but because writing made its actual structure apparent for the first time.
Rubber duck debugging: the programmer's proof
Software engineers discovered the same principle through practice rather than research, and they gave it one of the most memorable names in the history of cognitive tools: rubber duck debugging.
The practice traces to Andrew Hunt and David Thomas's 1999 book The Pragmatic Programmer, which describes a programmer who carried a rubber duck and debugged his code by explaining it to the duck, line by line. The duck, obviously, contributed nothing. The programmer contributed everything — by forcing himself to articulate what each line of code was supposed to do and how it connected to the next line, he made the logical structure of his program visible to himself. The bugs appeared at the points where his explanation broke down — where he could not articulate the connection between one step and the next because the connection did not actually exist (Hunt & Thomas, 1999).
The cognitive mechanism is identical to the self-explanation effect. When you are reading your own code silently — or thinking through your own reasoning internally — your mind skips steps. It fills gaps with assumptions. It pattern-matches from familiar structures without verifying that the current structure actually matches the pattern. System 1 thinking, in Daniel Kahneman's framework, operates automatically and efficiently — and it is systematically blind to its own errors because it optimizes for fluency rather than accuracy. The moment you must explain the code aloud — or write the reasoning chain down — you shift from System 1 to System 2. You move from fluent pattern-matching to deliberate, step-by-step construction. And deliberate construction surfaces the gaps that fluent matching conceals.
Rubber duck debugging works precisely because the duck is not a programmer. The duck cannot fill in the gaps for you. The duck cannot nod along when you wave your hands at the complicated part. You must explain every step, every connection, every transition. And the transition you cannot explain is the bug — in code, and in reasoning.
This is the principle behind externalizing your reasoning chain. You are not writing for an audience. You are explaining to the duck. The document you produce is a byproduct. The real output is the gaps you discover in the process of constructing the chain link by link.
Argument mapping: making structure visible
Tim van Gelder, a philosopher and cognitive scientist at the University of Melbourne, demonstrated in 2005 that making argument structure visually explicit produces dramatic gains in critical thinking ability. His research on argument mapping — the practice of diagramming the logical structure of an argument, showing claims, reasons, objections, and the relationships between them — found that one semester of explicit argument mapping practice produced cognitive gains equivalent to an entire undergraduate degree's worth of general critical thinking improvement (van Gelder, 2005).
The reason is structural visibility. When you hold an argument in your head, the structure is implicit. You know your conclusion and you know some reasons that support it, but the relationship between those reasons — which ones depend on each other, which ones are independent, where the argument has a single point of failure — remains invisible. Argument mapping makes that structure explicit. It forces you to distinguish between a claim and the evidence for it. It forces you to identify whether your reasons are independent (any one of them is sufficient) or co-dependent (all of them are necessary). It forces you to confront the objections that exist but that your internal reasoning conveniently overlooked.
Stephen Toulmin's model of argumentation, developed in 1958, provides the canonical framework for this kind of structural analysis. Toulmin identified six components of any argument: the claim (what you assert), the grounds (the evidence supporting it), the warrant (the principle that connects the evidence to the claim), the backing (support for the warrant itself), the qualifier (the degree of certainty), and the rebuttal (the conditions under which the claim would not hold). Most internal reasoning contains only the first two — a claim and some grounds. The warrant, the principle that makes the evidence actually relevant to the conclusion, usually goes unstated. And the unstated warrant is precisely where reasoning fails most often, because it is the assumption you never examined (Toulmin, 1958).
When you externalize a reasoning chain, you are performing informal argument mapping. Each step in your chain is a claim. The connection between steps is a warrant. And the practice of writing each step forces you to make warrants explicit — to state not just what you believe, but why you believe this step leads to the next one. The warrants you cannot articulate are the weakest links in your chain. They are also the links you will never find without externalization, because inside your head, the absence of a warrant feels identical to the presence of one.
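As a rough illustration for readers who think in code, Toulmin's six components can be modeled as a small data structure with a check for the parts left implicit. The field names and the `unstated` helper are hypothetical conveniences, not part of Toulmin's framework:

```python
from dataclasses import dataclass

@dataclass
class ToulminArgument:
    """Toulmin's six components. Names are illustrative, not a standard API."""
    claim: str            # what you assert
    grounds: str          # the evidence supporting the claim
    warrant: str = ""     # the principle connecting grounds to claim
    backing: str = ""     # support for the warrant itself
    qualifier: str = ""   # degree of certainty ("probably", "in most cases")
    rebuttal: str = ""    # conditions under which the claim would not hold

    def unstated(self) -> list[str]:
        """Return the components left implicit, where reasoning fails first."""
        parts = {"warrant": self.warrant, "backing": self.backing,
                 "qualifier": self.qualifier, "rebuttal": self.rebuttal}
        return [name for name, value in parts.items() if not value.strip()]

# Most internal reasoning supplies only the first two components:
arg = ToulminArgument(
    claim="We should reallocate budget to paid acquisition",
    grounds="Customer acquisition cost rose 40% over two quarters",
)
print(arg.unstated())  # -> ['warrant', 'backing', 'qualifier', 'rebuttal']
```

Running `unstated()` on your own arguments makes the pattern concrete: the claim and grounds come easily, and everything after them is usually blank.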
Chain of thought in AI: the mirror and the lesson
In January 2022, Jason Wei and colleagues at Google Brain published a paper that transformed how artificial intelligence systems reason. They showed that when large language models are prompted to generate a "chain of thought" — a sequence of intermediate reasoning steps rather than jumping directly to an answer — their performance on arithmetic, commonsense, and symbolic reasoning tasks improves dramatically. The technique, called chain-of-thought prompting, did not give the models new information. It forced them to externalize their reasoning, step by step, rather than producing a conclusion directly (Wei et al., 2022).
The parallel to human cognition is striking and instructive.
When a language model generates an answer without chain-of-thought prompting, it pattern-matches from its training data to produce the most likely conclusion. This works for simple problems but fails for complex ones — precisely because the intermediate reasoning steps are being compressed, skipped, or hallucinated internally. When the model is forced to generate its chain of thought explicitly, each step constrains the next step. Errors become visible within the chain. The model can catch inconsistencies between steps that would be invisible in a direct answer.
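The mechanics of the prompting itself are simple: each few-shot exemplar in the prompt includes its intermediate steps, so the model's continuation does the same. A minimal sketch of how such a prompt might be assembled, with an exemplar in the style of the paper's well-known tennis-ball example (the helper name and formatting are illustrative assumptions):

```python
# Assemble a few-shot chain-of-thought prompt in the style of Wei et al.
# (2022): each exemplar shows its intermediate reasoning steps, not just
# an answer, so the model continues in kind.

EXEMPLARS = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 cans of 3 tennis "
                    "balls each. How many tennis balls does he have now?",
        "chain": "Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
                 "5 + 6 = 11.",
        "answer": "11",
    },
]

def build_cot_prompt(question: str) -> str:
    """Prefix the target question with worked exemplars whose answers
    include the reasoning chain, not just the conclusion."""
    parts = []
    for ex in EXEMPLARS:
        parts.append(f"Q: {ex['question']}\nA: {ex['chain']} "
                     f"The answer is {ex['answer']}.")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

prompt = build_cot_prompt("A jug holds 4 liters. How many jugs for 12 liters?")
```

The only change from direct prompting is that the exemplar answers contain their reasoning; that small change is what forces the externalized chain.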
OpenAI's reasoning models — o1, o3, and their successors — have made externalized reasoning chains central to their architecture. These models "think" by generating long chains of reasoning before producing a final answer, and they improve through learning to recognize and correct their own errors within those chains. As OpenAI's research has documented, monitoring the chain of thought is far more effective at detecting reasoning failures than monitoring only the final output. The chain is where the mistakes live. The conclusion merely inherits them (OpenAI, 2024).
The lesson for human reasoning is direct: if even artificial systems that process information at superhuman speed and scale produce better reasoning when they externalize their chains, you — with your limited working memory, your cognitive biases, your motivated reasoning — certainly do too. The difference is that the AI is forced to externalize by its architecture. You must choose to externalize by practice.
And there is a deeper lesson in the comparison. AI chain-of-thought reasoning works because each externalized step creates a checkpoint — a visible claim that can be evaluated against the next step. Human externalized reasoning works the same way. Each sentence you write becomes an object you can inspect. You can ask: does this step actually follow from the previous one? What evidence supports this transition? What am I assuming here? These questions are nearly impossible to ask about internal reasoning because internal reasoning does not hold still long enough to be examined. Written reasoning holds still permanently.
The protocol: how to externalize a reasoning chain
The practice is simple in structure and demanding in execution. Here is the protocol for externalizing a reasoning chain, applicable to any decision, position, or belief you want to examine.
Step 1: State the conclusion. Write down the claim, decision, or position you hold. One sentence. Be specific — not "I think we should invest in marketing" but "I believe we should reallocate $50K from engineering hiring to paid acquisition in Q2."
Step 2: Write the first premise. What is the most foundational claim that supports your conclusion? Write it as a single, concrete statement. "Our customer acquisition cost has increased 40% over the last two quarters."
Step 3: Write the connection. Below the premise, write the warrant — the reason why this premise supports the next step in your reasoning. "This increase matters because our unit economics become negative above $85 CAC, and we are currently at $92." If you cannot write the connection, you have found a gap. Mark it and continue.
Step 4: Repeat until you reach the conclusion. Each step should connect to the next through an explicit warrant. Each warrant should be something you can defend, not something you assume. The typical chain will have four to eight steps. If yours has fewer than three, you are compressing. If it has more than ten, you may be conflating multiple arguments.
Step 5: Review for warrant quality. Go back through the chain and evaluate each warrant. Ask three questions: Is this warrant based on evidence or assumption? Have I verified it, or am I trusting my memory? Would someone who disagrees with my conclusion accept it? Any warrant that fails one of these questions is a structural weakness in your chain.
Step 6: Identify the weakest link. Every chain has one. Find the transition where the warrant is thinnest — where the connection between steps relies most heavily on assumption, feeling, or unverified belief. This is where your reasoning is most vulnerable. It is also where additional research, evidence-gathering, or perspective-seeking will have the highest return.
This protocol takes ten to twenty minutes. It does not require special tools — a blank document, a notebook, or a whiteboard will work. What it requires is honesty: the willingness to write what you actually believe at each step, rather than what sounds most defensible. Transcribing a polished argument is not externalization. Constructing the chain and discovering its structure as you write — that is the practice.
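For readers who prefer code, the protocol's bookkeeping can be sketched as a plain data structure: each warrant carries the three Step 5 checks, and the weakest link (Step 6) is the one that passes the fewest. The names and the scoring scheme are illustrative assumptions, not part of the protocol:

```python
# A reasoning chain as a list of step/warrant links. Each warrant gets
# the three yes/no checks from Step 5; the weakest link is the warrant
# that passes the fewest. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class Link:
    step: str                # a claim in the chain (Steps 2 and 4)
    warrant: str             # why this step supports the next one (Step 3)
    evidence_based: bool     # Step 5: evidence rather than assumption?
    verified: bool           # Step 5: checked, not recalled from memory?
    opponent_accepts: bool   # Step 5: would a skeptic grant it?

    @property
    def score(self) -> int:
        """Number of Step 5 checks this warrant passes (0 to 3)."""
        return sum([self.evidence_based, self.verified, self.opponent_accepts])

def weakest_link(chain: list[Link]) -> Link:
    """Step 6: the transition whose warrant passes the fewest checks."""
    return min(chain, key=lambda link: link.score)

chain = [
    Link("CAC rose 40% over two quarters",
         "Unit economics go negative above $85 CAC; we are at $92",
         evidence_based=True, verified=True, opponent_accepts=True),
    Link("Paid acquisition will lower blended CAC",
         "It worked for a competitor last year",   # a marked gap: thin warrant
         evidence_based=False, verified=False, opponent_accepts=False),
]
print(weakest_link(chain).step)  # -> Paid acquisition will lower blended CAC
```

The point is not the code but the discipline it encodes: every warrant is either written down and checkable, or it is a marked gap.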
The bridge to emotional externalization
You have now established two layers of externalization practice. L-0182 taught you to externalize decisions — the what and why of your choices. This lesson taught you to externalize reasoning chains — the step-by-step logical structure that connects premises to conclusions, where each link can be inspected and each warrant can be tested.
But reasoning is only half of what drives your actions. The other half is emotion. And emotion is, if anything, even more opaque from the inside than logic. You know you feel something. You often do not know precisely what you feel, why you feel it, or how it is influencing your reasoning. Emotion operates beneath the reasoning chain, shaping which premises you select, which evidence you weight, which conclusions feel "right." It is the invisible substrate that your reasoning chain sits on — and if you do not externalize it, your reasoning chain will contain gaps you cannot diagnose because their source is emotional, not logical.
L-0184 addresses this directly: externalize your emotional state. Where this lesson made your thinking visible, the next lesson makes your feeling visible. Together, they give you the capacity to inspect the two primary streams of internal experience that determine your actions — the logical and the emotional — rather than being driven by them in the dark.
Sources:
- Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). "Self-Explanations: How Students Study and Use Examples in Learning to Solve Problems." Cognitive Science, 13(2), 145-182.
- Chi, M. T. H., de Leeuw, N., Chiu, M., & LaVancher, C. (1994). "Eliciting Self-Explanations Improves Understanding." Cognitive Science, 18(3), 439-477.
- van Gelder, T. (2005). "Teaching Critical Thinking: Some Lessons from Cognitive Science." College Teaching, 53(1), 41-48.
- Toulmin, S. E. (1958). The Uses of Argument. Cambridge: Cambridge University Press.
- Hunt, A., & Thomas, D. (1999). The Pragmatic Programmer: From Journeyman to Master. Boston: Addison-Wesley.
- Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." Advances in Neural Information Processing Systems, 35.
- OpenAI. (2024). "Learning to Reason with LLMs." OpenAI Research Blog.
- Gleick, J. (1992). Genius: The Life and Science of Richard Feynman. New York: Pantheon Books.