You think you understand it. You don't.
Ask someone how a toilet works. They'll say something confident: you push the handle, water flows, waste goes away. Now ask them to draw the mechanism. Ask them what the float valve does. Ask them why the tank refills to a specific level and stops. Watch their confidence collapse in real time.
This is the illusion of explanatory depth, identified by Rozenblit and Keil in their landmark 2002 study at Yale. They asked participants to rate how well they understood everyday devices — zippers, toilets, cylinder locks. Ratings were high. Then they asked participants to write step-by-step explanations of how those devices actually worked. After attempting the explanation, participants re-rated their understanding — and the ratings dropped sharply. The act of decomposing their knowledge into steps forced them to confront what they did not actually know.
The effect is specific. Rozenblit and Keil found it strongest for explanatory knowledge — understanding that involves complex causal patterns. It did not appear for factual knowledge, narratives, or procedures. You know whether you can recite the capitals of Europe. You do not know whether you can explain how a supply chain works until you try to lay out every link.
This is not a quirk of mechanical devices. It is the default state of your understanding of nearly everything you think you know. And the only reliable way to surface the gaps is decomposition — breaking an idea into its constituent parts and examining each one.
The illusion operates everywhere you don't look
Consider how this plays out in knowledge work. A product manager says "we understand our users." A strategist says "our competitive advantage is clear." An engineer says "this system is straightforward." These statements feel true from the inside. The person saying them experiences genuine confidence.
But confidence is not understanding. Understanding requires that you can identify the parts, explain how they connect, and predict what happens when one part changes. If you cannot do that, you have a feeling of understanding — not the thing itself.
Feynman captured this with the statement found on his blackboard at the time of his death: "What I cannot create, I do not understand." This was not a statement about manufacturing. It was a statement about decomposition. To create something, you must know every part and every relationship between parts. If you cannot rebuild it from components, your "understanding" is a compressed summary — useful for conversation, useless for action.
During his year teaching physics in Brazil, Feynman encountered students who could recite textbook definitions perfectly but could not answer basic questions about the physical world those definitions described. They had memorized the compression without ever unpacking it. They could say the words. They could not decompose the ideas those words pointed to.
What decomposition actually does to your thinking
Decomposition is not just an organizational technique. It is a diagnostic tool for the quality of your understanding. When you break an idea into parts, three things happen simultaneously:
You discover missing pieces. Herbert Simon, in his 1962 paper "The Architecture of Complexity," argued that complex systems are nearly always organized as hierarchies — systems of subsystems, each of which can be analyzed somewhat independently. He called this property near-decomposability: interactions within a subsystem are stronger than interactions between subsystems. This is why decomposition works as a discovery method. When you break a system into subsystems, you can see the internal structure of each one. Pieces that were invisible at the higher level become obvious at the lower level.
Simon illustrated this with a parable of two watchmakers, Hora and Tempus, each assembling watches from 1,000 parts. Tempus built each watch as a single assembly — if interrupted, the entire structure fell apart and he had to start over. Hora organized his watches into stable subassemblies of about 10 parts each. When interrupted, he lost only the subassembly in progress. Hora prospered. Tempus went out of business. The lesson is not merely about efficiency. It is about what decomposition reveals: a structure of stable intermediate forms that was invisible when you looked at the whole.
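Simon backed the parable with arithmetic, and the arithmetic is worth seeing. Here is a minimal sketch in Python. The closed-form expectation is my own framing, and the inputs follow common retellings of the parable: an interruption strikes roughly once per hundred additions and scraps only the unit currently in progress.

```python
def expected_additions(n_parts: int, p_interrupt: float) -> float:
    """Expected part-additions to finish one stable unit of n_parts, when
    each addition risks an interruption that scraps the unit in progress
    (the standard expectation for a run of n consecutive successes)."""
    q = 1.0 - p_interrupt
    return (q ** -n_parts - 1) / p_interrupt

p = 0.01  # assumed: roughly one interruption per hundred additions

# Tempus: one monolithic 1,000-part assembly per watch.
tempus = expected_additions(1000, p)

# Hora: 111 stable builds of about 10 parts each
# (100 subassemblies, 10 mid-level units, 1 final watch).
hora = 111 * expected_additions(10, p)

print(f"Tempus: ~{tempus:,.0f} additions per watch")
print(f"Hora:   ~{hora:,.0f} additions per watch")
```

Under those assumptions, Tempus needs on the order of millions of additions per watch and Hora on the order of a thousand. The subassemblies are not a convenience; they are the difference between finishing and not.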
You find hidden dependencies. Kruger and Evans demonstrated this in their 2004 study on the planning fallacy. When people estimated how long a complex task would take as a single unit, they consistently underestimated. When asked to unpack the task into its component steps first, their estimates became significantly more accurate. The more complex the task, the more unpacking improved accuracy. The mechanism is straightforward: the holistic estimate skips over dependencies and transitions between steps. Decomposition forces you to see them. Each sub-step has prerequisites, each transition takes time, and none of that is visible from the top.
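The arithmetic behind unpacking is simple enough to sketch. The numbers below are invented purely for illustration; what matters is that the sub-steps and the transitions between them become items you can count, rather than details a single guess glides over.

```python
# Made-up numbers, purely illustrative: one task estimated as a whole
# versus the same task unpacked into its steps and handoffs.
holistic_estimate_hours = 4.0

steps_hours = {
    "pull the raw data":        1.5,
    "clean and reconcile it":   2.0,
    "draft the narrative":      2.5,
    "review with stakeholders": 1.0,
    "incorporate feedback":     1.5,
}
# Transitions the holistic guess never sees: waiting, handoffs, context switches.
transition_hours = 0.5 * (len(steps_hours) - 1)

unpacked_estimate = sum(steps_hours.values()) + transition_hours
print(f"Holistic guess: {holistic_estimate_hours:.1f} h")
print(f"Unpacked total: {unpacked_estimate:.1f} h")   # 8.5 + 2.0 = 10.5 h
```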
You reveal what you assumed versus what you know. This is the core function. Every time you decompose and hit a step you cannot explain, you have found an assumption masquerading as knowledge. That moment of "wait, how does that part actually work?" is the most valuable moment in the entire process. It is the exact point where your epistemic infrastructure has a gap — and now you can see it.
The cognitive science of why wholes overwhelm you
There is a reason you default to holistic understanding instead of decomposed understanding: your working memory cannot handle the whole.
John Sweller's cognitive load theory, refined over decades of research, identifies the core problem as element interactivity — the number of elements that must be processed simultaneously for comprehension. When you think about "migrating to microservices" as a single concept, the element interactivity is low: one idea, one slot in working memory. But the actual migration involves service boundaries, data ownership, API contracts, deployment pipelines, monitoring, team reorganization, and their interactions. Each of those is an element. The interactions between them are additional elements. Trying to hold all of that in working memory simultaneously is not difficult — it is impossible.
Sweller distinguishes between intrinsic cognitive load (the inherent complexity of the material) and extraneous cognitive load (complexity added by poor presentation or organization). Decomposition does not reduce intrinsic load — the migration is genuinely complex. What it does is restructure how you encounter that complexity. Instead of facing all elements simultaneously, you face subsets — each manageable within working memory's limits. The complexity is the same. Your ability to actually reason about it is transformed.
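A rough way to see the restructuring is to count pairwise interactions, a crude stand-in for element interactivity (my simplification, not Sweller's measure).

```python
from math import comb

elements = ["service boundaries", "data ownership", "API contracts",
            "deployment pipelines", "monitoring", "team reorganization"]

# Faced all at once: every element can interact with every other.
print(comb(len(elements), 2))   # 15 pairwise interactions in one sitting

# Split into two subsystems of three, examined one at a time: each pass
# asks working memory to hold only 3 interactions.
print(2 * comb(3, 2))           # 6 within-subsystem interactions, 3 per pass
```

The cross-subsystem interactions do not vanish; they are deferred to a later pass at a higher level, which is exactly what Simon's near-decomposability licenses.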
This explains why the illusion of explanatory depth feels so real. When you think about something as a whole, the low element interactivity produces a feeling of fluency. "I get it." But fluency is not comprehension. Fluency means the cognitive load is low. Comprehension means you can account for the parts. These are different things, and your brain does not distinguish between them unless you force the decomposition.
Decomposition as a discipline: from intuition to infrastructure
Project management formalized this insight decades ago. The Work Breakdown Structure (WBS), codified in the Project Management Institute's practice standards, is the discipline of decomposing project scope into progressively smaller deliverables until every piece is estimable, assignable, and trackable. In a PMI survey, 87% of project managers reported using a WBS at least half the time, and 91% were satisfied with how well it supported scope definition, cost estimation, and risk planning.
But the important finding is not that professionals use it. It is what happens when they don't. PMI research directly ties scope creep, budget overruns, poor performance, and missed deliverables to the completeness and quality of the WBS. When the decomposition is shallow or missing, complexity hides. When it is thorough, complexity surfaces and becomes manageable. The WBS does not make projects simpler. It makes the actual complexity visible so you can plan for it instead of being surprised by it.
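A toy version of the idea in code makes the mechanism concrete. This is my own sketch, not the PMI standard's notation: decompose until every leaf carries an estimate, roll the estimates up, and let the pieces nobody can estimate surface as named gaps instead of hiding inside a parent.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    name: str
    estimate_days: Optional[float] = None      # only leaves carry estimates
    children: list["Node"] = field(default_factory=list)

    def rollup(self) -> float:
        """Sum the estimates of every leaf beneath this node."""
        if not self.children:
            return self.estimate_days or 0.0
        return sum(child.rollup() for child in self.children)

    def gaps(self) -> list[str]:
        """Leaves with no estimate: where the decomposition is still shallow."""
        if not self.children:
            return [] if self.estimate_days is not None else [self.name]
        return [g for child in self.children for g in child.gaps()]

wbs = Node("Customer portal", children=[
    Node("Authentication", children=[
        Node("SSO integration", 8),
        Node("Session handling", 3),
    ]),
    Node("Billing", children=[
        Node("Invoice generation", 5),
        Node("Tax rules"),                      # nobody could estimate this yet
    ]),
])

print(wbs.rollup())   # 16.0 known days, plus whatever "Tax rules" is hiding
print(wbs.gaps())     # ['Tax rules']
```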
The same pattern appears in software debugging. George Polya's problem-solving heuristics — understand the problem, devise a plan, carry out the plan, look back — have been directly mapped to debugging methodology. The "understand the problem" step is decomposition: isolate the bug by narrowing the reproduction case, bisecting the codebase (as in git bisect), and identifying the smallest change that triggers the failure. Every senior engineer knows that fixing a bug you don't understand leads to two bugs. Decomposition is how you arrive at understanding before you act.
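The bisection step is easy to state as code. The sketch below is not git bisect itself, only the decomposition logic it automates, with a caller-supplied is_bad check standing in for "check out the commit, build, and run the smallest reproduction case."

```python
def first_bad(commits, is_bad):
    """Binary-search an ordered history for the first failing commit.
    `commits` runs oldest to newest; `is_bad(commit)` runs the reproduction
    case and reports failure. Assumes the newest commit fails and that the
    failure, once introduced, persists. Each test halves the suspects."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid            # the first bad commit is at mid or earlier
        else:
            lo = mid + 1        # the first bad commit is after mid
    return commits[lo]
```

Ten tests narrow a thousand suspect commits down to one.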
Decomposition, AI, and your Third Brain
Chain-of-thought prompting — the technique introduced by Wei et al. in their influential 2022 paper — is forced decomposition applied to artificial intelligence. Instead of asking a language model to produce an answer directly, you prompt it to reason step by step. The results were dramatic: on the GSM8K math benchmark, a 540-billion-parameter model using chain-of-thought prompting achieved state-of-the-art accuracy, far outperforming the same model answering directly.
The parallel to human cognition is not a metaphor. It is the same mechanism. When an LLM answers a complex question in one step, it compresses — just as your brain does when you think about "microservices migration" as a single concept. When forced to decompose into intermediate reasoning steps, it surfaces the hidden complexity of each step and catches errors that the compressed version misses.
This has a direct implication for how you work with AI as a thinking partner. If you hand an AI a compressed, holistic prompt — "help me fix my team's productivity" — you get a compressed, holistic answer. If you first decompose your understanding into parts, identify where your gaps are, and present those specific decomposed questions, the AI can operate on each piece with precision. Your decomposition is the prerequisite for AI's usefulness. The machine does not know where your understanding breaks down. Only you do — and only if you decompose first.
The most powerful workflow is recursive: you decompose, the AI decomposes further, you examine what surfaces, you decompose again. Each pass reveals another layer of hidden complexity that neither you nor the AI could see from the top.
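In code, the loop looks something like the sketch below. The ask parameter stands in for whatever call reaches your model, and investigate is a hypothetical helper; the shape of the recursion, not the API, is the point. You sit between the passes, reading the tree and deciding which branches deserve another level.

```python
from typing import Callable

def investigate(question: str, ask: Callable[[str], str],
                depth: int = 0, max_depth: int = 2) -> dict:
    """Decompose a question into sub-questions, recurse, and keep the whole
    tree, so you can see which piece each answer belongs to and which pieces
    still deserve another pass."""
    if depth == max_depth:
        return {"question": question, "answer": ask(question)}
    sub_questions = ask(
        "List, one per line, the sub-questions someone would need to answer "
        f"before they could answer this well:\n{question}"
    ).splitlines()
    return {
        "question": question,
        "sub": [investigate(q, ask, depth + 1, max_depth)
                for q in sub_questions if q.strip()],
    }

# `ask` is whatever call reaches your model of choice; pass it in.
# A compressed prompt is the max_depth=0 case: one question, one opaque answer.
# tree = investigate("Why did cycle time double last quarter?", ask, max_depth=2)
```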
The practice: decompose until you find uncertainty
The point of decomposition is not to produce a neat outline. It is to find the question marks — the specific points where your understanding fails. A decomposition with no question marks is either shallow or dishonest. Real ideas, real systems, and real decisions always contain components you do not fully understand. The goal is to know exactly which components those are.
This changes your relationship to complexity permanently. Before you practice decomposition, complexity feels like a wall — an undifferentiated mass that overwhelms. After you practice it, complexity becomes a map — a structured set of knowns, unknowns, and dependencies that you can navigate piece by piece.
You do not need to understand everything. You need to know what you do not understand. Decomposition is how you find out.
The natural next question is: how far down do you go? Decomposition without a stopping rule fragments everything into dust. In the next lesson, you will learn to find the smallest useful unit — the level of granularity where each piece is independently meaningful, actionable, and worth tracking on its own.