The sentence that contains a minefield
"We should hire more senior engineers."
It sounds like one idea. It feels like one idea. In a meeting, it lands as a single proposition that people either agree or disagree with. But embedded inside that sentence are at least five unstated assumptions: (1) our current problems are caused by insufficient seniority, not process or tooling; (2) senior engineers are available in our market at a price we can pay; (3) our organization can effectively onboard and retain senior talent; (4) more seniority will not create new coordination costs that offset the gains; (5) the timeline for hiring will match the timeline of the problem we are trying to solve.
If assumption three is wrong — if your onboarding is broken and senior hires churn within six months — then "hire more senior engineers" doesn't just underperform. It actively damages your team's morale, budget, and credibility. But you'll never diagnose the failure correctly, because you never separated the compound idea into its component assumptions. You'll conclude that "hiring didn't work," when the truth is that one specific hidden dependency failed while the others may have been perfectly sound.
This is the core problem with compound ideas: they don't just bundle multiple claims together. They make it impossible to learn from failure, because when the whole thing collapses, you can't tell which load-bearing assumption gave way.
Aristotle already knew this
In the fourth century BCE, Aristotle identified a pattern he called the enthymeme — a syllogism with a missing premise. Where formal logic requires every step to be stated explicitly (All humans are mortal; Socrates is a human; therefore Socrates is mortal), everyday reasoning routinely drops a premise and lets the audience fill it in unconsciously. "Socrates is a human, so he's mortal" skips the universal claim — and nobody notices, because the missing piece feels obvious.
Aristotle considered the enthymeme "the body of proof" in rhetoric, precisely because unstated premises are more persuasive than stated ones. When you supply the missing assumption yourself, you feel like you arrived at the conclusion through your own reasoning. You don't examine the gap because you don't experience it as a gap.
More than two millennia later, the philosopher Stephen Toulmin formalized this insight in The Uses of Argument (1958) as a model of argumentation that makes the hidden structure explicit. In Toulmin's framework, every argument has three core elements: a claim (what you assert), grounds (the evidence you offer), and a warrant (the assumption that connects the evidence to the claim). The warrant is the hidden dependency. In most arguments, it goes unstated. "Sales are down 20% [grounds], so we need to restructure the sales team [claim]" — the warrant, never spoken aloud, is that the sales team's structure is the cause of the decline rather than market conditions, product quality, pricing, or a dozen other possibilities. Challenge the warrant and the entire argument changes shape. But you can only challenge what you can see.
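To see how the framework makes the warrant available for challenge, here is a minimal sketch in Python; the class and the example wording are invented for illustration, not part of Toulmin's own notation.

```python
from dataclasses import dataclass

@dataclass
class ToulminArgument:
    claim: str     # what you assert
    grounds: str   # the evidence you offer
    warrant: str   # the assumption connecting grounds to claim

    def challenges(self) -> list[str]:
        """Questions you can only ask once the warrant is written down."""
        return [
            f"Is it actually true that {self.warrant}?",
            f"Even granting '{self.grounds}', does '{self.claim}' "
            f"still follow if that warrant fails?",
        ]

# The sales example from the text, with the warrant stated instead of implied.
restructure = ToulminArgument(
    claim="we need to restructure the sales team",
    grounds="sales are down 20%",
    warrant=("the team's structure, not market conditions, product quality, "
             "or pricing, caused the decline"),
)

for question in restructure.challenges():
    print(question)
```

The point of the structure is the third field: once the warrant is a named value rather than an unspoken assumption, it is something you can dispute, test, or replace.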
The software engineering parallel
Software engineers have a precise vocabulary for this problem. They call it coupling — the degree to which one module depends on the internal workings of another. In a tightly coupled system, changing one component forces changes in others, often in ways the original developer never anticipated. The dependencies are real but invisible until something breaks.
The consequences are well documented. Tightly coupled systems exhibit what engineers call a "ripple effect": a change in one module propagates unpredictably through the system, because the actual impact radius is broader than it looks. When modules silently depend on shared state or on one another's internal details, invisible dependencies stretch across module boundaries. Those dependencies make changes hard to isolate and the system hard to scale. Every modification becomes a gamble, because you cannot accurately predict what else will be affected.
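A small, contrived Python sketch of the difference; the repository and report classes are invented for illustration. The first version reaches into another object's internals, so a rename there ripples into a module its author may never look at; the second depends only on a narrow, explicit interface.

```python
from typing import Iterable, Protocol

# Tight coupling: the report depends on the repository's internal layout.
class OrderRepository:
    def __init__(self) -> None:
        self._rows = [{"total_cents": 1250}, {"total_cents": 3100}]

class RevenueReport:
    def total(self, repo: OrderRepository) -> int:
        # Reaches into a private attribute and the dict's key names.
        # Rename "_rows" or "total_cents" and this breaks, in a file the
        # repository's author never anticipated touching.
        return sum(row["total_cents"] for row in repo._rows)

# Loose coupling: the report depends only on one published method.
class OrderSource(Protocol):
    def order_totals_cents(self) -> Iterable[int]: ...

class DecoupledRevenueReport:
    def total(self, source: OrderSource) -> int:
        # The source can reorganize its storage freely, as long as it
        # keeps this one explicit interface.
        return sum(source.order_totals_cents())
```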
The parallel to compound ideas in your thinking is exact. When you hold "we should pivot to enterprise sales" as a single belief, it is a tightly coupled monolith. It depends on your assumptions about enterprise buyer behavior, your team's ability to handle longer sales cycles, your product's readiness for enterprise compliance requirements, and your runway lasting long enough to survive the transition. Change any one of those assumptions and the entire belief should change — but if they're coupled into a single compound idea, you'll resist revising the whole thing even when one component has clearly failed. The coupling makes the system fragile.
The solution in software engineering is the same as the solution in thinking: decompose into loosely coupled modules with explicit interfaces. Each assumption becomes its own unit, testable independently, replaceable without destroying the whole.
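Applied to the belief rather than the codebase, the same move might look like the sketch below; the checks are stand-ins for real evidence gathering (discovery calls, a compliance audit, a cash-flow model), not anything a program could verify on its own.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Assumption:
    statement: str
    test: Callable[[], bool]       # how you would check it, independently
    result: Optional[bool] = None  # unknown until tested

    def evaluate(self) -> bool:
        self.result = self.test()
        return self.result

# "We should pivot to enterprise sales", decomposed into its own units.
pivot = [
    Assumption("Enterprise buyers want this product",
               test=lambda: True),   # e.g. ten discovery calls said yes
    Assumption("The team can run nine-month sales cycles",
               test=lambda: False),  # e.g. nobody on the team has done it
    Assumption("The product can pass enterprise compliance review",
               test=lambda: True),
    Assumption("Runway outlasts the transition",
               test=lambda: True),
]

failed = [a.statement for a in pivot if not a.evaluate()]
# Instead of "the pivot didn't work", you learn which specific dependency
# gave way, and which ones were perfectly sound.
print("Failed assumptions:", failed)
```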
Attribute substitution: the mechanism of hiding
Daniel Kahneman and Shane Frederick formalized one of the key mechanisms by which compound ideas conceal their dependencies. In their 2002 paper "Representativeness Revisited," they described attribute substitution: when faced with a difficult question (the "target attribute"), your mind unconsciously replaces it with an easier question (the "heuristic attribute") and answers that instead — without you noticing the swap happened.
The classic example: "How happy are you with your life?" is a genuinely difficult question requiring you to weigh multiple domains — career, relationships, health, purpose, financial security. Your mind substitutes: "What's my mood right now?" The answer to the easy question becomes the answer to the hard one. The compound idea ("my life satisfaction") gets collapsed into a single, readily available signal ("my current emotional state"), and all the hidden dependencies — the career that's actually going well, the relationship that needs attention, the health issue you've been ignoring — disappear from the evaluation.
This is not a failure of intelligence. Kahneman's research shows it is a fundamental feature of how System 1 (fast, automatic, associative) processes generate judgments. The substitution happens below the threshold of awareness. You experience the answer as a response to the question you were asked, not to the simpler question your mind actually answered. The compound idea's dependencies are not merely hidden — they are actively replaced by a single proxy that feels complete.
Discovery-Driven Planning: decomposition as strategy
Rita McGrath and Ian MacMillan, writing in the Harvard Business Review in 1995, built an entire strategic planning methodology around one insight: business plans fail because compound ideas ("this venture will succeed") hide multiple testable assumptions that never get tested.
Their framework, Discovery-Driven Planning, inverts the conventional approach. Instead of building a plan and then executing it, you start by listing every assumption the plan depends on, rank them by importance and uncertainty, and then design checkpoints that test the most critical assumptions first — before you've committed significant resources.
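A sketch of that ordering step, under one invented convention: score each assumption 1 to 5 for importance and for uncertainty, then schedule checkpoints for the highest products first.

```python
# Illustrative scores on a 1-5 scale; the statements and numbers are made up.
assumptions = [
    {"statement": "Enterprise buyers will pay our target price", "importance": 5, "uncertainty": 5},
    {"statement": "The sales cycle averages under nine months",  "importance": 4, "uncertainty": 4},
    {"statement": "Compliance certification fits the timeline",  "importance": 3, "uncertainty": 2},
    {"statement": "Existing infrastructure handles the load",    "importance": 2, "uncertainty": 1},
]

# Test the most critical, least validated assumptions first.
test_order = sorted(assumptions,
                    key=lambda a: a["importance"] * a["uncertainty"],
                    reverse=True)

for rank, a in enumerate(test_order, start=1):
    print(f"Checkpoint {rank}: {a['statement']}")
```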
The key tool is what McGrath and MacMillan call the "reverse income statement": start with the financial outcome you need, work backward to the revenue and cost assumptions required to produce it, and then ask which of those assumptions you've actually validated versus which you're simply hoping are true. The difference between "planning" and "discovery-driven planning" is the difference between treating a compound idea as a single bet and treating it as a portfolio of independent hypotheses.
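A toy reverse income statement, with made-up numbers: start from the profit the venture must deliver and walk backward until every operating assumption that target silently requires is on the page.

```python
# Every number here is an assumption to validate, not a fact.
required_profit = 2_000_000      # the outcome the venture must deliver
assumed_net_margin = 0.10

required_revenue = required_profit / assumed_net_margin         # 20,000,000
assumed_price_per_unit = 400
required_units = required_revenue / assumed_price_per_unit      # 50,000 units/yr

assumed_close_rate = 0.05
required_qualified_leads = required_units / assumed_close_rate  # 1,000,000 leads/yr

print(f"{required_profit:,.0f} in profit requires {required_units:,.0f} units sold, "
      f"which requires {required_qualified_leads:,.0f} qualified leads per year.")
# Which of these (margin, price, close rate) have we actually validated,
# and which are we simply hoping are true?
```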
This is assumption mapping applied to strategy. And the reason it works is the same reason decomposition works in any domain: compound ideas that stay compound cannot be tested, debugged, or improved. They can only succeed or fail as monoliths — and when they fail, they teach you nothing about why.
Technical debt: when hidden dependencies compound over time
Ward Cunningham coined the term "technical debt" in 1992 to describe what happens when compound decisions accumulate without decomposition. His original metaphor was precise: "Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt."
The mechanism is directly analogous to compound ideas in your thinking. Each unexamined compound decision adds a small amount of hidden dependency to the system. Individually, each one is manageable. But they compound. The interest accumulates. Over time, the system becomes increasingly fragile — not because any single decision was catastrophic, but because the hidden dependencies between dozens of compound decisions create interaction effects that no one can predict or trace.
This is exactly what happens in your epistemic infrastructure when you carry undecomposed beliefs. Each compound idea you hold without examination adds hidden dependencies to your worldview. "I'm good at my job" might bundle together "I produce high-quality work," "my colleagues respect me," "my skills are current," and "my organization values what I do." If the fourth assumption erodes — the organization shifts priorities — the compound idea shatters, and you experience it as a crisis rather than as useful information about one specific dependency that changed.
The accumulated cost of unexamined compound ideas is epistemic debt. And like technical debt, the interest comes due at the worst possible time.
AI as assumption unpacker
Large language models offer a genuinely new tool for decomposing compound ideas — but only if you understand both their capability and their limitation.
The capability is real. When you present a compound statement to an LLM and ask "what assumptions does this depend on?", the model can surface unstated premises with remarkable breadth. It draws on patterns across vast training corpora to identify the kinds of assumptions that typically hide inside strategic statements, technical proposals, or personal beliefs. Chain-of-thought prompting — asking the model to reason step by step — was shown by Wei et al. (2022) to substantially improve reasoning performance precisely because it forces the model to make intermediate steps explicit rather than jumping from premise to conclusion.
But recent research reveals an important limitation. A 2024 study on Model-First Reasoning found that standard chain-of-thought approaches still frequently introduce "unstated actions or inferred missing information" — the model itself generates enthymemes, skipping assumptions rather than surfacing them. The researchers proposed that hallucinations in LLMs are not merely false statements but "a symptom of reasoning performed without a clearly defined model of the problem space." In other words, the AI has the same vulnerability as human reasoning: when the structure of the problem is implicit rather than explicit, hidden dependencies slip through.
The practical implication: AI is most powerful as an assumption unpacker when you provide the structure. Write your compound idea explicitly. Ask the model to decompose it. Then critically evaluate whether the decomposition is complete — because the model may have introduced its own hidden assumptions in the process. The tool amplifies your decomposition ability, but it does not replace the discipline of looking for what is missing.
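In practice that can be as simple as the sketch below; the prompt wording is an invented template, and call_llm stands in for whichever model client you actually use, since the structure of the request matters more than the vendor.

```python
from typing import Callable

# An invented prompt template for decomposition; adapt the wording to taste.
DECOMPOSE_PROMPT = """\
Here is a compound statement I believe or am considering:

"{statement}"

Step by step, list every unstated assumption this statement depends on.
For each assumption give:
1. The assumption itself, as a single sentence.
2. How I could test it independently of the others.
3. What changes about the original statement if it turns out false.
Do not judge whether the statement is correct; only surface what it assumes.
"""

def unpack(statement: str, call_llm: Callable[[str], str]) -> str:
    """call_llm is any function that sends a prompt to a model and returns text."""
    draft = call_llm(DECOMPOSE_PROMPT.format(statement=statement))
    # The model can skip assumptions of its own (its own enthymemes), so treat
    # the output as a checklist to audit, not a finished decomposition.
    return draft
```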
From compound to atomic
In L-0024, you learned to find the smallest useful unit — the level of decomposition where each piece is independently meaningful. This lesson explains why that decomposition matters: because compound ideas hide dependencies, and hidden dependencies make your thinking fragile, untestable, and resistant to learning.
The compound idea is not wrong. "We should expand into Europe" might be exactly right. But you cannot know that — and more importantly, you cannot learn from partial failure — until you separate the idea into its component assumptions and evaluate each one independently.
Every belief you hold is a candidate for this decomposition. Not because you should be paralyzed by analysis, but because the five minutes you spend mapping dependencies now will save you months of acting on a monolithic assumption that hides a single fatal flaw.
In L-0026, you'll see what becomes possible once your ideas are atomic: recombination. When each assumption stands on its own, you can rearrange, substitute, and compose ideas in ways that compound monoliths never permit. The decomposition is not the end — it is the precondition for building something better.
But first, you have to see the dependencies. And seeing them requires one discipline: refusing to let a single sentence do the work of five.