The wrong level of detail is worse than no detail at all
You have learned that everything can be nested (L-0262) — that any concept can contain sub-concepts and belong to a super-concept, forming hierarchies of arbitrary depth. That structural insight is necessary but not sufficient. Knowing that hierarchies exist does not tell you where to stand within one. And where you stand matters enormously, because the wrong level of abstraction does not merely provide less useful information. It actively interferes with thinking.
A doctor explaining a diagnosis to a patient does not describe the molecular cascades inside their cells. A CEO presenting quarterly results to the board does not walk through individual transactions. A cartographer drawing a map of a country does not render every blade of grass. In each case, too much detail would not just waste time — it would obscure the very thing the audience needs to understand. The details would become noise, drowning the signal.
But the failure works in the other direction too. A surgeon who thinks only in terms of "fixing the patient" without attending to the specific anatomy in front of them is dangerous. A programmer who designs only at the architecture level without considering the actual data structures will build systems that collapse under real workloads. Too abstract is as unhelpful as too detailed. Both fail to match the resolution of your thinking to the demands of your situation.
The central claim of this lesson: there is no universally correct level of abstraction. The right level is always relative to a purpose. And learning to select the level that serves your current purpose — rather than defaulting to the level that feels most comfortable — is one of the most consequential thinking skills you can develop.
The Goldilocks level: what cognitive science discovered about categories
In the mid-1970s, psychologist Eleanor Rosch and her colleagues at UC Berkeley conducted a series of experiments that revealed something striking about how humans naturally categorize the world. They found that people do not treat all levels of a category hierarchy equally. There is a privileged middle level — what Rosch called the "basic level" — that humans default to in perception, communication, and reasoning.
Consider a taxonomy: animal, then bird, then robin. "Animal" is the superordinate level — broad, abstract, encompassing enormous variety. "Robin" is the subordinate level — specific, detailed, narrow. "Bird" is the basic level — and it is where human cognition naturally rests.
Rosch and Mervis demonstrated that basic-level categories have the highest "cue validity" — they capture the most information with the least cognitive overhead. At the basic level, category members share the most common features. You can easily picture a bird. You can describe how you interact with a bird. But try to picture "animal" — the image is vague, because the category is too broad. Try to describe the difference between a robin and a song thrush — the distinction requires specialist knowledge, because you have zoomed in beyond the level where everyday features differentiate.
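Rosch's idea can be made concrete with a toy computation. The exemplars and feature lists below are invented for illustration (they are not Rosch's stimuli), and the score is a simple proxy for her measure: within-category similarity minus between-category similarity. In this toy data the intermediate category scores highest, matching the basic-level intuition.

```python
# Toy illustration (invented data) of why an intermediate "basic level" can
# best balance within-category similarity against between-category
# distinctiveness. Similarity is Jaccard overlap of feature sets.
from itertools import combinations

exemplars = {
    "robin_a": {"alive", "moves", "feathers", "beak", "flies", "small", "red_breast"},
    "robin_b": {"alive", "moves", "feathers", "beak", "flies", "small", "red_breast"},
    "sparrow": {"alive", "moves", "feathers", "beak", "flies", "small", "brown"},
    "dog":     {"alive", "moves", "fur", "four_legs", "barks"},
    "cat":     {"alive", "moves", "fur", "four_legs", "meows"},
}

categories = {
    "animal": {"robin_a", "robin_b", "sparrow", "dog", "cat"},  # superordinate
    "bird":   {"robin_a", "robin_b", "sparrow"},                # basic
    "robin":  {"robin_a", "robin_b"},                           # subordinate
}

def jaccard(a, b):
    return len(exemplars[a] & exemplars[b]) / len(exemplars[a] | exemplars[b])

def differentiation(cat):
    """Mean similarity inside the category minus mean similarity to outsiders."""
    inside = categories[cat]
    outside = set(exemplars) - inside
    within = [jaccard(a, b) for a, b in combinations(sorted(inside), 2)]
    between = [jaccard(a, b) for a in inside for b in outside]
    w = sum(within) / len(within) if within else 0.0
    b = sum(between) / len(between) if between else 0.0
    return w - b

# "bird" scores highest here: its members share most features, yet it
# remains clearly distinct from non-birds.
scores = {cat: round(differentiation(cat), 3) for cat in categories}
```

The superordinate loses on within-category similarity (a robin and a dog share little); the subordinate loses on distinctiveness (a robin is barely distinguishable from its sibling species).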
This is the Goldilocks principle of abstraction: not too big, not too small, but calibrated to the cognitive task. And the critical insight for this lesson is that the basic level is not fixed. It shifts depending on expertise and purpose. For an ornithologist, the basic level drops to the subordinate — they perceive and think in terms of specific species, not generic birds. For an ecologist studying food webs, the basic level rises to functional categories — predator, herbivore, decomposer. Expertise and purpose jointly determine where the Goldilocks level sits.
Rosch's research tells us something profound: the human cognitive system is not designed to operate at a single level of abstraction. It is designed to operate at whatever level maximizes useful information for the current task. When you feel that a conversation is "too abstract" or "too in the weeds," your cognitive system is telling you that the current level of abstraction does not match your current purpose. That feeling is data. Learn to trust it.
Dijkstra's layers: how computer science formalized the insight
The same principle that Rosch discovered in human cognition was independently formalized in computer science, where it became one of the foundational ideas of the discipline.
In 1968, Edsger Dijkstra described the THE multiprogramming system, designed as a series of layers, each one providing services to the layer above while hiding its internal complexity from everything else. At the lowest layer: hardware interrupts and processor allocation. Above that: memory management. Above that: operator communication. Each layer was a level of abstraction, and the entire system's manageability depended on the discipline of staying at the right layer for the right task.
Dijkstra later wrote that "the arrangement of various layers, corresponding to different levels of abstraction, is an attractive vehicle for program composition." But the attraction was not aesthetic — it was cognitive. A programmer working on the memory management layer does not need to think about hardware interrupts. A programmer working on the user interface does not need to think about memory allocation. Each layer provides exactly the level of detail needed for the work happening at that layer, and deliberately hides everything else.
This is the same insight as Rosch's basic level, expressed in engineering terms. The right level of abstraction is not the one that captures the most truth. It is the one that captures the most relevant truth for the task you are performing. Dijkstra's layers do not deny that hardware interrupts exist when you are writing user interface code. They simply recognize that hardware interrupts are not the level of abstraction that serves that particular purpose.
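The layering discipline can be sketched in a few lines. The class names and interfaces below are invented stand-ins, not THE's actual design; the point is only that each layer calls the layer directly beneath it and exposes a simpler vocabulary to the layer above.

```python
# A minimal sketch (invented interfaces) of layered design: each layer uses
# only the layer directly beneath it and hides that layer's details.

class Hardware:                      # layer 0: raw numbered pages
    def __init__(self):
        self.pages = {}
    def write_page(self, addr, data):
        self.pages[addr] = data
    def read_page(self, addr):
        return self.pages.get(addr, b"")

class Memory:                        # layer 1: named segments over raw pages
    def __init__(self, hw):
        self._hw = hw
        self._next = 0
        self._table = {}             # segment name -> page address
    def store(self, name, data):
        if name not in self._table:
            self._table[name] = self._next
            self._next += 1
        self._hw.write_page(self._table[name], data)
    def load(self, name):
        return self._hw.read_page(self._table[name])

class Console:                       # layer 2: operator messages over segments
    def __init__(self, mem):
        self._mem = mem
    def log(self, msg):
        self._mem.store("console", msg.encode())
    def last_message(self):
        return self._mem.load("console").decode()

console = Console(Memory(Hardware()))
console.log("system ready")          # caller never touches pages or segments
```

Code written against `Console` never mentions pages or segment tables; code written against `Memory` never mentions operator messages. Each layer is a level of abstraction with its own vocabulary.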
David Marr, the neuroscientist, formalized a parallel framework in 1982 for understanding information-processing systems. He proposed three levels of analysis: the computational level (what is the system trying to accomplish?), the algorithmic level (what representations and procedures does it use?), and the implementational level (what physical substrate realizes those procedures?). A complete understanding of any system requires all three levels — but any particular question about the system is best answered at one of them. Asking why the visual system detects edges is a computational question. Asking how it detects them is an algorithmic question. Asking which neurons fire during edge detection is an implementational question. Same system. Different purpose. Different level.
Maps and the discipline of deliberate omission
Cartography offers perhaps the most intuitive demonstration of purpose-driven abstraction, because maps are literally drawn at different scales for different purposes — and the process of changing scale requires deliberate decisions about what to include and what to leave out.
Cartographic generalization is the technical term for what happens when you derive a smaller-scale map from larger-scale data. When you zoom out from a city street map to a regional map, you do not simply shrink everything. You actively remove detail: individual buildings disappear, minor roads merge or vanish, neighborhood boundaries dissolve into broader district labels. This is not a loss of information. It is a transformation of information — from the level that serves a pedestrian navigating streets to the level that serves a driver planning a route across a state.
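One standard generalization technique, the Ramer-Douglas-Peucker algorithm, makes the scale-versus-detail trade-off concrete: a polyline keeps only the points that deviate from a straight-line approximation by more than a tolerance, so a larger tolerance (a smaller-scale map) retains fewer points. The coastline coordinates below are invented for illustration.

```python
# Ramer-Douglas-Peucker line simplification: the tolerance plays the role
# of map scale, deciding which wiggles survive generalization.

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def simplify(points, tolerance):
    if len(points) < 3:
        return list(points)
    # find the interior point farthest from the chord between the endpoints
    dists = [point_line_distance(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tolerance:
        return [points[0], points[-1]]            # the rest is noise at this scale
    left = simplify(points[: i + 1], tolerance)   # recurse on both halves
    right = simplify(points[i:], tolerance)
    return left[:-1] + right

coastline = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6),
             (5, 7), (6, 8.1), (7, 9), (8, 9)]
street_map = simplify(coastline, 0.05)  # fine scale: keeps small wiggles
region_map = simplify(coastline, 1.0)   # coarse scale: keeps only the big bend
```

The same source data yields two different maps, and neither is "wrong": the coarse one is exactly as detailed as a regional purpose requires.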
The critical principle in cartographic generalization is that the purpose of the map determines what survives the transformation. A topographic map and a political map of the same territory at the same scale will show different features, because they serve different purposes. The topographic map shows elevation contours and vegetation — information relevant to understanding terrain. The political map shows borders, capitals, and administrative divisions — information relevant to understanding governance. Neither map is more "accurate" than the other. Both are abstractions. Both omit enormous amounts of real information. And both are useful precisely because of what they omit.
This is the discipline that most people lack when they think at different levels of abstraction: the willingness to deliberately omit. When you summarize a complex situation for an executive, you are drawing a smaller-scale map. The temptation is to include every caveat, every exception, every nuance — to "be accurate." But a street-level map of an entire country is not accurate. It is unusable. Accuracy at the wrong scale is a form of noise.
The cartographer's question — "What does my audience need this map to do?" — is the exact question you should ask every time you choose a level of abstraction. And the answer always involves letting go of detail that is true but not currently useful.
Information scent: why people abandon the wrong level
Peter Pirolli and Stuart Card at PARC developed information foraging theory in the 1990s by observing that humans navigate information environments using strategies remarkably similar to how animals forage for food. The central concept is "information scent" — the cues that tell a forager whether the current path is likely to lead to valuable information.
What makes information foraging theory relevant to abstraction levels is its explanation of what happens when information is presented at the wrong level: people leave. When a web page is too abstract — all headings and no substance — the information scent is weak, and users navigate away. When a page is too detailed — dense technical content with no orienting structure — the information scent is also weak, because the user cannot determine whether the detail is relevant to their query. The strongest information scent comes from content that matches the user's current level of need.
This maps directly to how people respond to communication at mismatched abstraction levels. When a colleague explains something at too high a level, you feel like you are not getting anything actionable. The information scent is weak — you cannot tell whether the abstract principle applies to your specific situation. When they explain at too low a level, you feel overwhelmed and cannot extract the pattern. The information scent is again weak — you cannot tell which details matter and which are incidental.
The productive middle — strong information scent — is the level where the information matches your purpose. And the skill of a good communicator, a good teacher, a good thinker, is the ability to read the purpose of their audience and calibrate accordingly.
Strategic and tactical: the levels that organizations confuse
One of the most common and costly abstraction failures happens in organizations, where strategic thinking and tactical thinking operate at different levels but are routinely confused.
Strategic thinking operates at a high level of abstraction. It deals with direction, positioning, competitive advantage, and long-term outcomes. A strategy answers the question: "Given the landscape, where should we go and why?" Tactical thinking operates at a lower level. It deals with specific actions, resource allocation, timelines, and immediate execution. A tactic answers the question: "Given where we want to go, what do we do next?"
Both are necessary. Neither is sufficient alone. And the failure that plagues most organizations is not a lack of strategy or a lack of tactics — it is the inability to match the level of abstraction to the decision at hand. A leadership team that spends its meetings discussing implementation details is doing tactical work at a strategic altitude. They have the right people in the room for high-level decisions and are wasting them on low-level ones. A project team that spends its standups debating market positioning is doing strategic work at a tactical altitude. They have the right people for execution decisions and are distracting them with directional ones.
The fix is not to value one level over the other. It is to develop the discipline of asking, at the start of every conversation: "What level of abstraction does this decision require?" and then holding that level consistently. This is harder than it sounds, because humans naturally drift toward their comfort zone. Detail-oriented people pull conversations downward. Big-picture people pull conversations upward. The level of abstraction becomes a tug-of-war between cognitive preferences rather than a deliberate choice driven by the purpose of the discussion.
AI as abstraction navigator
Large language models introduce a genuinely new capability into the abstraction-level problem: they can rapidly re-express the same information at different levels of detail. Ask an LLM to summarize a technical paper in one sentence, one paragraph, or five pages, and it will produce reasonably competent output at each level. Ask it to explain a concept "like I am an expert" versus "like I am a beginner," and it shifts the abstraction level accordingly.
This is useful, but it carries a specific risk. When AI handles the level-shifting for you, you can lose the ability to do it yourself. The cognitive skill of selecting the right abstraction level requires practice — it requires the experience of choosing wrong, noticing the mismatch, and recalibrating. If you outsource that selection to an AI system, you get the output at the right level but you do not develop the judgment that selected it.
The productive use of AI for abstraction-level work is as a mirror, not a replacement. Generate explanations at multiple levels and then examine them: which one actually serves your purpose? Where does the high-level summary lose critical nuance? Where does the detailed version include information that distracts from the point? Use the AI-generated versions as a calibration tool for your own judgment about which level fits which purpose.
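That calibration workflow can be sketched as a small harness. `call_llm` is a hypothetical stand-in for whatever model client you use, and the level prompts are illustrative, not a tested taxonomy; the point is to generate all three levels side by side so you can judge which one serves your purpose.

```python
# A sketch of rendering one topic at several abstraction levels for
# side-by-side comparison. `call_llm` is a hypothetical callable that
# takes a prompt string and returns the model's reply.

LEVELS = {
    "superordinate": "State the single broad principle in one sentence.",
    "basic": "Explain the idea in one concrete paragraph for a general reader.",
    "subordinate": "Walk through the specific mechanism step by step.",
}

def ladder(call_llm, topic):
    """Return the same topic rendered at each abstraction level."""
    return {level: call_llm(f"{instruction}\n\nTopic: {topic}")
            for level, instruction in LEVELS.items()}
```

Reading the three outputs against your stated purpose, rather than accepting whichever one the model volunteers, keeps the level-selection judgment with you.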
There is a second productive use: AI can help you detect when you are stuck at the wrong level. If you are struggling to explain something clearly, paste your draft into an LLM and ask it to identify the level of abstraction you are operating at. Often, the problem is not unclear writing but mismatched abstraction — you are mixing strategic and tactical, or toggling between the superordinate and subordinate without settling at the basic level. Seeing the mismatch diagnosed explicitly helps you correct it.
The principle remains constant: the value of AI in abstraction-level work is proportional to the abstraction-level skill you bring to the interaction. If you can already diagnose a mismatch, AI accelerates the fix. If you cannot diagnose it, AI produces plausible-sounding output at whatever level it happens to land on, and you have no way to evaluate whether that level serves your purpose.
Protocol: purpose-driven abstraction selection
This protocol converts the selection of abstraction levels from an unconscious habit into a deliberate practice.
Step 1: Name your purpose before you choose your level. Before writing an email, starting a presentation, beginning an analysis, or entering a meeting, state in one sentence what you are trying to accomplish. "I need to diagnose why this deployment failed." "I need to convince the board to fund this initiative." "I need to help a new team member understand our architecture." The purpose statement selects the level.
Step 2: Identify the level that serves that purpose. Using Rosch's framework as a guide: Is your audience best served by the superordinate level (broad principles, categories, direction)? The basic level (functional descriptions, concrete-but-general explanations, the level where most people naturally think)? Or the subordinate level (specific details, precise mechanisms, implementation particulars)? There is no universal answer. The answer comes from matching the level to the purpose you just named.
Step 3: Hold the level consistently. Once you have selected a level, resist the pull to drift. If you are giving a strategic overview, do not get drawn into implementation details by a single question — note the question, promise to address it at the right level, and return to your altitude. If you are doing a detailed technical review, do not let the conversation float up to strategic generalities. Level discipline is a practice, not a personality trait.
Step 4: Shift levels deliberately, not reactively. When you do need to change levels — and you will — signal the shift explicitly. "Let me zoom in on that specific case for a moment." "Let me step back to the bigger picture." "That is an implementation question — let me drop down a level." Explicit transitions prevent the cognitive disorientation that comes from unmarked level changes. Your audience (or your own thinking) can follow you up and down the hierarchy if you announce where you are going.
Step 5: After the fact, audit your level choice. Did the conversation accomplish its purpose? If not, ask whether the abstraction level was wrong. Many "communication failures" are actually abstraction-level mismatches. The content was correct. The level was not.
The bridge to deliberate navigation
You now know that hierarchies contain multiple levels (L-0261), that anything can be nested within them (L-0262), and that the right level to operate at is determined by your purpose, not by some inherent property of the hierarchy itself. This is a foundational shift: the hierarchy is not a fixed structure you observe from one position. It is a navigational space you move through, and your purpose is the compass.
But knowing that purpose selects the level is not the same as being skilled at moving between levels. The next lesson — L-0264, Drill Down and Zoom Out as Thinking Operations — takes the principle you have just learned and operationalizes it. If this lesson established that you need to be at the right floor of the building, L-0264 teaches you to use the elevator. Drilling down means deliberately increasing detail to examine mechanisms, causes, and specifics. Zooming out means deliberately decreasing detail to see patterns, contexts, and relationships. Both are active thinking operations — not passive changes of perspective, but deliberate cognitive moves you execute when your purpose shifts.
The ability to select the right level of abstraction is the foundation. The ability to move between levels fluently is the skill that makes hierarchical thinking genuinely powerful.
Sources
- Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8(3), 382-439.
- Rosch, E. (1978). Principles of categorization. In E. Rosch & B. B. Lloyd (Eds.), Cognition and Categorization (pp. 27-48). Lawrence Erlbaum.
- Dijkstra, E. W. (1968). The structure of the "THE"-multiprogramming system. Communications of the ACM, 11(5), 341-346.
- Dijkstra, E. W. (1972). Notes on Structured Programming. In O.-J. Dahl, E. W. Dijkstra, & C. A. R. Hoare (Eds.), Structured Programming. Academic Press.
- Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W.H. Freeman.
- Pirolli, P., & Card, S. (1999). Information foraging. Psychological Review, 106(4), 643-675.
- Victor, B. (2011). Up and Down the Ladder of Abstraction. worrydream.com/LadderOfAbstraction.
- McMaster, R. B., & Shea, K. S. (1992). Generalization in Digital Cartography. Association of American Geographers.