Your best framework has a boundary you haven't mapped
You have schemas that work. Mental models you've tested, refined, and relied on for years. A framework for evaluating job candidates. A heuristic for estimating project timelines. A theory of what makes relationships work. These schemas didn't come from nowhere -- you built them through direct experience in specific domains, and they've earned your trust by producing good results.
Here's the problem: every schema you own was forged inside a particular context, and that context has properties you've stopped noticing. The market conditions, the team dynamics, the cultural norms, the feedback loops -- all of these shaped the schema as it formed. When you carry that schema into a different domain, you carry the assumptions of the original domain with it. And those assumptions don't announce themselves at the border.
This is the scope problem. Not that your schemas are wrong, but that they have boundaries -- and most people never map those boundaries until the schema fails in a context it was never designed for.
What cognitive science found about domain-bound knowledge
In 1981, Michelene Chi, Paul Feltovich, and Robert Glaser published one of the most influential studies in cognitive science: an examination of how physics experts and physics novices categorize problems. The study found that experts sorted problems by deep structural principles -- Newton's second law, conservation of energy -- while novices sorted by surface features like whether the problem mentioned an inclined plane or a pulley.
The critical insight wasn't just that experts think differently. It was that expert knowledge is organized into hierarchical structures that integrate surface features with causal relationships specific to a particular domain. A physics expert's schema for "conservation of energy problems" is extraordinarily powerful inside physics. But that same expert doesn't automatically possess an equally sophisticated schema for molecular biology, contract law, or organizational design. The depth of their knowledge structures in one domain says nothing about their competence in another.
This is domain specificity: the finding that expertise is built from knowledge structures that are tuned to the properties of a particular field. Your schema for evaluating startup pitches was calibrated by hundreds of pitches in a specific market. It encodes patterns about founder credibility, market timing, and business model viability that are real -- within that scope. Apply it to evaluating academic grant proposals, and most of those calibrations become noise.
The transfer problem: why knowledge stays where it was built
If knowledge transferred cleanly between domains, scope wouldn't matter. You would learn something in context A, and it would automatically improve your performance in context B. But a century of research says this almost never happens without deliberate effort.
Susan Barnett and Stephen Ceci published a landmark review in Psychological Bulletin in 2002 called "When and Where Do We Apply What We Learn?" After analyzing decades of transfer research, they proposed a taxonomy with nine dimensions along which transfer can vary: three describing the content of what's transferred, and six describing the context -- knowledge domain, physical, temporal, functional, modality, and social. Their core finding was stark: instances of far transfer -- applying knowledge from one domain to a structurally different domain -- are rare. Not impossible, but rare enough that assuming transfer will happen is a design error.
The reasons are structural. To transfer a schema from domain A to domain B, you need to do three things: recognize that your existing knowledge is relevant to the new situation, recall the right elements of that knowledge accurately, and adapt those elements to the new context's constraints. Each step is a failure point. The most common failure is the first -- you simply don't recognize that the new situation is structurally similar to something you already know, because the surface features are completely different.
This explains why a chess grandmaster doesn't automatically become a better military strategist, even though both domains involve planning, anticipating opponent moves, and managing resources under uncertainty. The schemas that make someone exceptional at chess are encoded in chess-specific patterns -- board positions, opening sequences, endgame configurations. Those patterns don't port to a battlefield, because the surface features that trigger schema activation are entirely different.
Tetlock's hedgehogs: what happens when you ignore scope
Philip Tetlock's twenty-year study of expert political judgment provides the most dramatic evidence of what happens when people apply schemas beyond their scope. Between 1984 and 2003, Tetlock collected 28,000 predictions from 284 experts across political science, economics, journalism, and government. The results were devastating: the average expert's predictions were barely better than random chance -- roughly equivalent, as Tetlock famously put it, to "a dart-throwing chimpanzee."
But the study's most important finding wasn't that experts are bad at prediction. It was that a specific kind of expert was systematically worse. Tetlock borrowed Isaiah Berlin's distinction between foxes and hedgehogs. Hedgehogs are thinkers who know "one big thing" -- they have a grand theory (Marxist, libertarian, realist) that they extend confidently into every domain. Foxes know "many little things" -- they draw from multiple frameworks, are comfortable with ambiguity, and adjust their thinking when evidence conflicts with their models.
Hedgehogs performed significantly worse than foxes, especially on long-term predictions. And the mechanism is precisely the scope problem: hedgehogs took a schema that organized knowledge well in one domain and applied it universally, without adjusting for the structural differences between domains. A Marxist lens on class struggle produces genuine insight when analyzing labor movements. Applied without adaptation to predicting currency fluctuations or election outcomes in countries with different social structures, it produces confident, well-argued predictions that are systematically wrong.
The foxes did better not because they had better schemas, but because they respected the scope of each schema. They switched frameworks when the domain changed. They held multiple models simultaneously and weighted them based on which domain they were operating in. They treated scope as a feature of every model they used.
The circle of competence: a practical scope map
Warren Buffett articulated the scope problem in investment terms in his 1996 shareholder letter: "You don't have to be an expert on every company, or even many. You only have to be able to evaluate companies within your circle of competence. The size of that circle is not very important; knowing its boundaries, however, is vital."
Charlie Munger operationalized this with brutal simplicity. He described three mental boxes: "In," "Out," and "Too Hard." Everything you encounter goes into one of these boxes. "In" means you have genuine, tested understanding -- your schemas for this domain have been validated through experience and outcomes. "Out" means you know this is beyond your scope. "Too Hard" means the domain might be within reach but you haven't done the work to validate your schemas there.
The power of this framework isn't the sorting itself. It's the honesty required to use it. Buffett and Munger built Berkshire Hathaway into one of the most valuable companies on Earth partly by refusing to invest in technology companies for decades -- not because technology was a bad investment, but because their schemas for evaluating businesses were calibrated on consumer brands, insurance, and manufacturing. They knew those schemas would produce unreliable outputs in a domain with different feedback loops, different moats, and different valuation dynamics.
Most people do the opposite. A schema that produces success in one domain creates confidence, and confidence erases the awareness that the schema has a boundary. The more successful you've been, the more dangerous this becomes -- because your track record becomes evidence (in your own mind) that your schemas are universal rather than scoped.
Scope failure in engineering: when schemas break at the boundary
Software engineers encounter the scope problem in its most concrete form when working with database schemas. A database schema is a formal structure -- tables, columns, relationships, constraints -- designed to represent a specific domain. And the ways these schemas break illustrate exactly how scope limitations operate in mental schemas.
Consider an e-commerce database schema designed to track orders. It has tables for customers, products, orders, and line items. It encodes assumptions: every order has exactly one customer, every line item references exactly one product, products have fixed prices. This schema works perfectly for a standard online store.
Now try to use that same schema for a B2B wholesale platform. Suddenly an "order" might have multiple buyers from different departments. A "product" might have negotiated pricing that varies by customer. A "line item" might reference a product that doesn't exist yet because it's custom-manufactured. Every assumption baked into the original schema -- assumptions that were invisible because they were always true in the original domain -- becomes a constraint that distorts or blocks the new domain's reality.
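The collision is easy to make concrete. Below is a minimal sketch of the order-tracking schema described above, using SQLite; the table and column names are illustrative, not taken from any real system. The foreign-key constraint encodes the assumption "every line item references an existing product" -- invisible while it was always true, and a hard failure the moment B2B reality arrives.

```python
import sqlite3

# Illustrative e-commerce schema; names are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce the schema's assumptions
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE products  (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                        price REAL NOT NULL);  -- assumption: one fixed price
CREATE TABLE orders    (id INTEGER PRIMARY KEY,
                        -- assumption: exactly one buyer per order
                        customer_id INTEGER NOT NULL REFERENCES customers(id));
CREATE TABLE line_items(order_id   INTEGER NOT NULL REFERENCES orders(id),
                        -- assumption: the product already exists in the catalog
                        product_id INTEGER NOT NULL REFERENCES products(id),
                        quantity   INTEGER NOT NULL);
""")

conn.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")
conn.execute("INSERT INTO orders VALUES (1, 1)")

# The B2B reality: a line item for a custom-manufactured product that is
# not yet in the catalog. The constraint now blocks the new domain outright.
assumption_violated = False
try:
    conn.execute("INSERT INTO line_items VALUES (1, 999, 10)")
except sqlite3.IntegrityError:
    assumption_violated = True
```

The useful part of the exercise isn't the error itself; it's that the constraint forced the hidden assumption to surface at the boundary instead of silently corrupting the data.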
This is exactly what happens with mental schemas. Your schema for "how to give feedback" was built in an environment with specific norms -- maybe a direct-communication engineering culture where blunt honesty is valued. Apply that schema in a context with different norms -- a Japanese subsidiary where direct negative feedback causes loss of face, or a creative team where emotional safety drives performance -- and the schema doesn't just underperform. It actively damages the outcomes you're trying to produce.
The engineers who avoid this failure don't build "universal" schemas. They build scoped schemas and document the assumptions. They ask: what does this schema assume about the domain? Under what conditions would those assumptions break? This is the practice that translates directly to epistemic infrastructure.
AI and the scope problem: domain adaptation as a mirror
Modern AI makes the scope problem visible in a way that's hard to ignore. A large language model trained primarily on general web text can produce fluent, seemingly knowledgeable responses about medicine, law, or engineering. But fluency is not competence. When researchers evaluate these models on domain-specific tasks, the cracks appear immediately.
A general-purpose model asked to interpret a legal contract may produce a confident analysis that misses the significance of a specific clause -- because legal text contains terms of art where a single word ("reasonable," "material," "notwithstanding") carries precise legal weight that the model's general schema for English doesn't capture. A model trained on medical literature performs measurably better on clinical reasoning tasks than a general model, not because it's "smarter" but because its schemas have been scoped to the domain's actual structure.
This is why domain adaptation -- fine-tuning a model on domain-specific data -- exists as a field. It's the engineering response to the same problem you face with your own mental schemas: general knowledge structures produce general (and often wrong) outputs when applied to specialized domains. The fix isn't a bigger general model. The fix is scoping the model's knowledge to the domain where it will operate, then testing it against that domain's actual requirements.
The parallel to your own thinking is direct. You don't need more schemas. You need to know which schemas are scoped to which domains, and you need to test them against the actual structure of each domain before trusting their outputs.
The scope audit protocol
Knowing that schemas have scope is necessary but not sufficient. You need a practice for discovering the scope of the schemas you rely on. Here is one.
Step 1: Name the schema. Pick a mental model you use regularly. Not an abstract one -- a specific one that drives decisions. "Strong teams need psychological safety." "First-mover advantage determines market winners." "People respond to incentives." Write it down as a single declarative statement.
Step 2: Trace its origin. Where did you build this schema? What domain were you operating in when it formed? What experiences validated it? Be specific: not "my career" but "the three startups I worked at between 2015 and 2020, all in B2B SaaS, all in the Bay Area, all with teams under 50 people."
Step 3: List the domain's properties. What was structurally true about that environment? Fast feedback loops? High trust? Homogeneous culture? Abundant capital? Measurable outcomes? These are the conditions your schema was optimized for.
Step 4: Identify the mismatch. Now consider a domain where you've applied this schema recently. What's structurally different? Slower feedback? Lower trust? Different incentive structures? Different cultural norms? Each structural difference is a place where your schema's outputs may be unreliable.
Step 5: Formulate the scope statement. Rewrite your schema with its scope made explicit. Not "strong teams need psychological safety" but "in small, high-trust knowledge-work teams with fast iteration cycles, psychological safety correlates with higher performance." The schema hasn't changed. But now you can see its edges.
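The five steps can be captured as a small record, which makes the audit repeatable rather than a one-off reflection. The class and method names below are invented for this sketch; the point is the structure, not the API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopedSchema:
    """A mental model with its scope made explicit (names illustrative)."""
    claim: str                    # Step 1: the schema as one declarative statement
    origin: str                   # Step 2: where and when it was built
    domain_properties: frozenset  # Step 3: structural conditions it assumes

    def mismatches(self, new_domain_properties):
        # Step 4: origin-domain conditions absent from the new domain --
        # each one a place where the schema's outputs may be unreliable.
        return sorted(self.domain_properties - set(new_domain_properties))

    def scope_statement(self):
        # Step 5: the claim rewritten with its boundary visible.
        conditions = ", ".join(sorted(self.domain_properties))
        return f"In domains with {conditions}: {self.claim}"

safety = ScopedSchema(
    claim="psychological safety correlates with higher team performance",
    origin="B2B SaaS startups, 2015-2020, teams under 50",
    domain_properties=frozenset({"fast feedback loops", "high trust",
                                 "small teams"}),
)

# Auditing the schema against a large, lower-trust enterprise environment
# that shares only one of the origin domain's structural properties.
gaps = safety.mismatches({"small teams"})
```

Each entry in `gaps` is a concrete question to investigate before trusting the schema in the new domain, which is the whole output of the audit.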
This isn't a one-time exercise. Every schema you own has a scope you haven't fully mapped. The practice is ongoing -- and it compounds. Each scope audit teaches you to see boundaries faster, which means you carry schemas into new domains with more awareness and less overconfidence.
What this makes possible
When you treat scope as a fundamental property of every schema you own, several things shift:
You stop over-generalizing from success. A schema that worked brilliantly in one domain becomes a hypothesis in another -- worth testing, not worth trusting blindly.
You become a better collaborator. Most disagreements between experts aren't about who's right. They're about which schema applies -- which is a scope question. When you can say "my model works under these conditions, and yours works under those conditions," you transform arguments into boundary-mapping exercises.
You build better AI systems. If you use AI as a thinking partner, knowing the scope of your own schemas tells you where AI outputs need the most scrutiny -- precisely the domains where your schemas are weakest or most untested.
You make fewer catastrophic errors. The worst decisions aren't made from ignorance. They're made from confidently applying a well-tested schema in a domain where its assumptions don't hold. Scope awareness is the specific skill that prevents this failure mode.
This lesson connects directly to what comes next. Once you understand that every schema has scope, you can ask the productive follow-up question: what happens when two people bring differently scoped schemas to the same problem? That's collaboration -- and it requires shared schemas, which is where we go in the next lesson.
Sources
- Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). "Categorization and Representation of Physics Problems by Experts and Novices." Cognitive Science, 5(2), 121-152.
- Barnett, S. M., & Ceci, S. J. (2002). "When and Where Do We Apply What We Learn? A Taxonomy for Far Transfer." Psychological Bulletin, 128(4), 612-637.
- Tetlock, P. E. (2005). Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press.
- Buffett, W. (1996). "Chairman's Letter." Berkshire Hathaway Annual Report.
- Munger, C. T. (1994). "A Lesson on Elementary, Worldly Wisdom." USC Business School Lecture.
- IBM Research. "What Is a Domain-Specific LLM?" IBM Think.
- Fivetran. "11 Database Schema Mistakes to Avoid." Fivetran Engineering Blog.