You can't improve what you haven't named
In 2003, the SANS Institute published what would become one of the most repeated axioms in information security: "You can't protect what you don't know you have." The Center for Internet Security formalized this in its first two Critical Security Controls — before you configure firewalls, before you write security policies, before you do anything else: inventory your assets. Every device. Every piece of software. Every data store. Because attackers will find the assets you forgot about, and those unmanaged assets become the entry points.
The same principle applies to your cognitive architecture. You are running dozens — probably hundreds — of schemas right now. Mental models about how people behave, how markets work, how learning happens, how trust is built, how conflict resolves. These schemas govern your decisions, shape your perceptions, and filter what you notice and what you ignore. And most of them have never been named, listed, or examined.
The previous lesson established quality criteria for schemas — accuracy, predictive power, simplicity, scope. But you cannot evaluate what you haven't inventoried. You cannot apply quality criteria to schemas you don't know you have. The schema inventory is the prerequisite for every form of systematic schema maintenance that follows.
What knowledge management already knows
Organizations figured this out decades ago. Knowledge audits — systematic inventories of what an organization knows, who knows it, and where it lives — became a formal discipline in the 1990s as companies realized they were losing critical expertise every time a senior employee retired. Davenport and Prusak's Working Knowledge (1998) documented how firms like Hewlett-Packard, Xerox, and British Petroleum built knowledge inventories that tracked not just documents but tacit expertise: who understood specific processes, which teams held institutional knowledge about key clients, what decision-making heuristics had been developed through experience but never written down.
The pattern is always the same. Before the audit, leadership assumes the organization "knows what it knows." After the audit, they discover critical knowledge that exists in only one person's head, redundant expertise that could be consolidated, and — most importantly — gaps where the organization thinks it has knowledge but actually has inherited assumptions no one has tested.
Annie Duke, former professional poker player and decision strategist, applies the same logic to individual decision-making. In Thinking in Bets (2018), she argues that every decision you make is a bet — and you're placing those bets using mental models you've never audited. She advocates explicitly listing the beliefs that inform your decisions so you can assign probabilities, track outcomes, and update them over time. The inventory is what makes the updating possible.
The PARA insight: not all schemas serve the same function
Tiago Forte's PARA method — Projects, Areas, Resources, Archives — provides a structural insight that applies directly to schema inventories. Not all knowledge serves the same purpose, and organizing everything into a single flat list creates a system that's too noisy to maintain.
The same is true for schemas. You run schemas at different levels of abstraction and for different purposes:
Operational schemas govern how you execute specific tasks. "Start presentations with a story, not a thesis." "Refactor before adding features." "When someone is upset, listen before solving." These are high-frequency, domain-specific, and relatively easy to identify because they surface in daily action.
Strategic schemas govern how you make larger decisions. "Bet on trends, not current states." "Optimize for optionality when uncertain." "Relationships compound; transactions don't." These are medium-frequency and harder to identify because they operate behind the decisions rather than inside them.
Identity schemas govern how you see yourself and the world. "I'm the kind of person who figures things out." "The world rewards persistence more than talent." "People generally act in their own interest." These are the most powerful and the hardest to inventory because they feel like facts rather than schemas. You don't experience "people act in their own interest" as a model you adopted — you experience it as how the world is.
Cognitive therapy has known about this layered structure since Aaron Beck identified core beliefs in the 1960s. Beck's cognitive model distinguishes between automatic thoughts (surface-level, situation-specific), intermediate beliefs (rules and assumptions), and core beliefs (deep, global, and identity-defining). Christine Padesky's work on core belief worksheets (1994) demonstrated that making these layers explicit — literally writing them in a structured inventory — is the prerequisite for changing them. You cannot challenge a core belief you haven't surfaced. And core beliefs resist surfacing precisely because they feel like reality rather than interpretation.
The inventory is not a list. It is a registry.
There is an important difference between a list and a registry. A list is a snapshot. You write it once, and it captures a moment. A registry is a living system — entries are added, updated, deprecated, and cross-referenced over time.
In machine learning engineering, this distinction is well understood. MLflow, the open-source platform developed by Databricks, includes a model registry — a centralized store for managing the lifecycle of ML models. Each model in the registry has a name, a version history, a stage (staging, production, archived), metadata about its training data and performance metrics, and a lineage showing what it was derived from.
The model registry exists because ML teams discovered the same thing organizations discovered about knowledge: you can't govern what you haven't inventoried. Without a registry, teams deploy models they can't reproduce, run models trained on outdated data, and can't answer basic questions like "which models are we running in production right now?" The registry doesn't just list models — it tracks their state, their quality, and their relationships.
Your schema inventory should function the same way. Each entry is not just a name. It's a record:
- Schema: The model, stated as a single sentence
- Domain: Where it applies (career, relationships, health, learning, money, conflict)
- Source: Where you acquired it (experience, authority, culture, reasoning, default)
- Last tested: When you last checked this schema against reality
- Confidence: How much you'd bet on it (high / medium / low / untested)
- Status: Active, dormant, deprecated, conflicted
The "source" field is especially revealing. When you trace where a schema came from, you often discover that your most confidently held models were inherited rather than constructed — absorbed from parents, culture, early career mentors, or a single formative experience that may not generalize. The personal SWOT analysis tradition (popularized by Albert Humphrey's work at Stanford Research Institute in the 1960s) makes a similar point: listing your strengths and weaknesses forces you to distinguish between capabilities you've verified through evidence and self-perceptions you've simply carried forward without testing.
Belief mapping: techniques for surfacing invisible schemas
The hardest part of building a schema inventory is that many of your most important schemas are invisible to direct inspection. They operate below conscious awareness, shaping perception before you even realize a decision is being made.
Cognitive behavioral therapy offers several techniques for surfacing them:
The downward arrow technique. Start with a surface-level thought — "I shouldn't ask for help on this project." Ask: "If that were true, what would it mean about me?" Answer: "That I can't handle things on my own." Ask again: "And if that were true, what would it mean?" Answer: "That I'm not competent." Keep going until you hit a core belief. That core belief — "I'm not competent unless I do everything myself" — is a schema that's been governing your behavior silently. It now has a name and can be inventoried.
Pattern interruption. Track decisions that produce consistently poor outcomes. When you notice a repeated failure pattern — you always overcommit, you always avoid difficult conversations, you always choose the safe option — trace it back to the schema driving it. The failure pattern is the schema's signature. "I overcommit" often traces to "saying no means letting people down," which traces to "my value comes from being useful." Each link is a schema that belongs in the inventory.
Decision journaling. Annie Duke recommends recording decisions at the time they're made — not after outcomes are known — including the beliefs and models that informed them. Over time, the journal reveals which schemas you're actually using (as opposed to which ones you think you're using) and which schemas produce reliable results versus unreliable ones.
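A decision journal can be kept in the same structured spirit. This hypothetical sketch records each decision with the schemas behind it at decision time, then tallies which schemas sit behind decisions that worked out; the entries and field names are illustrative, not a prescribed format:

```python
from collections import defaultdict

# Hypothetical journal entries. "worked" starts as None and is filled in
# only after the outcome is known, so the beliefs are recorded untainted.
journal = [
    {"decision": "Took the stretch assignment",
     "schemas": ["bet on trends, not current states"], "worked": True},
    {"decision": "Said yes to a third committee",
     "schemas": ["my value comes from being useful"], "worked": False},
    {"decision": "Declined the rushed launch",
     "schemas": ["bet on trends, not current states"], "worked": True},
]

def schema_track_record(journal):
    """Return {schema: [wins, total]} -- which models actually pay off."""
    record = defaultdict(lambda: [0, 0])
    for entry in journal:
        if entry["worked"] is None:    # outcome not yet known; skip for now
            continue
        for s in entry["schemas"]:
            record[s][1] += 1
            record[s][0] += int(entry["worked"])
    return dict(record)

print(schema_track_record(journal))
```

Over enough entries, the gap between the schemas you think you use and the schemas that actually appear in the journal becomes visible as data.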
What changes with AI-augmented inventory
A schema inventory stored in a structured, searchable system becomes dramatically more powerful when AI can traverse it.
Without AI, you review your inventory manually — scanning entries, trying to spot patterns, relying on the same context-dependent memory that made the inventory necessary in the first place. You'll notice the schemas that are top-of-mind and miss the ones that are dormant.
With AI access to your schema inventory, several new capabilities emerge:
Contradiction detection. An AI can scan your full registry and flag schemas that conflict — "optimize for speed" alongside "never ship without thorough testing." These contradictions often hide because you apply each schema in different contexts and never hold both in working memory simultaneously. AI doesn't have that context-switching problem.
Gap identification. If your inventory has 20 schemas about career decisions and zero about health decisions, that asymmetry is meaningful. It suggests you're navigating health with default schemas you've never examined — exactly the kind of domain where unexamined schemas cause the most damage.
Source clustering. An AI can surface patterns like: "Twelve of your schemas trace back to a single mentor from your first job. Six of those are rated low-confidence." That's not a coincidence. It's an inherited worldview that deserves explicit examination.
Staleness tracking. Schemas tagged "last tested: 3 years ago" that are still governing daily decisions are the cognitive equivalent of running unpatched software in production. An AI can flag these automatically.
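Three of these checks are mechanical enough to sketch directly over a registry of plain dicts (contradiction detection needs semantic judgment, so it is left to the language model). Everything here is a hypothetical sketch, assuming entries carry the fields introduced earlier:

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical registry entries with the fields from the registry section.
registry = [
    {"schema": "Bet on trends, not current states", "domain": "career",
     "source": "mentor:first-job", "last_tested": date(2021, 3, 1), "status": "active"},
    {"schema": "Listen before solving", "domain": "relationships",
     "source": "experience", "last_tested": date(2024, 11, 5), "status": "active"},
]

def gaps(registry, expected_domains):
    """Domains with no schemas at all: likely running on unexamined defaults."""
    covered = {e["domain"] for e in registry}
    return sorted(set(expected_domains) - covered)

def source_clusters(registry):
    """How many schemas trace back to each source."""
    return Counter(e["source"] for e in registry)

def stale(registry, today, max_age=timedelta(days=365)):
    """Active schemas not tested within max_age: unpatched software in production."""
    return [e["schema"] for e in registry
            if e["status"] == "active" and today - e["last_tested"] > max_age]

print(gaps(registry, ["career", "relationships", "health", "money"]))
# → ['health', 'money']
print(stale(registry, today=date(2025, 1, 1)))
# → ['Bet on trends, not current states']
```

These are trivial queries once the registry exists, which is the point: the hard work is the inventory, not the analysis.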
This is the model registry pattern applied to human cognition. The same way an ML team needs to know which models are deployed, which are stale, and which conflict with each other — you need the same visibility into your own cognitive infrastructure.
The inventory reveals the dependencies
Once you have 20 or 30 schemas listed, something becomes apparent that was invisible before: schemas don't operate in isolation. They depend on each other.
Your schema "always hire for culture fit" depends on a deeper schema about what "culture" means. Your schema "move fast and iterate" depends on a schema about whether mistakes are recoverable. Your schema "I learn best by doing" depends on a schema about whether trial-and-error is efficient or wasteful.
These dependency chains mean that updating one schema can cascade through others. If you revise your definition of "culture" from "people like us" to "people who challenge us," your hiring schema shifts. If you discover that some mistakes are not recoverable (you've been operating in a domain where they aren't), your "move fast" schema needs revision — and everything downstream of it shifts too.
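The cascade can be made concrete with a small dependency graph. This is a hypothetical sketch (the next lesson develops the mapping properly): edges point from a schema to the deeper schemas it rests on, and we walk the graph in reverse to find everything downstream of a revision:

```python
from collections import deque

# Hypothetical edges: schema -> deeper schemas it depends on.
depends_on = {
    "always hire for culture fit": ["'culture' means people who challenge us"],
    "move fast and iterate": ["mistakes are recoverable"],
    "ship weekly": ["move fast and iterate"],
}

def downstream(changed, depends_on):
    """Every schema that transitively depends on the changed one."""
    # Invert the edges: deeper schema -> schemas built on top of it.
    dependents = {}
    for schema, deps in depends_on.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(schema)
    affected, queue = set(), deque([changed])
    while queue:
        for s in dependents.get(queue.popleft(), []):
            if s not in affected:
                affected.add(s)
                queue.append(s)
    return affected

# Revising the recoverability schema touches "move fast" and everything above it.
print(downstream("mistakes are recoverable", depends_on))
```

The breadth-first walk is the code-level version of the cascade described above: one revision at the bottom of a chain marks every schema built on top of it for re-examination.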
This is exactly what the next lesson addresses: mapping schema dependencies so that when you update one schema, you understand the cascading effects. But you can't map dependencies between schemas you haven't inventoried. The inventory comes first. The dependency map comes second. The order is not negotiable.
Start with what's operating, not what's aspirational
One final caution. When people first attempt a schema inventory, there's a strong temptation to list schemas they want to have rather than schemas they actually use. "I believe in data-driven decisions" goes in the inventory, while "I actually go with my gut and backfill the data afterward" does not.
The inventory must reflect reality, not aspiration. An aspirational inventory is worse than no inventory at all because it creates a false sense of self-knowledge. You think you've audited your cognitive infrastructure, but you've actually built a marketing brochure.
The test is behavioral: if a schema is real, you can point to three decisions in the last month that it governed. If you can't, it's aspirational — and it belongs on a different list.
Build the registry. Name the schemas. Track their sources, their confidence levels, and their last-tested dates. The inventory is not the end of the work. It is the beginning — the prerequisite for every form of systematic schema maintenance, quality evaluation, and dependency mapping that this phase will build on.
You can't improve what you haven't named. And you can't name what you haven't inventoried.