Nobody on the ship knows where the ship is
In 1995, cognitive scientist Edwin Hutchins published Cognition in the Wild, a study of how navigation teams on US Navy vessels bring ships into harbor. His central finding upended a basic assumption about intelligence: no single person on the bridge holds a complete representation of the ship's position. The knowledge is distributed across people, tools, and procedures — and the team's cognitive output exceeds what any individual could produce alone.
Here is how a fix cycle works. Every three minutes, pelorus operators stationed on opposite sides of the ship sight landmarks through optical instruments and call out compass bearings. A bearing recorder logs the numbers. A plotter transfers those bearings onto a nautical chart, where the intersecting lines reveal the ship's position. The officer of the deck integrates that position with knowledge of currents, traffic, and the harbor approach plan. Each person holds a fragment. The shared schema — the agreed-upon procedure for how bearings become positions become decisions — is what turns those fragments into navigation.
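The geometry the plotter performs can be sketched in a few lines of code. This is an illustration, not Navy procedure: it assumes a flat east/north plane, hypothetical landmark coordinates, and bearings measured in degrees clockwise from north, from the ship toward each landmark. The ship's position is where the two lines of position intersect.

```python
import math

def fix_position(landmark1, bearing1, landmark2, bearing2):
    """Intersect two lines of position on a flat (east, north) plane.

    Each bearing is the compass direction (degrees clockwise from north)
    from the ship to a landmark, so the ship lies on the line through
    that landmark pointing back along the bearing.
    """
    d1 = (math.sin(math.radians(bearing1)), math.cos(math.radians(bearing1)))
    d2 = (math.sin(math.radians(bearing2)), math.cos(math.radians(bearing2)))
    dx = landmark1[0] - landmark2[0]
    dy = landmark1[1] - landmark2[1]
    det = d2[0] * d1[1] - d1[0] * d2[1]
    if abs(det) < 1e-9:
        raise ValueError("bearings are nearly parallel: no usable fix")
    r1 = (d2[0] * dy - d2[1] * dx) / det  # distance from ship to landmark 1
    # Step back from landmark 1 along its bearing to reach the ship.
    return (landmark1[0] - r1 * d1[0], landmark1[1] - r1 * d1[1])

# A ship at the origin sights a lighthouse due north and a tower due east.
print(fix_position((0.0, 5.0), 0.0, (5.0, 0.0), 90.0))  # ≈ (0.0, 0.0)
```

Note what the code cannot capture: in Hutchins's account, no one person runs this whole computation. The sighting, the recording, the plotting, and the interpretation are split across people, and the procedure itself carries the coordination.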
Hutchins called this distributed cognition: the idea that cognitive processes do not live exclusively inside individual heads but are spread across people, artifacts, and environments. The chart is not just a recording device; it is part of the cognitive system. The bearing-recording form is not just paperwork; it propagates representational state from one person to the next. The shared schema is the infrastructure that makes all of it cohere.
This is not a metaphor. It is a literal description of how collaboration works at every scale — from a Navy bridge to a product team to a research lab to two people planning a dinner party. Teams that share mental models coordinate better than teams that do not. And teams that assume they share mental models without ever making those models explicit are the ones that run aground.
What shared mental models actually predict
The intuition that shared understanding helps teams is ancient. The research that proves it — and specifies exactly how — is more recent.
In 1993, Janis Cannon-Bowers, Eduardo Salas, and Sharolyn Converse published the foundational framework for shared mental models in teams. They argued that effective team performance requires members to hold overlapping cognitive representations across four domains: the equipment model (how the tools work), the task model (what the procedures are), the team interaction model (who communicates with whom, and when), and the team model (who knows what, and who is good at what). When these four models overlap sufficiently across team members, coordination becomes implicit — people anticipate each other's needs without being told.
A meta-analysis by Leslie DeChurch and Jessica Mesmer-Magnus (2010) examined 23 independent empirical studies and confirmed the relationship: greater convergence on both task-based and team-based mental models predicted better team processes and better team performance. The critical nuance was that only measurement methods capturing the structure of shared knowledge — not just whether people agreed on facts — predicted outcomes. It is not enough to know the same things. You must organize that knowledge in compatible ways.
This explains a phenomenon every experienced collaborator has seen: two people with identical information who still cannot coordinate. They know the same facts, but their schemas for how those facts relate — what causes what, what matters most, what to do first — are different. The information is shared. The schema is not. And it is the schema that drives behavior.
The knowledge creation spiral
If shared schemas are so powerful, how do they form? Ikujiro Nonaka and Hirotaka Takeuchi provided the most influential answer in The Knowledge-Creating Company (1995). Their SECI model describes four modes of knowledge conversion that spiral between individuals and groups, between tacit knowledge (what you know but cannot easily articulate) and explicit knowledge (what can be written down and transmitted).
Socialization is tacit-to-tacit transfer. You watch a senior engineer debug a production incident. You absorb their pattern of reasoning — what they check first, what they ignore, how they narrow the search space — not through documentation but through shared experience. This is how apprenticeship works. You acquire a schema you cannot yet name.
Externalization is tacit-to-explicit conversion. The senior engineer writes a post-mortem template, articulating the debugging schema they use: "First, check whether the error is new or recurring. Then, isolate the deployment. Then, reproduce in staging." What lived in one person's intuition now exists as an explicit, transferable representation.
Combination is explicit-to-explicit synthesis. The debugging template is combined with the team's incident severity matrix and the on-call runbook. Individual explicit schemas merge into a larger, integrated body of organizational knowledge.
Internalization is explicit-to-tacit absorption. A new team member reads the runbook, follows it during their first on-call shift, and gradually internalizes the schema until they no longer need the document. It has become part of how they think.
The power of the SECI model is that it describes schema-sharing not as a one-time event but as a continuous spiral. Each cycle through the four modes deepens the shared schema and expands it to more people. Organizations that support all four modes — that create space for apprenticeship, documentation, synthesis, and practice — build richer shared schemas faster than organizations that rely on only one or two modes.
Most teams over-index on combination (merging documents, aggregating information) and under-invest in socialization (shared experience) and externalization (articulating tacit knowledge). This is why teams with excellent documentation sometimes still struggle to coordinate: they have combined explicit knowledge but never externalized the tacit schemas that actually drive decision-making.
Your organization's schema is your architecture
In 1968, Melvin Conway submitted a paper titled "How Do Committees Invent?" to the Harvard Business Review. The editors rejected it on the grounds that he had not proved his thesis. He published it in Datamation instead. Fred Brooks later cited it in The Mythical Man-Month and gave it the name that stuck: Conway's Law.
The law states: "Organizations which design systems are constrained to produce designs which are copies of the communication structures of those organizations."
This is schema theory applied to organizational design, whether Conway knew it or not. The communication structure of an organization is its shared schema — the implicit model of who talks to whom, who owns what, and how decisions flow. And that schema shapes the technical systems the organization produces. A company with four isolated teams will produce a system with four isolated modules. A company where everyone talks to everyone will produce a monolith. The schema propagates into the artifact.
The practical consequence, recognized by software architects like Martin Fowler, is the Inverse Conway Maneuver: if you want a particular system architecture, you must first build the organizational schema that would produce it. Want autonomous microservices? Build autonomous teams. Want tight integration? Build tight communication channels. The shared schema comes first. The architecture follows.
Eric Evans made this same insight operational in Domain-Driven Design (2003). His concept of ubiquitous language is, at its core, a shared schema between developers and domain experts. When the product team says "order" and the engineering team says "order," do they mean the same thing? In most organizations, they do not. The product team means a customer's intent to purchase. The engineering team means a row in the database. The word is shared. The schema behind it is not.
Evans argued that the ubiquitous language must be rigorously maintained: every term used in conversation, in documentation, and in source code must mean the same thing to everyone involved. The names of classes, methods, and database tables should reflect the shared schema, not the engineering team's private jargon. When the language fractures, the schemas diverge, and the system begins to reflect that divergence — in bugs, in missed requirements, in features that solve the wrong problem.
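The "order" divergence can be made visible in code. This is a hypothetical sketch, not Evans's own example: the class names are invented to show how the same word hides two schemas, and how a ubiquitous-language model forces one shared definition.

```python
from dataclasses import dataclass
from datetime import datetime

# What the product team means by "order": a customer's intent to purchase.
@dataclass
class PurchaseIntent:
    customer_id: str
    items: list[str]

# What the engineering team means by "order": a row in the database.
@dataclass
class OrderRow:
    id: int
    status: str
    created_at: datetime

# A ubiquitous-language model picks one shared definition: an Order is a
# customer's confirmed intent, with a lifecycle named in domain terms.
# Persistence becomes a detail of the lifecycle, not the identity.
@dataclass
class Order:
    customer_id: str
    items: list[str]
    placed_at: datetime
    status: str = "placed"  # placed -> paid -> shipped

order = Order(customer_id="c42", items=["chart", "compass"],
              placed_at=datetime(2024, 1, 15))
print(order.status)  # "placed"
```

The point is not which definition wins. The point is that until the word appears as one class with one meaning, two incompatible schemas share a name and the divergence stays invisible.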
How misaligned schemas silently destroy collaboration
Schema misalignment is uniquely dangerous because it is invisible. When one person lacks information the other has, the gap is usually obvious: someone asks a question and the other cannot answer it. But when two people hold different schemas, they can talk fluently, agree on a plan, and walk away with completely different expectations for what happens next.
Consider the word "done." In a software team, "done" might mean: the code compiles, the tests pass, the code is reviewed, the feature is deployed to staging, the feature is deployed to production, or the feature has been validated by a user. Each of these is a different schema for completion. A team that has not explicitly aligned on what "done" means will generate a steady stream of low-grade conflict: tasks that are "done" but not deployed, features that are "deployed" but not validated, work that loops back endlessly because different people are applying different schemas to the same word.
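One way to externalize "done" is to enumerate the candidate meanings explicitly and force the team to pick one per kind of work. A minimal sketch, with invented stage names and an invented team policy:

```python
from enum import IntEnum

class Done(IntEnum):
    """Candidate meanings of 'done', ordered from weakest to strongest."""
    COMPILES = 1
    TESTS_PASS = 2
    REVIEWED = 3
    ON_STAGING = 4
    IN_PRODUCTION = 5
    USER_VALIDATED = 6

# The team's agreed schema, written down once instead of assumed.
DEFINITION_OF_DONE = {
    "feature": Done.IN_PRODUCTION,
    "spike": Done.REVIEWED,
}

def is_done(task_type: str, reached: Done) -> bool:
    """A task is done only if it reached the agreed stage for its type."""
    return reached >= DEFINITION_OF_DONE[task_type]

print(is_done("feature", Done.ON_STAGING))  # False: staging is not done
print(is_done("spike", Done.REVIEWED))      # True
```

Twenty lines of this kind replace an unbounded stream of "I thought it was done" conversations, because the schema now exists outside anyone's head.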
This is not a communication problem in the ordinary sense. The people involved are communicating clearly. They are using the same words. They may even like and respect each other. The problem is structural: their schemas for how the work operates do not overlap sufficiently for implicit coordination to function. Every handoff becomes a potential failure point because the receiver's model of "what I'm getting" does not match the sender's model of "what I'm giving."
Cannon-Bowers, Salas, and Converse specifically identified this as the mechanism by which shared mental models improve performance: when team members hold overlapping models, they can predict each other's needs and actions. They do not need to communicate every decision because they already know — through their shared schema — what the other person is likely to do. When the models diverge, prediction fails, and the team falls back on explicit communication for every step. This is slow, error-prone, and exhausting. It is also, in most organizations, the default.
The AI alignment problem is a shared schema problem
The challenge of aligning artificial intelligence with human values is, at a fundamental level, a shared schema problem. How do you get a system that processes language statistically to share the schema that humans use to distinguish helpful from harmful, honest from deceptive, appropriate from inappropriate?
Reinforcement Learning from Human Feedback (RLHF) is the most widely deployed approach. Human raters evaluate AI outputs, and the model updates its behavior based on those evaluations. This is essentially socialization in Nonaka's framework — the AI absorbs a schema through exposure to human judgment, acquiring a tacit model of what "good" looks like.
Anthropic's Constitutional AI takes a different approach: instead of relying solely on human raters, the model is given an explicit set of principles — a "constitution" — and trained to evaluate its own outputs against those principles. This is externalization: the schema for what constitutes good behavior is made explicit and formalized, rather than remaining tacit in the heads of human raters.
Neither approach has fully solved the problem, and the reason is instructive. Human values are not a single schema — they are a vast, partially contradictory, context-dependent collection of schemas that differ across cultures, professions, and individuals. Aligning an AI is not a matter of transferring one schema; it is the challenge of building a system that can navigate the overlapping and sometimes conflicting schemas of billions of people. This is the same challenge every large organization faces, magnified to a civilizational scale.
The lesson for personal epistemology is direct: when you collaborate with AI tools — when you use a language model to draft, to analyze, to brainstorm — you are engaged in a shared schema problem. The model has a schema for what "helpful" means. You have a schema for what you actually need. The quality of the collaboration depends on how well you can externalize your schema (through clear prompts, examples, and constraints) and interpret the model's schema (through its patterns of response, its consistent biases, its areas of strength and weakness). This is not a technical skill. It is a schema literacy skill — and it is the same skill you need for every other collaboration in your life.
Protocol: building shared schemas deliberately
Shared schemas rarely form on their own with sufficient precision. Left to chance, people develop overlapping but misaligned models, and the gaps only surface under stress. The following protocol makes schema-building deliberate.
Step 1: Externalize your schema. Before a collaboration begins, write down your model for how the work operates. Not a process document — a schema map. What are the key entities? What are the relationships between them? What does "done" mean? What does "good" look like? What are the decision criteria? Do this independently before comparing with your collaborator.
Step 2: Compare and identify divergence. Place your schema maps side by side. Every point where the models differ is a point where implicit coordination will fail. These divergences are not problems to be embarrassed about — they are the most valuable information in the room. Name them explicitly: "You think testing happens before deployment; I think it happens after. Let's decide."
Step 3: Negotiate the shared schema. Not every divergence needs resolution. Some reflect genuine differences in context or expertise. But the schemas that govern handoffs, decision-making, and quality standards must converge. Write the agreed-upon schema down. Put it somewhere both parties will see it. This is your ubiquitous language.
Step 4: Stress-test under load. Schemas that look aligned in a calm planning session may diverge under pressure. Run a scenario: "The deploy fails at 2 AM. Walk me through what happens." If your stories diverge, the schema is not yet shared. Refine it.
Step 5: Spiral through the SECI cycle. Socialization: work together so you absorb each other's tacit models. Externalization: when someone does something surprising, stop and ask them to articulate the schema behind the decision. Combination: integrate your shared schema with existing team documentation. Internalization: practice until the shared schema becomes automatic. Then start again — because the schema will need to evolve as the work evolves.
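Steps 1 and 2 of the protocol can be made mechanical. One sketch, with hypothetical questions and answers: write each person's schema map as a plain dictionary from question to answer, then diff the maps. Every mismatch, including a question only one person thought to answer, is a point where implicit coordination will fail.

```python
def schema_divergences(mine: dict, theirs: dict) -> dict:
    """Every question where two schema maps disagree, or where only
    one map has an answer (the other side's answer shows as None)."""
    diffs = {}
    for key in mine.keys() | theirs.keys():
        a, b = mine.get(key), theirs.get(key)
        if a != b:
            diffs[key] = (a, b)
    return diffs

mine = {
    "done means": "deployed to production",
    "tests run": "before deployment",
    "who approves deploys": "any reviewer",
}
theirs = {
    "done means": "validated by a user",
    "tests run": "before deployment",
    "rollback owner": "on-call engineer",
}

for key, (a, b) in sorted(schema_divergences(mine, theirs).items()):
    print(f"{key}: mine={a!r}, theirs={b!r}")
```

Here the diff surfaces three divergences and one genuine agreement. The questions a collaborator answered that you never thought to ask are often the most valuable output of the exercise.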
What this makes possible
When shared schemas are explicit and maintained, several things change:
Coordination becomes implicit. You stop needing meetings to synchronize because you already know what the other person is doing and why. This is the core finding from the shared mental models research: overlapping schemas reduce communication overhead while improving outcomes.
Onboarding becomes transmission. A new team member does not need to figure out the schema by trial and error. You can hand them the externalized version — the schema map, the ubiquitous language, the decision criteria — and compress months of learning into days.
Conflict becomes diagnosable. When collaboration breaks down, you can ask: "Where did our schemas diverge?" instead of "Who screwed up?" Most collaborative failures are not failures of competence or effort. They are failures of schema alignment. Diagnosing the structural problem is faster and less destructive than assigning blame.
Scale becomes possible. An organization of ten people can maintain shared schemas through proximity and conversation. An organization of ten thousand cannot — unless those schemas are externalized, documented, and deliberately maintained. Conway's Law is not a curse. It is a design tool. The organizations that recognize this — that manage their schemas as carefully as they manage their code — are the ones that build systems worth maintaining.
The previous lesson established that schemas have scope — they work in some contexts and fail in others. This lesson adds the social dimension: schemas also have reach. A schema held by one person is a mental model. A schema shared by two people is a collaboration. A schema shared by an organization is a culture. And the deliberate practice of building, testing, and maintaining shared schemas is the foundation of every collaboration that works.
Sources and further reading:
- Hutchins, E. (1995). Cognition in the Wild. MIT Press.
- Cannon-Bowers, J.A., Salas, E., & Converse, S.A. (1993). "Shared Mental Models in Expert Team Decision-Making." In N.J. Castellan (Ed.), Individual and Group Decision Making, Lawrence Erlbaum.
- DeChurch, L.A. & Mesmer-Magnus, J.R. (2010). "Measuring Shared Team Mental Models: A Meta-Analysis." Group Dynamics: Theory, Research, and Practice, 14(1), 1-14.
- Nonaka, I. & Takeuchi, H. (1995). The Knowledge-Creating Company. Oxford University Press.
- Conway, M.E. (1968). "How Do Committees Invent?" Datamation, 14(4), 28-31.
- Evans, E. (2003). Domain-Driven Design: Tackling Complexity in the Heart of Software. Addison-Wesley.
- Bai, Y. et al. (2022). "Constitutional AI: Harmlessness from AI Feedback." Anthropic.