You already believe things are connected. You just haven't checked.
Right now, without thinking about it, you hold hundreds of beliefs about how the things in your life relate to each other. You believe that reading before bed improves your sleep. You believe that a particular colleague's involvement slows projects down. You believe that your best creative work happens in the morning. You believe that one of your habits supports a goal and another undermines it.
Some of these beliefs are accurate. Some are not. And you have almost certainly never written any of them down.
This is the problem. Not that you have beliefs about relationships — that's unavoidable and often useful. The problem is that unwritten relationships are unexaminable relationships. They live in the background of your thinking, shaping your decisions without ever being subjected to scrutiny. They function as assumptions: treated as true, never tested, rarely even noticed.
In L-0241, you established the foundational principle that relationships are as important as entities. The connections between things carry as much meaning as the things themselves. But recognizing that relationships matter is only the first step. The second step — the one that separates rigorous thinking from intuitive drift — is making those relationships explicit. Writing them down. Stating what connects to what, and how, and why you believe it.
This lesson is about why that act of writing down changes everything.
The science of assumed connections
The human brain is a relationship-generating machine. It does not wait for evidence before linking two things together. It links first and rationalizes later — if it bothers to rationalize at all.
Illusory correlation is the formal name for one of the most well-documented manifestations of this tendency. The term was introduced by psychologist Loren Chapman in 1967 after a series of experiments that revealed something uncomfortable about how humans process co-occurrence data. Chapman presented participants with lists of word pairs and asked them to estimate how frequently certain pairings appeared. Participants systematically overestimated the frequency of semantically associated pairs — words that "went together" in their minds — regardless of how often those pairs actually appeared in the data. The felt relationship overrode the observed data.
What made this finding genuinely alarming was the follow-up research. In 1969, Loren and Jean Chapman extended their investigation to clinical psychology, examining how practicing clinicians interpreted Draw-a-Person test results. The clinicians reported seeing correlations between specific drawing features and specific diagnoses — correlations that matched popular clinical lore but did not exist in the data they had been given. These were not students or novices. They were trained professionals, applying their expertise, and the relationships they perceived were fabrications of expectation.
This is not a marginal cognitive quirk. It is a fundamental feature of how human cognition works. The brain evolved to detect patterns because pattern detection is survival-critical — identifying the relationship between rustling grass and predators, between certain berries and illness, between cloud formations and storms. Psychiatrist Klaus Conrad coined the term apophenia in 1958 to describe the broader tendency to perceive meaningful connections between unrelated things. Michael Shermer later called it patternicity: the tendency to find meaningful patterns in meaningless noise. The evolutionary logic is straightforward — a false positive (seeing a connection that isn't there) is far less costly than a false negative (missing a connection that is there). So our brains are tuned to over-detect relationships.
This tuning served our ancestors well on the savanna. It serves us poorly when we are trying to build reliable models of how our ideas, projects, habits, and goals actually connect to each other. In those contexts, the relationships we assume but never examine become the invisible architecture of bad decisions.
Confirmation bias compounds the problem. Once you believe a relationship exists, you unconsciously seek evidence that supports it and filter out evidence that contradicts it. If you believe that a certain morning routine causes productive days, you will notice the productive days that follow the routine and overlook the productive days that don't — or the days you followed the routine and accomplished nothing. The assumed relationship doesn't just persist unchallenged. It actively recruits evidence for its own survival.
The combination of illusory correlation, apophenia, and confirmation bias creates a cognitive environment where false relationships are easy to form, difficult to detect, and resistant to correction. The only reliable intervention is the same one that works across every domain of rigorous thinking: make the implicit explicit. Write it down. State the claim. Then check it.
What "making it explicit" actually means
Making a relationship explicit is not the same as acknowledging that a relationship exists. People acknowledge relationships all the time — "Yeah, I think those two things are connected" — without ever specifying the connection with enough precision to evaluate it.
An explicit relationship has three components:
The entities. What two things are you claiming are related? Name them precisely. Not "my morning routine" and "my productivity," but "the specific practice of writing for 30 minutes before checking email" and "the number of focused work blocks I complete before noon." Vague entities produce vague relationships, and vague relationships cannot be tested.
The relationship type. How are these two things connected? Is the relationship causal (one produces the other), correlational (they tend to co-occur), temporal (one typically precedes the other), enabling (one makes the other possible), inhibiting (one makes the other harder), or something else? Each type has different implications and requires different evidence. Treating a correlation as a cause — or a temporal sequence as a causal chain — is one of the most common errors in informal reasoning.
The evidence basis. Why do you believe this relationship exists? Have you observed it systematically, or is it an impression? Did someone tell you about it, or did you derive it from experience? Is your evidence a single memorable instance, or a pattern across many instances? Stating your evidence basis doesn't require rigorous experimentation. It requires honesty. And honest assessment of evidence is impossible when the relationship has never been articulated in the first place.
This three-part structure — entities, type, evidence — is the minimum viable specification of an explicit relationship. Anything less is still an assumption wearing the costume of a belief.
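To make the three-part structure concrete, here is a minimal sketch of it as a data record. This is an illustration only, not part of any particular tool; the class name, field names, and the example claim are invented for the purpose:

```python
from dataclasses import dataclass

# The relationship types named in this lesson.
RELATION_TYPES = {"causal", "correlational", "temporal",
                  "enabling", "inhibiting", "other"}

@dataclass
class ExplicitRelationship:
    """Minimum viable specification: entities, type, evidence."""
    entity_a: str   # named precisely, not vaguely
    entity_b: str
    rel_type: str   # must be one of RELATION_TYPES
    evidence: str   # an honest statement of why you believe it

    def __post_init__(self):
        # Refuse vague or unnamed relationship types outright.
        if self.rel_type not in RELATION_TYPES:
            raise ValueError(f"unknown relationship type: {self.rel_type}")

# Hypothetical example, following the morning-routine case above.
claim = ExplicitRelationship(
    entity_a="writing for 30 minutes before checking email",
    entity_b="focused work blocks completed before noon",
    rel_type="causal",
    evidence="impression from a handful of days; not tracked systematically",
)
```

The point of the structure is what it refuses to accept: a record with no evidence field filled in, or a relationship type outside the named set, is visibly incomplete rather than silently assumed.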
Lessons from systems that require explicit relationships
The value of making relationships explicit is not a philosophical abstraction. It has been demonstrated, at scale, in every domain where people have tried to build reliable systems on top of complex knowledge.
Database design confronted this problem in the 1970s. Before Peter Chen published his landmark 1976 paper introducing the entity-relationship model, database designers organized information around records — rows of data stored in tables. The records described entities well enough, but the relationships between entities were implicit, buried in matching ID numbers and application logic that programmers had to remember rather than see. Chen's insight was that the real world consists of entities and relationships, and both deserve first-class representation. His entity-relationship diagrams made connections visible: this customer places these orders, this order contains these products, this product belongs to these categories. The relationships were no longer hidden inside code. They were stated, drawn, and enforceable. The impact was immediate and permanent. Every major database system adopted some form of explicit relationship modeling. The software industry learned, through expensive failures, that implicit relationships are where bugs hide.
Software architecture learned the same lesson through a different mechanism. The Explicit Dependencies Principle in software engineering states that methods and classes should explicitly declare every collaborating object they need in order to function. When dependencies are implicit — when a piece of code silently relies on something without declaring that reliance — the system becomes fragile in ways that are invisible to anyone reading the code. Changes to one component break another component that nobody knew was connected. The dependency existed, but because it was implicit, it was also invisible, untestable, and unmanageable. Making dependencies explicit doesn't add complexity. It reveals complexity that was already there but hidden. The same is true of the relationships in your thinking.
Knowledge graphs and ontologies represent the most sophisticated modern expression of the same principle. A knowledge graph stores information as a network of entities connected by explicitly typed relationships. An ontology provides the schema — the formal specification of what kinds of entities exist, what kinds of relationships are possible, and what rules govern those relationships. The critical word in the standard definition of an ontology is "explicit": it is a formal, explicit specification of a shared conceptualization of a domain. Every relationship is named, typed, and visible. Nothing is left to assumption. This is what makes knowledge graphs queryable, amenable to automated reasoning, and trustworthy in ways that unstructured text is not. When you ask a knowledge graph "what causes X?" it can give you a precise answer because the causal relationships were explicitly declared. Ask the same question of a pile of documents, and you get whatever the reader's assumptions project onto the text.
Cognitive psychology formalized the structure of explicit relationships through semantic networks. In 1969, Allan Collins and M. Ross Quillian proposed that human memory organizes concepts as nodes in a network, connected by labeled relationships. "A canary IS-A bird." "A bird HAS wings." "A bird CAN fly." Each relationship is a specific, typed connection — not a vague association but a stated claim about how two concepts relate. Their model revealed something important: retrieval time depended on how many relationship links had to be traversed. The structure of your explicit relationships directly affects how efficiently you can think with them. When relationships are implicit, the traversal path is unclear, retrieval is unreliable, and the wrong connections get activated through what Collins and Elizabeth Loftus later described as spreading activation — the tendency of activating one concept to automatically activate nearby concepts, regardless of whether the relationships between them are valid.
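The Collins–Quillian idea that retrieval cost tracks the number of links traversed can be sketched in a few lines. The tiny network below uses the lesson's canary examples; the function and its breadth-first traversal are illustrative assumptions, not the authors' original implementation:

```python
from collections import deque

# A tiny typed semantic network in the Collins-Quillian style.
# Each edge is an explicit, labeled relationship.
network = {
    "canary": [("IS-A", "bird")],
    "bird":   [("IS-A", "animal"), ("HAS", "wings"), ("CAN", "fly")],
    "animal": [("HAS", "skin")],
}

def link_distance(start, target):
    """Count relationship links traversed from start to target.

    In the model, retrieval time grows with this distance."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for _, neighbor in network.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None  # no explicit path: the claim is not in the network

# "A canary is a bird" is one link; "a canary has wings" is two
# (canary -> bird -> wings), so the latter is verified more slowly.
```

Note what happens when a relationship was never declared: the traversal returns nothing, rather than an assumption.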
The lesson from all of these domains is the same: systems that leave relationships implicit accumulate invisible errors. Systems that make relationships explicit can be inspected, tested, corrected, and improved.
The cost of implicit relationships in personal thinking
You might wonder whether the stakes in personal cognition are really comparable to database design or software architecture. They are. In some ways, they are higher — because in a database or a codebase, an implicit dependency eventually surfaces as a visible error. In your thinking, an implicit relationship can persist uncorrected for decades.
Consider how implicit relationships shape decisions:
Career assumptions. You believe that developing a specific skill will lead to the career outcome you want. But is the relationship causal, or is it an artifact of survivorship bias — you've seen people with that skill who succeeded, without noticing the many people with the same skill who didn't? Until you write down "Skill X enables outcome Y because [specific mechanism]," the assumption operates unchecked.
Relationship assumptions. You believe that a particular behavior by another person means something specific about their feelings or intentions. But you've never articulated the relationship: "When person X does behavior Y, it indicates state Z." If you did write it down, you might notice that your evidence is a single instance, or that the same behavior has meant different things in different contexts, or that you've been interpreting through the lens of a prior relationship rather than the current one.
Health assumptions. You believe that a particular food, exercise, or sleep habit affects your energy or mood in a specific way. But when you try to state the relationship explicitly — the entities, the type, the evidence — you discover that your "evidence" is a few salient memories, not a systematic observation.
Strategic assumptions. You believe that two of your goals support each other, or that a particular activity serves a particular objective. But you've never stated the mechanism. When you finally do, you sometimes discover that the relationship is weaker than you thought, or inverted, or nonexistent — and you've been allocating time and energy based on a connection that was never real.
Each of these examples follows the same pattern: an implicit relationship that feels true, goes unexamined, and shapes behavior in ways that may or may not serve you. The act of making the relationship explicit doesn't automatically correct it. But it does make correction possible. You cannot evaluate what you cannot see, and you cannot see what you have not written down.
Your Third Brain: from implicit associations to explicit knowledge graphs
Artificial intelligence provides a vivid illustration of what happens when you move from implicit to explicit relationships — and what goes wrong when you don't.
Large language models like GPT-4 and Claude learn relationships from statistical patterns in text. They can generate remarkably fluent statements about how things relate to each other. But the relationships they express are implicit — encoded in billions of neural network weights, not stored as explicit, inspectable claims. This is why LLMs can confidently state a relationship that is flatly wrong. The model has no mechanism for examining its own relationship claims, because those claims are not discrete, labeled connections. They are emergent patterns in a continuous numerical space.
Knowledge graphs take the opposite approach. In a knowledge graph, every relationship is an explicit triple: subject, predicate, object. "Aspirin TREATS headaches." "Headaches ARE-CAUSED-BY dehydration." "Dehydration IS-PREVENTED-BY water intake." Each claim is individually addressable. You can query it, verify it, trace its provenance, and update it without affecting unrelated claims. This is why knowledge graphs are used in high-stakes domains like medicine, finance, and legal reasoning — domains where acting on an assumed relationship that turns out to be wrong has serious consequences.
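The triple structure can be shown in a few lines of code. This is a deliberately minimal sketch using the aspirin examples above, not a real triple-store implementation:

```python
# A minimal triple store: every relationship is an explicit
# (subject, predicate, object) claim, individually addressable.
triples = [
    ("aspirin",     "TREATS",          "headaches"),
    ("headaches",   "ARE-CAUSED-BY",   "dehydration"),
    ("dehydration", "IS-PREVENTED-BY", "water intake"),
]

def query(subject, predicate):
    """Answer questions like 'what causes headaches?' by scanning
    only the relationships that were explicitly declared."""
    return [o for (s, p, o) in triples if s == subject and p == predicate]
```

`query("headaches", "ARE-CAUSED-BY")` returns a precise answer because the causal claim was stated as a discrete, inspectable record; nothing is inferred from vibes in surrounding text.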
Nonaka and Takeuchi's SECI model of knowledge creation, developed from studying innovation in Japanese companies in the 1980s and 1990s, describes a cycle of knowledge conversion that moves between tacit and explicit forms. The critical phase for our purposes is externalization — the process of making tacit knowledge explicit, wherein knowledge is crystallized and becomes available for inspection and sharing. Nonaka and Takeuchi found that externalization is where the most valuable organizational knowledge creation happens, precisely because it forces the unstated to become stated. A relationship that exists only in someone's head — a tacit association, a gut feeling about how two things connect — cannot be challenged, refined, or built upon by others. Once externalized, it can be all of those things.
Your personal knowledge system faces the same challenge. You carry thousands of implicit relationships in your head — connections between ideas, between habits and outcomes, between people and capabilities, between past experiences and future expectations. These relationships are your tacit knowledge. They inform every decision you make. And most of them have never been externalized.
Building your own explicit relationship infrastructure — your Third Brain — means taking the most important of these tacit relationships and converting them into stated, typed, inspectable connections. Not all of them. That would be paralyzing. But the ones that drive your most consequential decisions? Those deserve to be written down.
Protocol: The relationship audit
Here is the operational protocol for identifying and explicating your most important assumed relationships. Perform this exercise once now, and then repeat it quarterly as part of your system maintenance practice.
1. Choose a decision domain. Pick one area where you make regular, consequential decisions: work strategy, personal health, relationship management, financial allocation, learning priorities. You'll audit one domain at a time.
2. List ten believed relationships. Write down ten pairs of things you believe are connected in this domain. Don't overthink it. Write whatever comes to mind: "X leads to Y," "A depends on B," "P causes Q." Speed matters here — you want to capture the relationships your brain offers up naturally, because those are the ones operating in the background.
3. Specify each relationship. For each pair, write one sentence that makes the relationship explicit using the three-part structure: (a) name both entities precisely, (b) state the relationship type (causal, correlational, temporal, enabling, inhibiting, or other), and (c) state your evidence. Example: "Checking email first thing in the morning [entity A] causes a scattered, reactive workday [entity B]. Type: causal. Evidence: I noticed this on three occasions last month, but I haven't tracked it systematically."
4. Classify your evidence. For each relationship, mark the evidence as one of three levels: Verified (you have systematic observations or reliable external sources), Plausible (you have some evidence and a reasonable mechanism, but haven't tested it rigorously), or Assumed (you believe it, but your evidence is anecdotal, inherited from someone else, or simply "it feels right").
5. Interrogate the assumptions. For every relationship marked "Assumed," ask: What would I expect to see if this relationship is real? What would I expect to see if it is not? Is there a simple observation I could make in the next two weeks to move this relationship from "Assumed" to "Plausible" or "Disproven"?
6. Record the results. Keep this audit. It is the beginning of your explicit relationship inventory — a document you will build on as you progress through this phase. Relationships that survive scrutiny become reliable infrastructure. Relationships that don't survive scrutiny save you from decisions based on illusions.
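The classify-then-interrogate steps of the audit can be sketched as a few lines of code. The audit entries below are hypothetical examples, and the evidence levels follow the protocol's three categories:

```python
# Hypothetical audit entries: (relationship claim, evidence level).
# Levels follow the protocol: "verified", "plausible", "assumed".
audit = [
    ("Checking email first causes a scattered workday",   "assumed"),
    ("Writing before noon correlates with better drafts", "plausible"),
    ("Sleeping under 6 hours precedes low-energy days",   "verified"),
]

def to_interrogate(entries):
    """Every claim marked 'assumed' goes to the interrogation step:
    what would I expect to see if this were real, or not real?"""
    return [claim for claim, level in entries if level == "assumed"]
```

Filtering on the "assumed" label is the whole mechanism: the audit only works because each relationship was written down with its evidence level attached, so the fragile ones can be mechanically separated from the trustworthy ones.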
The goal is not to prove your relationships wrong. Most of them will hold up. The goal is to know which ones hold up, which ones are fragile, and which ones were never real. That knowledge is the difference between a cognitive infrastructure you can trust and one that merely feels trustworthy.
Toward a vocabulary of connection
You've now established the second foundational principle of relationship mapping: relationships must be explicit to be useful. Felt connections, intuitive associations, and inherited assumptions all have value as starting points — but they become reliable infrastructure only when they are stated, typed, and subjected to evidence.
But stating a relationship explicitly raises an immediate question: what kinds of relationships are there? When you write "A relates to B," you need a vocabulary for what "relates to" means. Is A a cause of B? A prerequisite for B? A special case of B? An alternative to B? A contradiction of B? Each of these is a fundamentally different kind of connection, and confusing them produces the same errors as leaving the relationship implicit.
That is exactly where Phase 13 goes next. In L-0243, you will build a taxonomy of relationship types — a structured vocabulary for the different ways things can connect. If this lesson gave you the discipline of writing relationships down, the next lesson gives you the language for writing them precisely.
Sources
- Chapman, L. J. (1967). Illusory correlation in observational report. Journal of Verbal Learning and Verbal Behavior, 6(1), 151-155.
- Chapman, L. J., & Chapman, J. P. (1969). Illusory correlation as an obstacle to the use of valid psychodiagnostic signs. Journal of Abnormal Psychology, 74(3), 271-280.
- Conrad, K. (1958). Die beginnende Schizophrenie. Stuttgart: Thieme.
- Chen, P. P. (1976). The entity-relationship model — toward a unified view of data. ACM Transactions on Database Systems, 1(1), 9-36.
- Collins, A. M., & Quillian, M. R. (1969). Retrieval time from semantic memory. Journal of Verbal Learning and Verbal Behavior, 8(2), 240-247.
- Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82(6), 407-428.
- Nonaka, I., & Takeuchi, H. (1995). The Knowledge-Creating Company. New York: Oxford University Press.
- Shermer, M. (2008). Patternicity: Finding meaningful patterns in meaningless noise. Scientific American, 299(6).