Not all connections work the same way
You already know that making relationships explicit is better than leaving them as assumptions. The previous lesson established that principle. But there is a deeper problem that explicit labeling alone does not solve: two relationships can both be explicit and still be fundamentally different in kind.
"Sleep affects mood" and "the brain contains the amygdala" are both relationships between two things. But they operate by completely different logics. The first is causal — change one and you change the other. The second is compositional — the amygdala is part of the brain, not its effect. If you treat both relationships the same way in your thinking — if your mental model has only one kind of arrow — you will make systematic errors. You will try to intervene on structures that need to be composed, or compose things that need to be sequenced, or sequence things that are merely correlated.
The types of relationships you recognize determine the types of reasoning you can perform. A vocabulary of one relationship type gives you one mode of thinking. A vocabulary of seven gives you seven. This lesson builds that vocabulary.
Why relationship types matter: the reasoning each one enables
Every formal knowledge system ever built — from Aristotle's categories to modern knowledge graphs — has had to confront the same question: what kinds of connections exist between things? The answer is not academic. Each type of relationship licenses a different inference pattern, and confusing one type for another produces not just imprecision but outright errors in reasoning.
Consider what happens when you confuse a hierarchical relationship for a causal one. "Mammals are animals" is hierarchical — it describes class membership. If you treat it as causal ("being a mammal causes being an animal"), you get nonsense. You cannot intervene on mammal-ness to produce animal-ness. The hierarchy tells you about classification, not mechanism. Getting the type wrong means importing an inference pattern that does not apply.
This is why knowledge representation systems — from the Resource Description Framework (RDF) that structures the semantic web to the Unified Modeling Language (UML) used in software engineering to the entity-relationship models behind every relational database — all invest heavily in distinguishing relationship types. They do this not for taxonomic elegance but because the type of a relationship determines what operations you can perform across it.
The core relationship types
Below is a working taxonomy of the relationship types you will encounter most often in your own thinking, knowledge systems, and decision-making. This is not exhaustive — specialized domains define additional types — but these seven cover the vast majority of connections you need to reason about.
Causal relationships
A causal relationship means that a change in one thing produces a change in another through some mechanism. "Smoking causes lung cancer" is causal. "Raising interest rates reduces inflation" is causal. The defining feature is interventionability: if you change the cause, the effect changes.
Judea Pearl, whose work on causal inference earned him the Turing Award, formalized the distinction between causal and non-causal relationships through what he called the Ladder of Causation. The first rung is association — observing that two things co-occur (seeing). The second rung is intervention — knowing what happens when you change one thing (doing). The third rung is counterfactual — knowing what would have happened if things had been different (imagining). Only the second and third rungs are truly causal. Most of what people casually call "causes" lives on the first rung — they are associations that have not been tested by intervention.
Pearl's framework makes a critical operational point: you cannot determine causation from observational data alone, no matter how much of it you have. The back-door paths in a causal diagram carry spurious associations — confounders that make two things appear causally linked when they are not. Blocking these spurious paths through experimental design or statistical adjustment is what separates genuine causal claims from dressed-up correlations.
The inference causal relationships license: If A causes B, then intervening on A will change B. Removing A will reduce or eliminate B. Strengthening A will intensify B. You can reason forward from cause to effect and backward from effect to potential cause.
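A minimal simulation makes the gap between Pearl's first and second rungs concrete. In this sketch (the variables, effect sizes, and noise levels are invented for illustration), a hidden confounder Z drives both A and B, so the two correlate strongly in observational data, yet forcing A with an intervention leaves B unchanged:

```python
import random

random.seed(0)

def observe(n=10_000):
    """Observational data: a confounder Z drives both A and B."""
    rows = []
    for _ in range(n):
        z = random.random()            # hidden confounder
        a = z + random.gauss(0, 0.1)   # A is driven by Z, not by B
        b = z + random.gauss(0, 0.1)   # B is driven by Z, not by A
        rows.append((a, b))
    return rows

def intervene(n=10_000, a_value=1.0):
    """do(A = a_value): set A by fiat, severing its link to Z."""
    rows = []
    for _ in range(n):
        z = random.random()
        a = a_value                    # the intervention overrides Z's influence
        b = z + random.gauss(0, 0.1)   # B still depends only on Z
        rows.append((a, b))
    return rows

def corr(rows):
    """Pearson correlation between the two columns."""
    n = len(rows)
    ma = sum(a for a, _ in rows) / n
    mb = sum(b for _, b in rows) / n
    cov = sum((a - ma) * (b - mb) for a, b in rows) / n
    va = sum((a - ma) ** 2 for a, _ in rows) / n
    vb = sum((b - mb) ** 2 for _, b in rows) / n
    return cov / (va * vb) ** 0.5

print(corr(observe()))   # strong association: rung one (seeing)

low = intervene(a_value=0.0)
high = intervene(a_value=1.0)
# Rung two (doing): the mean of B barely moves when we force A,
# so the observed association was spurious -- A does not cause B.
print(abs(sum(b for _, b in high) / len(high) -
          sum(b for _, b in low) / len(low)))
```

The correlation in the observational data is high, but the intervention reveals it to be a back-door path through Z, not a causal link.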
Hierarchical relationships
A hierarchical relationship means that one thing is a type of, or class of, another. "A golden retriever is a dog" is hierarchical. "Anxiety is an emotion" is hierarchical. In formal knowledge systems, this is called the is-a or hyponymy relationship.
WordNet, the lexical database developed at Princeton that has become a foundational resource in computational linguistics, organizes its entire noun hierarchy through hyponymy. "Vehicle" is a hypernym (parent) of "car," which is a hypernym of "sedan." The relationship is transitive: if a sedan is a car and a car is a vehicle, then a sedan is a vehicle.
Hierarchical relationships are the backbone of every taxonomy. The biological classification system (kingdom, phylum, class, order, family, genus, species) is pure hierarchy. So is every org chart, every file folder structure, and every category system you have ever used. In UML, this is modeled as generalization — the "is-a" relationship between a superclass and its subclass.
The inference hierarchical relationships license: Properties of the parent class transfer to the child. If mammals are warm-blooded, then dogs are warm-blooded. This is called inheritance, and it is the most powerful reasoning shortcut that hierarchies provide. You do not need to verify every property of every subtype — you inherit them from the type.
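Transitivity and inheritance can be sketched in a few lines of Python. The class names and properties below are illustrative, not drawn from WordNet:

```python
# is-a edges: child -> parent
IS_A = {
    "golden_retriever": "dog",
    "dog": "mammal",
    "mammal": "animal",
}

# Properties asserted directly on a class.
PROPERTIES = {
    "animal": {"alive"},
    "mammal": {"warm_blooded"},
    "dog": {"barks"},
}

def ancestors(node):
    """Walk the is-a chain upward: the transitivity of hyponymy."""
    chain = []
    while node in IS_A:
        node = IS_A[node]
        chain.append(node)
    return chain

def inherited_properties(node):
    """A class inherits every property of every ancestor."""
    props = set(PROPERTIES.get(node, set()))
    for parent in ancestors(node):
        props |= PROPERTIES.get(parent, set())
    return props

print(ancestors("golden_retriever"))
# ['dog', 'mammal', 'animal']
print(sorted(inherited_properties("golden_retriever")))
# ['alive', 'barks', 'warm_blooded']
```

Note that nothing was asserted about golden retrievers directly; every property arrived for free through the hierarchy.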
Compositional relationships
A compositional relationship means that one thing is part of another. "A wheel is part of a car" is compositional. "Chapter 3 is part of this book" is compositional. In formal terminology, this is meronymy (part-of) and its inverse holonymy (whole-of).
WordNet captures meronymy alongside hyponymy, recognizing that "has parts" is as fundamental to meaning as "is a type of." The key distinction from hierarchy: in a hierarchical relationship, the child is itself an instance of the parent (a dog is an animal). In a compositional relationship, the part is contained in the whole but is not the same kind of thing as the whole (a wheel is not a car).
UML distinguishes two strengths of compositional relationship. Aggregation means the part can exist independently of the whole — a professor is part of a department, but the professor continues to exist if the department is dissolved. Composition means the part cannot exist without the whole — a room is part of a building, and destroying the building destroys the room. This distinction matters because it determines what happens when you modify the whole: aggregated parts survive; composed parts do not.
The inference compositional relationships license: Properties of parts sometimes transfer to the whole (a car with a flat tire is a compromised car). Changes to the whole affect the parts (dissolving a team dissolves its roles). You can reason about the system by reasoning about its components — but only if you know which components are aggregated and which are composed.
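The aggregation/composition distinction can be sketched as an ownership convention in code. This is an illustrative Python sketch (Python does not literally enforce object lifetimes; the distinction lives in who creates the parts and what happens to them when the whole goes away):

```python
class Professor:
    def __init__(self, name):
        self.name = name

class Department:
    """Aggregation: the parts are handed in and can outlive the whole."""
    def __init__(self, name, professors):
        self.name = name
        self.professors = professors   # references to externally owned objects

class Building:
    """Composition: the parts are created by, and torn down with, the whole."""
    def __init__(self, name, room_names):
        self.name = name
        self.rooms = [{"name": n} for n in room_names]
    def demolish(self):
        self.rooms.clear()             # destroying the whole destroys its parts

faculty = [Professor("Ada"), Professor("Grace")]
dept = Department("CS", faculty)
del dept                               # dissolve the department...
print([p.name for p in faculty])       # ...the professors survive

hq = Building("HQ", ["101", "102"])
hq.demolish()
print(hq.rooms)                        # [] -- the rooms are gone with the building
```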
Temporal relationships
A temporal relationship means that one thing comes before, after, or during another in time. "Childhood precedes adulthood" is temporal. "The Renaissance occurred after the Middle Ages" is temporal. The defining feature is time ordering without necessarily implying causation.
This is the relationship type that people most often confuse with causation. "I ate sushi, then I got sick" establishes a temporal relationship — the sushi came first. But temporal precedence alone does not establish causation. The logical fallacy post hoc ergo propter hoc (after this, therefore because of this) is precisely the error of collapsing temporal relationships into causal ones.
Temporal relationships matter enormously in process design, project management, and narrative construction. A Gantt chart is a map of temporal relationships: Task A must finish before Task B can start. A medical history is a temporal sequence: symptoms appeared in this order. Getting the temporal ordering right is prerequisite to identifying causation, but it is not sufficient.
The inference temporal relationships license: Sequencing and scheduling — knowing what comes first, what comes next, what can happen in parallel. Temporal relationships tell you about order but not about mechanism. They answer "when" but not "why."
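A Gantt chart's "A must finish before B" constraints can be turned into a valid schedule with a topological sort. The task names below are hypothetical; this sketch uses Kahn's algorithm, which also detects when the "before" constraints contradict each other:

```python
from collections import deque

# "A before B" precedence edges from a hypothetical project plan.
BEFORE = [
    ("design", "build"),
    ("build", "test"),
    ("design", "write_docs"),
    ("test", "ship"),
    ("write_docs", "ship"),
]

def schedule(edges):
    """Kahn's algorithm: a linear order consistent with every 'before' edge."""
    succ, indeg = {}, {}
    for a, b in edges:
        succ.setdefault(a, []).append(b)
        indeg[b] = indeg.get(b, 0) + 1
        indeg.setdefault(a, 0)
    ready = deque(sorted(t for t, d in indeg.items() if d == 0))
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in succ.get(task, []):
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(indeg):
        raise ValueError("cycle: the 'before' constraints contradict each other")
    return order

print(schedule(BEFORE))   # one valid ordering of all five tasks
```

The output answers "when" for every task, but nothing in it says why ship follows test; that mechanism would be a causal or dependency claim, not a temporal one.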
Dependency relationships
A dependency relationship means that one thing requires another to function, exist, or be meaningful. "This software module depends on that library" is a dependency. "Understanding algebra depends on understanding arithmetic" is a dependency. In UML, dependency is represented as a dashed arrow — the client depends on the supplier, and changes to the supplier may break the client.
Dependencies are directional: A depends on B does not imply B depends on A. They are also the source of the most consequential fragilities in any system. Peter Chen's entity-relationship model, which became the foundation of relational database design after his 1976 paper, introduced the concept of participation constraints — whether an entity must participate in a relationship (total participation) or merely can (partial participation). This distinction between "must have" and "can have" is the difference between a system that fails gracefully and one that collapses.
In your own knowledge system, prerequisites are dependencies. This lesson depends on L-0242, which established that explicit relationships replace assumptions. If you skipped that lesson, certain concepts here will lack their foundation. This is dependency in action.
The inference dependency relationships license: Impact analysis — if B changes or disappears, what happens to everything that depends on it? Build ordering — what must be in place before something else can be constructed? Fragility assessment — what has the most dependents, and therefore represents the greatest single point of failure?
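Impact analysis is a graph traversal over inverted dependency edges. A minimal Python sketch, with hypothetical component names:

```python
# depends_on[x] = the things x requires (hypothetical components).
DEPENDS_ON = {
    "api": ["auth", "db"],
    "web_ui": ["api"],
    "reports": ["db"],
    "auth": ["db"],
}

def dependents(target, depends_on):
    """Impact analysis: everything that transitively requires `target`."""
    # Invert the edges: who depends on whom.
    rev = {}
    for client, suppliers in depends_on.items():
        for s in suppliers:
            rev.setdefault(s, set()).add(client)
    hit, stack = set(), [target]
    while stack:
        node = stack.pop()
        for client in rev.get(node, ()):
            if client not in hit:
                hit.add(client)
                stack.append(client)
    return hit

# If the database changes, what could break?
print(sorted(dependents("db", DEPENDS_ON)))
# ['api', 'auth', 'reports', 'web_ui']
```

Here the database has the most transitive dependents, which marks it as the greatest single point of failure — exactly the fragility assessment the dependency type licenses.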
Associative relationships
An associative relationship means that two things are connected in a way that is not hierarchical, compositional, causal, temporal, or dependency-based, yet the connection is still meaningful. "Salt and pepper" is associative. "Doctors and hospitals" is associative. "Jazz and improvisation" is associative. These things co-occur, are conceptually related, and tend to activate each other in thought, but none of the more specific relationship types captures why.
In thesaurus standards (ISO 25964) and the SKOS vocabulary used for the semantic web, associative relationships are captured by the "Related Term" (RT) designation — a deliberately broad category for connections that matter but resist precise typing. This is not vagueness. It is an honest acknowledgment that some relationships are real but do not fit a stronger type.
Psychology has studied associative connections extensively. Cognitive research shows that two concepts are associatively linked when activating one reliably activates the other — hearing "doctor" primes "nurse," seeing salt makes you think of pepper. These associations can be thematic (doctor/hospital), contrastive (hot/cold), or purely experiential (a song linked to a memory). The association is real — it affects reaction time, memory retrieval, and decision-making — even when no causal or structural relationship exists.
The inference associative relationships license: Retrieval and recall — associated concepts help you find related information. Brainstorming and creative connection — associations bridge domains. But associations do not license causal intervention, hierarchical inheritance, or compositional reasoning. The danger is treating associations as if they were causal, which is precisely the error that Pearl's Ladder of Causation was designed to prevent.
Contradictory relationships
A contradictory relationship means that two things are in tension — holding one makes holding the other difficult or impossible. "Move fast and break things" contradicts "zero defect tolerance." "Individual autonomy" is in tension with "team alignment." The defining feature is that both elements may be independently valid, but they pull in different directions when combined.
Contradictory relationships are the most underrepresented type in most knowledge systems. Databases do not have a field for "this record conflicts with that record." Note-taking tools do not prompt you to mark which ideas are in tension. But in your actual thinking, contradictions are some of the most information-rich relationships you can identify. They mark the places where your models are incomplete, where trade-offs must be made, and where deeper investigation will yield the most insight. Phase 19 of this curriculum addresses contradiction resolution in depth — but first you need to be able to see contradictions as a relationship type, not just as errors.
The inference contradictory relationships license: Trade-off analysis — you cannot optimize for both simultaneously and must choose or find a synthesis. Model incompleteness detection — a contradiction often signals that your framework is missing a variable. Dialectical reasoning — thesis and antithesis can sometimes produce a synthesis that transcends both.
Why your brain defaults to one type
Human cognition has a strong bias toward causal relationships. When you see two events co-occur, your first instinct is to construct a causal story connecting them. Daniel Kahneman documented this extensively in his description of what he called "System 1" thinking — the fast, automatic, narrative-constructing mode that turns sequences into stories and correlations into causes. This is efficient. Causal reasoning is the most action-relevant form of reasoning: if A causes B, you know what to do about B.
But this efficiency comes at a cost. When you see every relationship as causal, you miss the hierarchical relationships that would help you classify, the compositional relationships that would help you decompose, the temporal relationships that would help you sequence, the dependencies that would help you assess fragility, the associations that would help you explore, and the contradictions that would help you refine.
Building a vocabulary of relationship types is not about being more precise for precision's sake. It is about unlocking reasoning modes that a single-type vocabulary cannot access.
AI and relationship typing in your Third Brain
When you use an AI assistant to help you think through a problem, the relationship types you specify dramatically affect the quality of the output. If you tell an AI "these things are connected" without specifying how, the AI will default to the most common pattern in its training data — usually a vague causal or associative narrative.
But if you say "map the dependency relationships between these project components," or "identify which of these connections are causal versus merely associative," or "show me the hierarchical structure of these concepts," you activate entirely different reasoning pathways. The AI can distinguish between these types — but only if you ask it to. The vocabulary of relationship types becomes the interface between your epistemic precision and the machine's processing power.
This is especially powerful for knowledge graph construction. Modern AI tools can ingest a body of text and extract relationships, but the default extraction treats all relationships as generic "is related to" connections. When you provide a typed schema — specifying that you want causal, hierarchical, compositional, dependency, and associative relationships distinguished — the resulting graph is dramatically more useful. You can traverse it with type-specific queries: "Show me everything that depends on X." "What are the parts of Y?" "What contradicts Z?"
The relationship types are not just labels. They are query interfaces.
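A typed edge list makes this concrete. This sketch (node names borrowed from earlier examples in the lesson, otherwise invented) shows how typed edges become type-specific query interfaces:

```python
# Typed edges: (source, relationship_type, target).
EDGES = [
    ("sleep", "causal", "mood"),
    ("amygdala", "part_of", "brain"),
    ("wheel", "part_of", "car"),
    ("api", "depends_on", "db"),
    ("web_ui", "depends_on", "api"),
    ("move_fast", "contradicts", "zero_defects"),
]

def query(edges, rel_type, target):
    """Type-specific traversal: follow only edges of one relationship type."""
    return [s for s, r, t in edges if r == rel_type and t == target]

print(query(EDGES, "part_of", "brain"))             # ['amygdala']
print(query(EDGES, "depends_on", "api"))            # ['web_ui']
print(query(EDGES, "contradicts", "zero_defects"))  # ['move_fast']
```

With untyped "is related to" edges, all three questions would collapse into one undifferentiated neighborhood lookup; the types are what make the three queries distinct.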
Protocol: typing your relationships
Apply this protocol whenever you create or review a relationship in your knowledge system, your project plans, or your thinking:
- Name both nodes. What two things are connected? Be specific. "Team morale" and "project outcomes" is better than "soft stuff" and "results."
- Ask the type question. Which of these best describes the connection: causal, hierarchical, compositional, temporal, dependency, associative, or contradictory? If none fit, you may have discovered a domain-specific relationship type worth naming.
- Test your assignment. For each type, there is a diagnostic question:
  - Causal: If I intervene on A, does B change?
  - Hierarchical: Is A a type of B (or vice versa)?
  - Compositional: Is A a part of B (or vice versa)?
  - Temporal: Does A come before, after, or during B?
  - Dependency: Does A require B to function?
  - Associative: Are A and B related in a way that none of the above captures?
  - Contradictory: Does holding A make holding B harder?
- Mark uncertain types explicitly. If you cannot determine the type, label the relationship "untyped" rather than guessing. An untyped relationship is an honest gap. A mistyped relationship is a hidden error that will corrupt downstream reasoning.
- Revisit as evidence accumulates. Relationship types can change as your understanding deepens. What you initially mark as associative may later reveal itself as causal when you discover the mechanism. What you call causal may turn out to be merely temporal when you control for confounders. Typing is a hypothesis, not a permanent label.
The bridge to direction
You now have a vocabulary for the types of relationships you will encounter. But there is another dimension that this lesson has only hinted at: direction. "A causes B" is not the same as "B causes A." A hierarchy flows from general to specific. A dependency points from dependent to required. Some relationships, however, have no inherent direction — association is symmetric, and some contradictions are mutual.
The next lesson — directed versus undirected relationships — takes up this question directly. Knowing the type tells you what kind of reasoning a relationship supports. Knowing the direction tells you which way the reasoning flows.
Sources
- Pearl, J. & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.
- Pearl, J. (2009). Causality: Models, Reasoning, and Inference. 2nd ed. Cambridge University Press.
- Miller, G.A. (1995). "WordNet: A Lexical Database for English." Communications of the ACM, 38(11), 39-41.
- Chen, P.P. (1976). "The Entity-Relationship Model: Toward a Unified View of Data." ACM Transactions on Database Systems, 1(1), 9-36.
- Rumbaugh, J., Jacobson, I. & Booch, G. (2004). The Unified Modeling Language Reference Manual. 2nd ed. Addison-Wesley.
- W3C. (2014). RDF 1.1 Primer. https://www.w3.org/TR/rdf11-primer/
- ISO 25964-1:2011. Thesauri and interoperability with other vocabularies.
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.