You are connected to people you have never met through people you know.
In 1967, the social psychologist Stanley Milgram mailed 296 packets to randomly selected people in Omaha, Nebraska, and Wichita, Kansas. Each packet contained the name and address of a stockbroker in Boston. The instructions were simple: if you know this person, mail the packet directly to him. If you don't, mail it to someone you know on a first-name basis who you think is more likely to know him. Each person in the chain was to do the same.
Of the packets that reached their target, the average chain length was about six. Six handoffs. Six transitive hops from a stranger in Nebraska to a specific stockbroker in Massachusetts. This is the origin of "six degrees of separation" — a phrase Milgram never actually used, but which captured a structural truth about how human networks operate. You are not directly connected to most people. But you are transitively connected to almost everyone, through a surprisingly short chain of intermediaries.
This is what transitivity does. It takes relationships that exist between adjacent pairs — A knows B, B knows C — and propagates an implied relationship across the chain: A is connected to C. Not directly. Not with the same strength. But connected, through the structure of the network itself.
Over the previous fifteen lessons in this phase, you have built up a vocabulary for understanding relationships. You know that relationships are as important as entities (L-0241), that they have types and directions (L-0242, L-0243), that they carry different weights (L-0244), and that they can be explicit or implicit (L-0246). You have mapped causal chains (L-0251), identified feedback loops (L-0252), and discovered that missing relationships often matter more than present ones (L-0253). You have seen that mapping relationships reveals system structure (L-0254) and that relationships change over time (L-0255).
Now the next structural principle: relationships don't just connect adjacent nodes. They propagate through chains, creating implied connections between nodes that have never directly interacted. Understanding when this propagation is real and when it is an illusion is one of the most consequential skills in relationship mapping.
The mathematics of transitivity
The transitive property is one of the oldest formal concepts in mathematics. A relation R on a set is transitive if, whenever A is related to B and B is related to C, then A is related to C. In notation: if (a, b) is in R and (b, c) is in R, then (a, c) must also be in R.
Some relations are inherently transitive. "Is greater than" among numbers: if 7 > 5 and 5 > 3, then 7 > 3. Always. Without exception. "Is an ancestor of" among people: if your grandmother is an ancestor of your mother, and your mother is an ancestor of you, then your grandmother is an ancestor of you. The chain holds no matter how many generations you extend it. "Is a subset of" in set theory: if A is a subset of B and B is a subset of C, then every element of A is necessarily an element of C.
Other relations are definitively not transitive. "Is a friend of" — you are friends with someone, and they are friends with someone you have never met. That does not make you friends. "Is the mother of" — Alice is the mother of Beth, and Beth is the mother of Carol, but Alice is not the mother of Carol. The relationship type changes across hops: mother becomes grandmother. "Is close to" in physical space — city A is 50 miles from city B, and city B is 50 miles from city C, but A and C could be anywhere from 0 to 100 miles apart, depending on direction.
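For a finite relation, the definition can be checked mechanically. Here is a minimal sketch in Python that tests whether every implied pair is actually present, using the examples above:

```python
def is_transitive(relation):
    """Check whether a finite relation (a set of (a, b) pairs) is transitive:
    for every chain (a, b), (b, c), the pair (a, c) must also be present."""
    return all(
        (a, d) in relation
        for (a, b) in relation
        for (c, d) in relation
        if b == c
    )

# "is greater than" on a small set: every implied pair is present
greater = {(7, 5), (5, 3), (7, 3)}
print(is_transitive(greater))   # True

# "is the mother of": the implied (alice, carol) pair is absent,
# because that relationship is grandmother, not mother
mother = {("alice", "beth"), ("beth", "carol")}
print(is_transitive(mother))    # False
```

The check is quadratic in the size of the relation, which is fine for illustration; the point is that transitivity is a testable property of the relation as a whole, not of any single pair.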
The critical insight is that transitivity is a property of the relationship type, not of the entities involved. The same two people can be connected by a transitive relationship ("reports to the same CEO as") and a non-transitive one ("is friends with") simultaneously. When you confuse which of your relationships are transitive and which are not, you draw inferences that the network structure does not actually support.
In graph theory, this distinction is formalized through the concept of transitive closure — the operation of computing all implied connections that follow from direct ones. Given a graph of direct relationships, the transitive closure is a new graph that includes an edge between every pair of nodes connected by any path, no matter how long. Stephen Warshall published his algorithm for computing transitive closure in 1962, and the operation remains fundamental to database query optimization, network reachability analysis, and automated reasoning to this day. When you ask "can I reach node C from node A?" you are asking a transitive closure question. The answer depends not on whether A and C are directly linked, but on whether there exists any chain of direct links between them.
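The closure computation itself is short. The following is a Warshall-style sketch (node and edge names are illustrative): for each candidate intermediate node k, it adds the edge i to j whenever i reaches k and k reaches j.

```python
def transitive_closure(nodes, edges):
    """Warshall-style transitive closure: reach[a][b] is True iff some
    chain of direct edges leads from a to b."""
    reach = {a: {b: (a, b) in edges for b in nodes} for a in nodes}
    for k in nodes:                 # allow k as an intermediate hop
        for i in nodes:
            for j in nodes:
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True
    return reach

nodes = ["A", "B", "C", "D"]
edges = {("A", "B"), ("B", "C"), ("C", "D")}
reach = transitive_closure(nodes, edges)
print(reach["A"]["D"])   # True: A reaches D through the chain
print(reach["D"]["A"])   # False: the edges are directed
```

Asking `reach["A"]["D"]` is exactly the "can I reach C from A?" question: the answer comes from the existence of a path, not from any direct link.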
How effects propagate through chains
The mathematical definition tells you whether a transitive path exists. But in real systems, the more urgent question is: how much effect propagates along that path, and how does it change with each hop?
Consider the bullwhip effect in supply chains. A small fluctuation in retail demand — say, a 5% increase in customer purchases — gets amplified as it travels upstream through the chain. The retailer increases their order to the distributor by 10%. The distributor, seeing an uptick, orders 20% more from the manufacturer. The manufacturer, trying to avoid shortages, ramps production by 40%. A 5% signal at the retail end becomes a 40% signal at the manufacturing end. The effect didn't just propagate transitively through the chain — it amplified at each hop. Researchers Hau Lee, Padmanabhan, and Whang documented this phenomenon in their landmark 1997 study, identifying four structural causes: demand signal processing, order batching, price fluctuations, and rationing and shortage gaming. The lesson is that transitive propagation in supply chains is not neutral transmission. The chain distorts the signal, and the distortion compounds.
Now consider the opposite pattern: trust decay across social networks. Richters and Peixoto, in a 2011 study published in PLOS ONE, analyzed trust transitivity using a formal metric of propagation. They found that trust does propagate through chains — but it degrades at each hop. If you trust Alice with confidence 0.9, and Alice trusts Bob with confidence 0.8, your inferred trust in Bob is not 0.8. It is something closer to 0.72 — the product of the two confidence values. Over three hops, a chain of individually strong trust relationships (0.9, 0.8, 0.85) produces an endpoint trust of roughly 0.61. Over five hops, it's negligible. Their research revealed something deeper: the viability of trust propagation in large networks requires a non-zero fraction of what they call "absolute trust" — relationships with full confidence. Without these anchor points, transitive trust dissipates entirely before it can traverse the network. The system needs high-confidence nodes to act as relay stations.
These two patterns — amplification and decay — represent the two fundamental modes of transitive propagation. Some effects grow stronger as they travel through chains. Others weaken. Knowing which mode applies to a given relationship type is the difference between seeing real structure and hallucinating connections that are not there.
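Both modes reduce to the same arithmetic: each hop applies a multiplier, and the multipliers compound along the chain. A factor above 1 amplifies; a factor below 1 decays. A minimal sketch (the per-hop factors are illustrative, not data from either study):

```python
from math import prod

def propagate(signal, multipliers):
    """Compound per-hop multipliers along a chain: >1 amplifies, <1 decays."""
    return signal * prod(multipliers)

# Bullwhip-style amplification: each tier roughly doubles the swing
print(round(propagate(0.05, [2.0, 2.0, 2.0]), 2))   # 0.4: a 5% retail swing reaches 40%

# Trust-style decay: per-hop confidences multiply
print(round(propagate(1.0, [0.9, 0.8, 0.85]), 3))   # 0.612: strong links, weak endpoint
```

The asymmetry is worth noticing: a chain of individually modest amplifiers produces a large distortion, while a chain of individually strong trust links produces a weak endpoint. The chain's behavior is governed by the product, not by any single hop.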
Transitive inference in the human mind
Your brain performs transitive inference automatically, and it starts doing so earlier than researchers originally expected.
Piaget argued that transitive reasoning — the ability to infer that if A > B and B > C, then A > C — did not emerge until age seven or eight, during the concrete operational stage of cognitive development. But Bryant and Trabasso's 1971 experiments challenged this timeline, demonstrating that children as young as four could make transitive inferences when memory demands were reduced. The issue was not that younger children lacked the reasoning capacity — they lacked the working memory to hold the premises simultaneously. When the premises were made easier to remember, the transitive inference followed naturally.
This finding has been replicated across species. Transitive inference has been documented in monkeys, crows, pigeons, fish, and even wasps. If a jay learns that it dominates bird B and bird B dominates bird C, the jay will behave as though it dominates bird C — even though the two have never interacted. The inference is not taught. It is constructed from the relationship chain. This strongly suggests that transitive reasoning is not a culturally learned skill but a fundamental cognitive operation — one that evolution selected for because organisms that could infer indirect relationships from direct ones had a survival advantage.
But here is where the cognitive science gets uncomfortable: humans over-apply transitive inference. We extend it to relationship types where it does not hold. You read a book recommended by someone whose taste you trust, and that person was recommended by another person whose taste you trust — so you treat the book recommendation as doubly endorsed. But taste is not transitive. The chain of "trusts the judgment of" does not guarantee that the endpoint recommendation will match your preferences, because each person in the chain applies different criteria. You are inferring a relationship (this book matches my taste) from a chain of relationships (I trust A's taste, A trusts B's taste, B recommends this book) where the transitivity assumption fails.
This over-application is one of the most common reasoning errors in epistemic infrastructure. You assume that because you can trace a path, the endpoint relationship is valid. But the validity depends entirely on whether the specific relationship type is actually transitive — and most of the relationships that matter in daily life are only partially transitive at best.
PageRank and the architecture of transitive importance
The most consequential application of transitive relationships in the modern world is one you use every day without thinking about it.
In 1996, Larry Page and Sergey Brin, then graduate students at Stanford, formalized an insight that would restructure how humanity accesses information. They observed that the web's hyperlink structure contains implicit judgments about importance: when page A links to page B, A is making a statement that B is worth visiting. But Page and Brin took this one step further. They asked: what if the importance of A's endorsement depends on how important A itself is? And what if A's importance, in turn, depends on how important the pages linking to A are?
This is transitivity, applied recursively. A link from an obscure blog post and a link from the New York Times homepage both count as endorsements — but the Times link carries more weight, because the Times page itself has been endorsed by thousands of other important pages. Importance propagates through the link graph, with each hop carrying forward a fraction of the upstream authority. The PageRank algorithm computes this propagation to a stable equilibrium, assigning every page on the web a score that reflects not just its direct endorsements but the entire transitive chain of authority flowing into it.
The algorithm includes a damping factor — typically set at 0.85 — which means that at each hop only 85% of a page's authority follows its outgoing links; the remaining 15% is redistributed uniformly across all pages, modeling a reader who occasionally jumps somewhere random instead of following the chain. This captures a crucial property of real transitive systems: propagation is not lossless. The further you get from the source of authority, the weaker the transitive signal becomes. Without the damping factor, the computation can fail to converge, and authority drains into closed clusters of pages that link only among themselves — rank sinks that absorb importance and never pass it back out. The damping factor is what makes the transitive computation converge to a meaningful answer.
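The propagation-to-equilibrium idea fits in a few lines. This is a minimal power-iteration sketch, not Google's implementation: the page names are invented, and it assumes every page has at least one outlink (real implementations must also handle dangling nodes).

```python
def pagerank(links, d=0.85, iters=50):
    """Minimal PageRank power iteration over a dict {page: [outlinks]}.
    Assumes no dangling nodes (every page has at least one outlink)."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}   # the uniform teleport share
        for p, outs in links.items():
            share = d * rank[p] / len(outs)     # authority passed along each outlink
            for q in outs:
                new[q] += share
        rank = new
    return rank

links = {
    "times": ["blog", "wiki"],
    "blog":  ["times"],
    "wiki":  ["times"],
}
ranks = pagerank(links)
# "times" collects authority from both other pages, so it ranks highest
print(max(ranks, key=ranks.get))   # times
```

The iteration repeats the transitive hand-off until the scores stop changing: each page's score becomes a weighted sum over every chain of links flowing into it, with the damping factor discounting longer chains.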
PageRank demonstrates that transitive relationships are not merely an abstract mathematical property. They are a structural force that shapes what information you encounter, which voices you hear, and whose authority you implicitly accept. Every search result you click has been ranked by an algorithm that propagates importance transitively through billions of relationships. You are living inside a transitive closure, computed at global scale.
Where transitivity breaks — and why it matters
The danger of understanding transitive relationships is not that you will fail to see them. It is that you will see them everywhere, including where they do not exist.
There are three structural reasons why a seemingly transitive chain can fail.
The relationship type shifts across hops. You report to your manager. Your manager reports to the VP. The VP reports to the CEO. The relationship "reports to" is transitive — you are in the CEO's reporting chain. But the relationship "has aligned priorities with" is not. Your manager may prioritize client retention; the VP may prioritize market expansion; the CEO may prioritize regulatory compliance. The reporting chain is intact, but the priority chain is incoherent. When you assume that organizational hierarchy makes strategic alignment transitive, you build plans on a foundation that does not exist.
The relationship degrades below a useful threshold. Even when transitivity technically holds, the signal may decay to noise. You trust your colleague's recommendation (confidence: high). Your colleague trusts a former classmate's expertise (confidence: moderate). That classmate trusts a blog post they read (confidence: low). By the time the information reaches you through this chain, the aggregate confidence is the product of three declining values — which may be too low to justify any action. The chain is transitive, but the propagated effect is indistinguishable from random noise.
Context changes between links. A relationship that holds in one context may not transfer to another. Your financial advisor is excellent at tax planning. She recommends an estate attorney who is excellent at trust formation. That attorney recommends an insurance broker. Each recommendation is valid within its own context. But the transitive inference — that you should trust the insurance broker's judgment on the same basis you trust your financial advisor's — fails because the domain of expertise shifts at each hop. Competence is not a general-purpose property that propagates transitively. It is domain-specific.
Recognizing these failure modes is not pessimism about transitivity. It is precision about when transitivity works and when it doesn't. The goal is to trace transitive chains deliberately, testing at each hop whether the specific relationship type actually supports propagation.
Your Third Brain: transitive reasoning at machine scale
Knowledge graphs — the structured databases that power modern AI systems — are built on transitive relationships as a core reasoning primitive. In a knowledge graph, entities are nodes and relationships are typed edges. When you tell the system that "Oxford is located in Oxfordshire" and "Oxfordshire is located in England," the system infers that "Oxford is located in England" because the "is located in" relationship is defined as transitive. This single inference rule, applied across millions of relationships, allows knowledge graphs to answer questions about indirect connections that were never explicitly stated.
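The type-checking that makes this safe can be sketched directly. In this illustrative fragment (the relation names and triples are invented for the example), inference propagates a relation across a chain only when that relation is explicitly declared transitive:

```python
# Relations declared transitive; everything else is never chained
TRANSITIVE_TYPES = {"located_in", "ancestor_of", "subset_of"}

def infer(facts):
    """Expand a set of (subject, relation, object) triples, propagating
    only the relations explicitly marked as transitive."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(facts):
            for (c, r2, d) in list(facts):
                if r1 == r2 and r1 in TRANSITIVE_TYPES and b == c:
                    if (a, r1, d) not in facts:
                        facts.add((a, r1, d))
                        changed = True
    return facts

kb = {
    ("Oxford", "located_in", "Oxfordshire"),
    ("Oxfordshire", "located_in", "England"),
    ("alice", "mother_of", "beth"),
    ("beth", "mother_of", "carol"),
}
inferred = infer(kb)
print(("Oxford", "located_in", "England") in inferred)   # True: located_in is transitive
print(("alice", "mother_of", "carol") in inferred)       # False: mother_of is not
```

The single membership test `r1 in TRANSITIVE_TYPES` is the guard that separates valid inference from hallucination: remove it, and the system would happily conclude that Alice is Carol's mother.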
Large language models perform a version of transitive reasoning when they engage in multi-hop question answering. "Who is the president of the country where the Eiffel Tower is located?" requires two hops: Eiffel Tower is in Paris, Paris is in France, France's president is the answer. The model must chain two relationships transitively to arrive at the correct response. Recent research on multi-hop reasoning in knowledge graphs, including geometric embedding approaches, has shown that preserving the ordered structure of transitive relationships significantly improves inference accuracy. The system doesn't just follow paths — it understands that transitive relations have a directionality and a composition rule that must be respected.
But AI systems also demonstrate the failure modes of transitivity at scale. When a language model hallucinates, it is often performing transitive inference across relationships that do not support transitivity. The model has learned that A is associated with B and B is associated with C, so it produces a statement linking A to C — but the "associated with" relationship is not transitive, and the generated claim is false. The model cannot distinguish between relationships where transitivity holds and relationships where it does not, because it lacks the explicit type-checking that a well-designed knowledge graph provides.
This is a direct parallel to human cognition. When you chain relationships without checking whether the relationship type supports transitivity, you are doing exactly what a hallucinating language model does — following a path through a network and treating the endpoint as valid, without verifying that each hop preserves the relationship's integrity.
The lesson from AI systems is not that transitive reasoning is unreliable. It is that transitive reasoning requires explicit attention to relationship types. The systems that reason well — well-structured knowledge graphs with typed edges and formal transitivity rules — are the ones that check at each hop whether propagation is warranted. The systems that reason poorly — statistical models that treat all associations as potentially transitive — are the ones that produce confident nonsense.
Protocol: Testing transitive chains
When you encounter a chain of relationships and feel the pull of a transitive inference, run this protocol before acting on it.
- Name the chain. Write it down explicitly: A relates to B (relationship type 1), B relates to C (relationship type 2), and I am inferring that A relates to C (relationship type 3). If you cannot write it down, the inference is too vague to trust.
- Check type consistency. Are all the relationships in the chain the same type? "Reports to" at every hop is consistent. "Reports to" at one hop and "agrees with" at the next is not. If the types shift, your transitive inference is crossing a boundary it should not cross.
- Estimate decay. Even when the type is consistent, how much does the relationship degrade at each hop? Trust, authority, and information all lose fidelity as they propagate. If you are three or more hops from the source, ask whether the signal is still strong enough to act on.
- Test the endpoints directly. If the transitive inference matters — if you are making a decision based on it — verify the A-C relationship directly rather than relying on the chain. Call the person. Check the source. Test the claim. The transitive chain tells you where to look. It does not tell you what you will find.
- Mark the intermediaries. Identify the nodes in the middle of the chain. These are your single points of failure. If B disappears — leaves the organization, goes offline, becomes unreliable — the A-C inference collapses. Knowing your intermediaries lets you assess the fragility of your transitive connections.
- Decide: act on the inference, or invest in a direct link. Some transitive chains are strong enough to act on immediately. Others are worth only as hypotheses that need direct verification. And some are so degraded that you should ignore the chain entirely and build a direct relationship from scratch.
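The mechanical core of the protocol — the type-consistency check and the decay estimate — can be encoded as a sketch. The function name, verdict strings, and the 0.5 threshold are all hypothetical choices for illustration:

```python
def evaluate_chain(hops, threshold=0.5):
    """Evaluate a chain of (relationship_type, confidence) hops.
    Returns (verdict, aggregate_confidence)."""
    types = {t for t, _ in hops}
    if len(types) > 1:                        # type consistency: types must not shift
        return "reject: relationship type shifts across hops", 0.0
    confidence = 1.0
    for _, c in hops:                         # decay: per-hop confidences multiply
        confidence *= c
    if confidence < threshold:
        return "verify directly before acting", confidence
    return "act, but mark intermediaries as failure points", confidence

# A consistent chain with modest decay passes
print(evaluate_chain([("reports_to", 0.9), ("reports_to", 0.9)]))
# A chain whose type shifts is rejected outright, regardless of confidence
print(evaluate_chain([("trusts", 0.9), ("recommends", 0.8)]))
```

The steps that cannot be automated — testing the endpoints directly and deciding whether to invest in a direct link — are exactly the ones the sketch leaves to you.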
Transitive relationships are among the most powerful structural features of any network. They let you leverage indirect connections, infer hidden structure, and reach conclusions that no single relationship could support on its own. But that power comes with a specific risk: the temptation to see propagation where none exists, to treat every chain as a pipeline, to assume that because a path exists, the endpoint relationship is valid.
The discipline is not to avoid transitive reasoning. It is to practice it with precision — checking types, estimating decay, and verifying endpoints.
You now understand that relationships propagate through chains. But what happens when a critical node in one of those chains fails? If your only connection to an important resource is a single transitive path, and one link breaks, the connection disappears entirely. That fragility is the subject of the next lesson: redundant relationships provide resilience (L-0257). You will learn that the most robust systems are not the ones with the shortest paths, but the ones with multiple paths — so that when one chain breaks, another holds.
Sources
- Milgram, S. (1967). "The Small World Problem." Psychology Today, 2, 60-67. Original small-world experiment establishing approximately six degrees of separation in social networks.
- Warshall, S. (1962). "A Theorem on Boolean Matrices." Journal of the ACM, 9(1), 11-12. Original publication of the transitive closure algorithm for directed graphs.
- Richters, O. & Peixoto, T.P. (2011). "Trust Transitivity in Social Networks." PLOS ONE, 6(4): e18384. Analysis of trust propagation metrics and the requirement for absolute trust in large networks.
- Page, L., Brin, S., Motwani, R. & Winograd, T. (1999). "The PageRank Citation Ranking: Bringing Order to the Web." Stanford InfoLab Technical Report. Foundation of Google's search ranking via recursive transitive authority propagation.
- Lee, H.L., Padmanabhan, V. & Whang, S. (1997). "The Bullwhip Effect in Supply Chains." Sloan Management Review, 38(3), 93-102. Documentation of demand signal amplification through transitive supply chain relationships.
- Bryant, P.E. & Trabasso, T. (1971). "Transitive Inferences and Memory in Young Children." Nature, 232, 456-458. Demonstration that transitive inference capacity appears in children younger than Piaget predicted.
- Lazareva, O.F. (2025). "Transitive Inference and Transitivity: Two Sides of the Same Coin?" Journal of the Experimental Analysis of Behavior. Contemporary review of transitive inference across species and cognitive mechanisms.