Not all links are created equal
You have two notes in your system. One is about sleep deprivation. The other is about decision fatigue. You draw a link between them. The link exists. But what does it mean?
If the link is untyped — just a line connecting two nodes — you know exactly one thing: someone thought these two ideas were related. You do not know how they are related. You do not know which direction the relationship runs. You do not know whether one causes the other, contradicts the other, extends the other, or merely co-occurred in the same article.
Now label the link: causes. Sleep deprivation causes decision fatigue. In one word of metadata — a single label on the edge — you have transformed a vague association into a directional, falsifiable claim. You can traverse the graph and ask: what else causes decision fatigue? What does sleep deprivation cause besides decision fatigue? What contradicts the claim that sleep deprivation causes it?
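Those traversal questions become mechanical once edges carry types. Here is a minimal sketch, assuming nodes are plain strings and each typed link is stored as a (subject, predicate, object) triple; the node names are hypothetical, taken from the example above.

```python
# Each typed link is a (subject, predicate, object) triple.
edges = [
    ("sleep deprivation", "causes", "decision fatigue"),
    ("multitasking", "causes", "decision fatigue"),
    ("sleep deprivation", "causes", "impaired memory"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the given (possibly partial) pattern."""
    return [
        (s, p, o) for (s, p, o) in edges
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# What else causes decision fatigue?
print(query(predicate="causes", obj="decision fatigue"))
# What does sleep deprivation cause besides decision fatigue?
print(query(subject="sleep deprivation", predicate="causes"))
```

With untyped edges, neither question is expressible: there is no predicate to filter on.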
The previous lesson, L-0344, established that links between ideas are first-class citizens — they deserve as much attention as the nodes themselves. This lesson adds a sharper claim: a link without a type is a half-built bridge. It connects two shores without telling you what can cross it.
The information gap between "related" and "causes"
Information theory gives us a precise way to measure this difference. Claude Shannon's 1948 framework defines information as the reduction of uncertainty. When you encounter a link labeled "related," your uncertainty about the nature of the connection barely decreases. The word "related" is nearly vacuous — it eliminates almost nothing from the space of possible relationships. The two nodes could be synonymous, contradictory, causally connected, historically concurrent, or members of the same category.
When you encounter a link labeled "causes," your uncertainty drops dramatically. You now know the relationship is directional (A produces B, not B produces A). You know it is causal rather than correlational. You know it is assertive — someone is making a claim, not just filing a cross-reference. A single typed label eliminates most of the space of possible relationships that an untyped link leaves open.
This is not just a theoretical observation. It has practical consequences every time you try to reason with your notes. When you traverse an untyped link, you must reconstruct the relationship from context — re-reading both notes, remembering why you linked them, inferring the nature of the connection. That reconstruction costs time and is error-prone. When you traverse a typed link, the relationship is already explicit. Your future self does not have to guess. Neither does anyone else who reads your graph. And neither does an AI system that operates on it.
The semantic web proved this at planetary scale
Tim Berners-Lee understood this problem from the beginning. When he invented the World Wide Web in 1989, he built it on untyped links — hyperlinks that say "this page points to that page" without saying why. The web worked. But it worked only for humans, who could read the surrounding text and infer the relationship. Machines could follow links but could not understand them.
In 2001, Berners-Lee, Hendler, and Lassila published "The Semantic Web" in Scientific American, proposing a fundamental upgrade: give every link a type. The mechanism was RDF — the Resource Description Framework — which represents all knowledge as subject-predicate-object triples. The predicate is the typed link: not just "Tim Berners-Lee → MIT" but "Tim Berners-Lee worksFor MIT." Not just "aspirin → headache" but "aspirin treats headache."
The triple structure is deceptively simple. Subject, predicate, object. But the predicate — the typed link — is where the information lives. Without it, you have an association. With it, you have a machine-readable claim that supports inference. If aspirin treats headache, and ibuprofen treats headache, a reasoner can infer that aspirin and ibuprofen belong to a shared category without anyone explicitly stating it.
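The aspirin/ibuprofen inference can be sketched in a few lines. This is not an RDF reasoner, just an illustration with hypothetical triples: grouping subjects by a shared "treats" object yields a category that no triple stated explicitly.

```python
from collections import defaultdict

# Subject-predicate-object triples; the predicate is the typed link.
triples = [
    ("aspirin", "treats", "headache"),
    ("ibuprofen", "treats", "headache"),
    ("Tim Berners-Lee", "worksFor", "MIT"),
]

# Group subjects by what they treat: an inferred shared category.
treats_same_condition = defaultdict(set)
for subject, predicate, obj in triples:
    if predicate == "treats":
        treats_same_condition[obj].add(subject)

# aspirin and ibuprofen fall into the inferred "treats headache" group.
print(sorted(treats_same_condition["headache"]))  # ['aspirin', 'ibuprofen']
```

The inference only works because "treats" and "worksFor" are distinct predicates; with untyped edges, Berners-Lee would be grouped with the painkillers.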
The W3C Web Ontology Language (OWL) extended this further, allowing formal definitions of relationship types — their domains, ranges, symmetry, transitivity, and inverse relations. OWL made it possible to declare that "causes" is asymmetric (if A causes B, B does not cause A), that "partOf" is transitive (if A is part of B and B is part of C, then A is part of C), and that "contradicts" is symmetric (if A contradicts B, then B contradicts A). These are not just labels. They are logical constraints that allow automated reasoning over the graph.
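The transitivity and symmetry constraints can be made concrete without an OWL reasoner. The sketch below hand-rolls both behaviors over (subject, predicate, object) triples with made-up node names; a real system would declare these properties in OWL and let the reasoner do the work.

```python
def transitive_closure(edges, predicate):
    """All pairs reachable through chains of `predicate` edges (e.g. partOf)."""
    direct = {(s, o) for s, p, o in edges if p == predicate}
    closure = set(direct)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in direct:
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

edges = [
    ("piston", "partOf", "engine"),
    ("engine", "partOf", "car"),
    ("theory A", "contradicts", "theory B"),
]

# partOf is transitive: "piston partOf car" follows automatically.
print(("piston", "car") in transitive_closure(edges, "partOf"))  # True

# contradicts is symmetric: materialise the reverse edge for every instance.
symmetric = edges + [(o, p, s) for s, p, o in edges if p == "contradicts"]
print(("theory B", "contradicts", "theory A") in symmetric)  # True
```

Note that "causes" gets neither treatment: its asymmetry means the reverse edge must never be generated, which is itself a constraint a reasoner can check.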
The semantic web did not achieve its full consumer-facing vision, but its core insight — that typed links enable machine reasoning in ways untyped links cannot — became the foundation of every modern knowledge graph. Google's Knowledge Graph, launched in 2012, uses typed relations drawn from schema.org vocabulary to understand that "Einstein birthPlace Ulm" and "Einstein field Physics" are different kinds of facts requiring different kinds of inference. Wikidata, the structured knowledge base behind Wikipedia, encodes over 100 million items connected by typed properties — "instance of," "subclass of," "part of," "cause of," "occupation," "located in." Every property is a typed link. Remove the types and you have a useless hairball of connected nodes.
John Sowa and the formalization of typed relations
The academic roots run deeper than the semantic web. In 1976, John Sowa published his theory of conceptual graphs — a knowledge representation framework that combined semantic networks with the quantifiers of predicate calculus. Sowa's key innovation was labeling the links between concepts with typed relations drawn from linguistics: agent, patient, instrument, cause, purpose, location.
The distinction between Sowa's conceptual graphs and earlier semantic networks is instructive. Early semantic networks — the kind that appeared in AI research in the 1960s and 70s — often used untyped or loosely typed links. A node for "bird" might connect to a node for "fly" with a generic "is related to" link. This was fine for simple retrieval but collapsed under reasoning. Does the link mean "birds can fly," "birds are defined by flying," "birds are often seen flying," or "the word 'bird' frequently co-occurs with the word 'fly'"? Without a type, the system cannot distinguish these very different claims.
Sowa's typed relations solved this by making the nature of every connection explicit and formal. A conceptual graph could express "a bird [agent] flies [action] with wings [instrument]" — and a reasoner could distinguish this from "a bird [patient] is eaten [action] by a cat [agent]." The types are the semantics. Without them, you have syntax — nodes and lines — but no meaning.
The lesson generalizes beyond formal AI. Every time you create a link in a personal note system, you face the same choice Sowa formalized: do you capture what the relationship is, or do you merely record that a relationship exists?
What a typed edge vocabulary looks like in practice
You do not need OWL or RDF to benefit from typed links. You need a small vocabulary of relationship types that matches how you actually think. Here is an example — the five edge types used in the Completions knowledge graph, the very system this lesson is part of:
| Edge type   | Meaning                                              | Example                                                             |
| ----------- | ---------------------------------------------------- | ------------------------------------------------------------------- |
| enables     | A is a prerequisite for understanding or doing B     | "Nodes and edges" enables "Typed links"                             |
| supports    | A provides evidence, context, or reinforcement for B | "Shannon information theory" supports "Typed links carry more info" |
| extends     | A builds on B, adding depth or scope                 | "OWL ontologies" extends "RDF triples"                              |
| contradicts | A is in tension with or directly opposes B           | "Flat tagging" contradicts "Hierarchical classification"            |
| exemplifies | A is a concrete instance or illustration of B        | "Wikidata typed properties" exemplifies "Typed links"               |
Five types. That is enough to capture the vast majority of relationships between ideas in a personal knowledge system. Each type tells you something the others cannot. If you know only that two nodes are linked, you know almost nothing. If you know the link is "contradicts," you know to hold both ideas in tension rather than merging them. If the link is "enables," you know there is a dependency ordering. If it is "exemplifies," you know one is abstract and the other is concrete.
The power is not in any single typed link. It is in the aggregate. When your graph has thousands of typed edges, you can ask structural questions that untyped graphs cannot answer: What are all the things that contradict this belief? What chain of enables relationships must I traverse to understand this advanced concept? What concrete examples (exemplifies) exist for this abstract principle?
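Two of those structural questions can be sketched directly against the five-type vocabulary. The edges below reuse the examples from the table above; the helper names are hypothetical.

```python
from collections import defaultdict

# (source, edge_type, target) triples using the five-type vocabulary.
edges = [
    ("Nodes and edges", "enables", "Typed links"),
    ("Shannon information theory", "supports", "Typed links carry more info"),
    ("OWL ontologies", "extends", "RDF triples"),
    ("Flat tagging", "contradicts", "Hierarchical classification"),
    ("Wikidata typed properties", "exemplifies", "Typed links"),
]

by_type = defaultdict(list)
for source, edge_type, target in edges:
    by_type[edge_type].append((source, target))

def contradictions_of(node):
    """Everything in tension with `node`, in either direction."""
    return [s for s, o in by_type["contradicts"] if o == node] + \
           [o for s, o in by_type["contradicts"] if s == node]

def examples_of(node):
    """Concrete instances of an abstract `node`."""
    return [s for s, o in by_type["exemplifies"] if o == node]

print(contradictions_of("Hierarchical classification"))  # ['Flat tagging']
print(examples_of("Typed links"))  # ['Wikidata typed properties']
```

Each query filters on exactly one edge type — the filtering step that an untyped graph makes impossible.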
Luhmann's Zettelkasten and the implicit type problem
Niklas Luhmann, the sociologist who maintained a 90,000-note Zettelkasten over four decades, used several distinct connection mechanisms that implicitly encode type information — though he never used the term "typed links."
His system included Folgezettel (sequential notes that branch like an outline: 1, 1a, 1b, 1b1), which encode a structural "extends" or "continues" relationship. It included direct cross-references between distant notes, which encode a "relates to" or "supports" relationship. And it included hub notes — what he called Strukturzettel — that serve as tables of contents for clusters of related ideas, encoding an "organizes" or "indexes" relationship.
The limitation of Luhmann's system is that the types were implicit. A cross-reference link between note 21/3a and note 57/2c told you that Luhmann saw a connection. It did not tell you what kind of connection. Was note 57/2c an example of the principle in 21/3a? A contradiction? An extension? You had to read both notes and infer. This worked for Luhmann, who held the entire system in long-term memory. It fails for anyone inheriting the system — and it fails for machine traversal.
Modern tools like Obsidian and Roam Research face the same limitation. Both support bidirectional linking — if A links to B, B knows about the link. But neither has native support for typed links. Every link in a standard Obsidian vault is semantically identical: "this note references that note." The Obsidian community has explicitly identified this as a gap. As one forum discussion put it: "One simple missed feature that could turn Obsidian into a full-scale Personal Knowledge Graph is typed Links."
The workaround — adding a label in the link text, like [[note|causes]] — is fragile and unsearchable. The proper solution is typed links as first-class metadata, which is what RDF got right and most personal knowledge tools still lack.
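A slightly less fragile community workaround is the inline `field:: [[target]]` annotation popularised by Obsidian's Dataview plugin, which at least makes the type machine-extractable. Here is a sketch of pulling typed links out of such a note; the note text is hypothetical.

```python
import re

# A hypothetical note using Dataview-style "field:: [[target]]" inline fields.
note = """\
Sleep deprivation degrades executive function.

causes:: [[decision fatigue]]
contradicts:: [[ego depletion is the sole cause]]
"""

# Match "type:: [[target]]" at the start of a line.
TYPED_LINK = re.compile(r"^(\w+)::\s*\[\[([^\]]+)\]\]", re.MULTILINE)

typed_links = TYPED_LINK.findall(note)
print(typed_links)
# [('causes', 'decision fatigue'), ('contradicts', 'ego depletion is the sole cause')]
```

Even this is a convention layered on top of plain text rather than first-class metadata — the graph view still sees every link as identical.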
Why AI needs typed links even more than you do
When you traverse your own notes, you carry context. You remember why you linked two ideas. You can infer the relationship type from your memory of creating the link. An AI system has no such context. It sees nodes and edges. If the edges are untyped, the AI must guess the relationship — and guessing at scale produces errors that compound.
Knowledge graphs designed for AI reasoning — the kind used in GraphRAG systems, biomedical research, and enterprise knowledge management — are universally typed. Google's Knowledge Graph uses schema.org properties. Wikidata uses over 11,000 distinct property types. Biomedical ontologies like SNOMED CT define hundreds of relationship types (finding site, causative agent, associated morphology) that allow clinical decision systems to reason about patient conditions.
The pattern is consistent: the more automated the reasoning, the more critical the edge types become. A human can read two linked notes and infer the connection. An AI cannot — or rather, it can guess, but its guesses are unreliable without explicit type information. Typed links are not a convenience for machines. They are a prerequisite for machine reasoning.
This matters for your personal knowledge system because AI is increasingly how you will interact with your notes. When you ask an AI to find contradictions in your beliefs, it needs "contradicts" edges. When you ask it to trace the causal chain behind a problem, it needs "causes" edges. When you ask it to suggest what to learn next, it needs "enables" edges. Without typed links, your AI assistant is working with an association map. With typed links, it is working with a reasoning substrate.
The cost of typing links (and why it is worth paying)
There is a real cost to maintaining typed links. Every time you create a connection, you must choose a type. That choice requires you to think about the nature of the relationship — which is slower than drawing an untyped link and moving on.
This cost is the point. The cognitive work of choosing a type forces you to understand the relationship more precisely. "Are these two ideas related?" is a low-bar question that almost any two ideas can pass. "Does this idea cause that one, contradict it, support it, or extend it?" is a higher-bar question that demands actual understanding of both ideas and the nature of their connection.
The choice sometimes reveals that you do not understand the relationship. You linked two notes because they appeared in the same article, but when forced to name the relationship, you realize there is no meaningful one — just co-occurrence. That is valuable information. It tells you the link is noise, not signal. In an untyped system, that noise accumulates silently. In a typed system, it is caught at creation time.
The failure mode is over-engineering the type system. If you define thirty relationship types with elaborate subcategories and formal definitions, you will spend your time classifying instead of thinking. Five to seven types cover the vast majority of knowledge relationships. Start with the five listed above — enables, supports, extends, contradicts, exemplifies — and add types only when you encounter relationships that genuinely resist classification with the existing set. If you have not needed a new type after a month of active use, your taxonomy is probably complete.
From typed links to traversal
This lesson establishes that the label on a link — its type — carries most of the link's informational value. An untyped link says two things are connected. A typed link says how they are connected — and that "how" is what makes a graph navigable, queryable, and useful for reasoning.
But a typed link between two nodes is still a local fact. The real power of typed links emerges when you traverse chains of them. If A enables B and B enables C, you have a dependency chain — a learning sequence. If A causes B and B causes C, you have a causal chain — an explanation. If A contradicts B and B supports C, you have a dialectical structure — a debate.
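A dependency chain of "enables" edges can be recovered with a simple path search. The sketch below uses hypothetical node names and a recursive depth-first search over (source, type, target) triples.

```python
# (source, edge_type, target) triples forming an "enables" chain.
edges = [
    ("Nodes and edges", "enables", "Typed links"),
    ("Typed links", "enables", "Bidirectional awareness"),
    ("Bidirectional awareness", "enables", "Graph traversal"),
]

def enables_chain(start, goal):
    """Depth-first search for a path of `enables` edges from start to goal."""
    if start == goal:
        return [start]
    for source, edge_type, target in edges:
        if source == start and edge_type == "enables":
            rest = enables_chain(target, goal)
            if rest:
                return [start] + rest
    return None

print(enables_chain("Nodes and edges", "Graph traversal"))
# ['Nodes and edges', 'Typed links', 'Bidirectional awareness', 'Graph traversal']
```

The returned path is a learning sequence: read the nodes in order and every prerequisite is satisfied before it is needed. Swapping the predicate for "causes" turns the same traversal into a causal explanation.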
The next lesson — L-0346, Bidirectional awareness — examines what happens when links flow in both directions. When A links to B, does B know about it? Bidirectional awareness combined with typed links creates a graph where every node knows not just what it connects to, but how and from where. That combination is what transforms a collection of notes into a thinking tool.