Everything decays unless you maintain it
Your garden doesn't stay weeded because you weeded it once. Your codebase doesn't stay clean because you refactored it last quarter. Your apartment doesn't stay organized because you tidied it in January. Every ordered system in the physical world drifts toward disorder unless energy is continuously applied to counteract that drift.
The second law of thermodynamics formalizes this: in any closed system, entropy — the measure of disorder — either increases or stays the same. It never spontaneously decreases. Reducing disorder requires work. Always. Without exception.
Your knowledge graph is not exempt. L-0355 established that your graph grows by accretion — you add nodes and edges daily, and the graph becomes more powerful over time. That lesson told you how graphs grow. This lesson tells you what happens when growth occurs without maintenance: the graph rots.
Link rot is not a metaphor
The internet provides the largest-scale evidence of what happens to connected systems without maintenance. A 2024 Pew Research Center study found that 38% of web pages from 2013 were completely inaccessible a decade later. Even recent content decayed: 8% of pages from 2023 had already broken within a single year. Researchers found at least one broken link in 54% of Wikipedia reference sections, 23% of news webpages, and 21% of government websites.
An Ahrefs study put the number even higher: 66.5% of links pointing to sites created over the previous nine years were dead, and close to 44% of backlinks disappeared within seven years.
These aren't obscure corners of the internet. These are Wikipedia, major news outlets, and government sites — organizations with dedicated teams and institutional budgets for content maintenance. If they can't prevent link rot, your personal knowledge graph won't prevent it either. Not through good intentions. Only through regular maintenance.
The mechanism is the same in every case. The world changes. The links don't update themselves. A node that once pointed to a valid concept now points to something obsolete, renamed, or deleted. The connection was true when you made it. It stopped being true while you weren't looking.
Software knows this. Knowledge management forgot it.
In 1980, computer scientist Meir Lehman published his laws of software evolution — empirical observations drawn from decades of studying how real software systems change over time. Two laws are directly relevant:
The Law of Continuing Change: A program that is used in a real-world environment must be continually adapted, or it becomes progressively less satisfactory. Change is not optional. A system that isn't changing to keep up with changes in the real world is decaying — even if no one touches the code.
The Law of Increasing Complexity: As a system evolves, its complexity increases unless explicit work is done to maintain or reduce it. Every change adds complexity. Without deliberate effort to counteract that accumulation, the system becomes progressively harder to understand and modify.
Software engineers learned these lessons the hard way. Across 160 companies and 365 million lines of code, technical debt averages $3.61 per line. Database administrators see the same decay in their indexes: SQL Server documentation recommends reorganizing an index at 5-30% fragmentation and rebuilding it entirely above 30%, not because the data is wrong, but because the structure that organizes the data degrades through normal use. The data hasn't changed. The organization of the data has decayed.
Your knowledge graph has the same properties. Every time you add a node without checking its relationship to existing nodes, you risk creating duplicates. Every time your understanding of a concept evolves but the node description stays frozen, the graph drifts from your actual thinking. Every time you delete or merge two concepts in your mind but leave both nodes in the graph, you introduce a structural lie.
The graph doesn't alert you. It just gets less trustworthy. And you stop consulting it. And then it dies — not because it was wrong about the world, but because it was wrong about you.
The three failure modes of unmaintained graphs
Graph decay manifests in three specific ways, each with different symptoms and different fixes.
Dead links. A connection that was true when created but is no longer accurate. You linked "microservices" to "team autonomy" because that was your understanding six months ago. Since then, you've watched three teams struggle with distributed system complexity, and your understanding has shifted. The link still exists. It now encodes a belief you no longer hold.
Dead links are the most insidious form of decay because they look exactly like valid links. There is no visual difference between a connection that reflects your current understanding and one that reflects understanding you've since abandoned. You can only find dead links by reviewing them — by walking each connection and asking, "Do I still believe this?"
Missing connections. Two nodes that should be linked but aren't — usually because you learned something new after creating both nodes and never went back to connect them. Your graph has a node for "sunk cost fallacy" and a node for "technical debt prioritization," but you never drew the edge between them because you learned about sunk costs in a psychology context and technical debt in an engineering context. The connection is obvious once you see both nodes side by side. But you've never seen them side by side because the graph doesn't know to show them together.
Missing connections are the gap between what your graph knows and what you know. They accumulate silently — every new insight that relates to existing nodes but never gets linked creates another invisible gap.
Orphan nodes. Nodes that have lost all their connections, or that were created with minimal connections and never integrated. They sit in the graph, technically present, practically invisible. Each one is the digital equivalent of a book on your shelf that you bought, never read, and never gave away: it takes up space and contributes nothing.
Each failure mode degrades the graph differently. Dead links make it misleading. Missing connections make it fragmented. Orphan nodes make it bloated. All three reduce the graph's primary value: being a trustworthy external representation of how you actually think.
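The three failure modes also suggest what a maintenance pass can and cannot detect mechanically. As a rough sketch (the schema below is hypothetical, not drawn from any particular tool): orphans are findable by degree alone, while dead links can only be surfaced as review candidates by age, because the graph cannot know what you no longer believe.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical minimal schema; field names are illustrative assumptions.
@dataclass
class Edge:
    src: str
    dst: str
    last_reviewed: date  # dead links can only be found by re-reading them

@dataclass
class Graph:
    nodes: set[str] = field(default_factory=set)
    edges: list[Edge] = field(default_factory=list)

    def orphans(self) -> set[str]:
        """Nodes with no edges at all: technically present, practically invisible."""
        connected = {n for e in self.edges for n in (e.src, e.dst)}
        return self.nodes - connected

    def stale_edges(self, today: date, max_age_days: int = 180) -> list[Edge]:
        """Edges not re-examined recently: candidates for 'do I still believe this?'"""
        cutoff = today - timedelta(days=max_age_days)
        return [e for e in self.edges if e.last_reviewed < cutoff]
```

Note that `stale_edges` returns candidates, not verdicts; the judgment about whether a link is dead remains human work, which is exactly why the review below has to be a practice rather than a script.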
David Allen already solved this problem
The practice of maintaining a knowledge system through regular review isn't new. David Allen's Getting Things Done methodology, developed in the early 2000s, identified the weekly review as the single most critical practice in the entire system. Allen described its purpose as "whatever you need to do to get your head empty again" — and he structured it in three phases: Get Clear (collect loose ends), Get Current (review all active commitments), and Get Creative (activate dormant projects and generate new possibilities).
Allen's insight was that the review is not optional. It is the practice that makes the system trustworthy. Without it, your lists become stale, your projects become outdated, and you stop trusting the system — which means you stop using it — which means you revert to keeping everything in your head.
The parallel to graph maintenance is exact. Your graph, like Allen's task system, degrades continuously. The world changes, your understanding changes, and the graph stays frozen at the moment of last edit. The weekly review is the practice that keeps the system synchronized with reality. Without it, the system is a historical artifact — a record of what you once thought, not a tool for what you currently think.
Sönke Ahrens, describing the Zettelkasten method in How to Take Smart Notes, makes the same point structurally. Luhmann's 90,000-note system worked not because he was disciplined about adding notes — though he was — but because every new note was an occasion to review existing connections. Each addition triggered a mini-review: what does this relate to? Are the existing connections still valid? What's missing? The maintenance was embedded in the growth process itself.
The gardening metaphor is literal, not decorative
The digital garden community — writers, thinkers, and knowledge workers who maintain publicly evolving note collections — adopted the gardening metaphor deliberately. Mike Caulfield, who popularized the term "digital garden" in 2015, contrasted it with the "stream" model of blogging: streams are chronological, append-only, and never revised. Gardens are spatial, iterative, and continuously tended.
The metaphor is more precise than it first appears:
- Weeding = removing dead links and obsolete nodes. A garden choked with weeds is technically full of life. It's just not the life you planted.
- Pruning = cutting back overgrown clusters. Some areas of your graph expand beyond their usefulness — you went deep on a topic for a project that ended, and now fifty nodes serve no ongoing purpose.
- Transplanting = moving nodes to better locations. A concept you filed under "psychology" turns out to be more useful in "decision-making." The node is correct. Its position is wrong.
- Fertilizing = adding missing connections. The richest soil in a garden is where different root systems intertwine. The richest areas of a graph are where different domains connect.
- Seasonal review = stepping back to assess the whole. A gardener doesn't just tend individual plants — they periodically assess the overall layout, identify areas that need restructuring, and plan for the next season.
The point is not that gardening is a pleasant metaphor. The point is that gardens die without maintenance. So do knowledge graphs. The only question is whether you build the maintenance into your practice or discover the decay after the graph has become useless.
A maintenance protocol that actually works
Abstract commitments to "review your graph regularly" fail for the same reason abstract commitments to "exercise more" fail — no cue, no specific behavior, no defined scope. Here is a concrete protocol.
Weekly: Walk one cluster.
Pick one domain, tag, or topic cluster in your graph. Not the whole graph — one section. Open every node in that cluster. For each node, ask three questions:
- Is this still accurate? (If not, update or archive it.)
- Are its connections still valid? (If not, remove the dead links.)
- Is it missing connections I now see? (If so, add them.)
Track your maintenance metrics: nodes updated, links removed, links added, nodes archived. Over time, these numbers tell you which areas of your graph decay fastest and need more frequent attention.
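The weekly walk can be sketched in code. This is an illustrative assumption about shapes, not any tool's API: nodes map an id to a set of tags, edges are id pairs, and `review` stands in for whatever asks you the three questions and reports back the actions taken.

```python
from collections import Counter

def walk_cluster(nodes, edges, tag, review):
    """Visit every node carrying `tag` and tally the actions the review produced.

    `review(node_id, neighbors)` returns a list of action names such as
    "updated", "link_removed", "link_added", or "archived" (names assumed here).
    """
    metrics = Counter()
    cluster = [nid for nid, tags in nodes.items() if tag in tags]
    for nid in cluster:
        neighbors = ({b for a, b in edges if a == nid}
                     | {a for a, b in edges if b == nid})
        for action in review(nid, neighbors):
            metrics[action] += 1
    return metrics
```

The returned `Counter` is the metrics log the protocol asks for: compare the tallies cluster by cluster over a few months and the fast-decaying regions of the graph identify themselves.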
Monthly: Hunt for orphans and duplicates.
Sort your graph by connection count. Nodes with zero or one connection are candidates for either deeper integration or removal. Nodes with suspiciously similar names or descriptions are candidates for merging.
Search for concepts you've been thinking about recently that don't appear in the graph at all. These are the missing nodes — ideas that live in your head but haven't been externalized yet.
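Both monthly hunts are mechanical enough to sketch. Assuming nodes are identified by title strings and edges are title pairs (and with a similarity cutoff of 0.85 as a tunable guess), low-degree nodes and suspiciously similar titles can be listed directly:

```python
import difflib
from collections import Counter

def low_connection_nodes(titles, edges, threshold=1):
    """Nodes at or below the degree threshold: integrate them or archive them."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return sorted(t for t in titles if degree[t] <= threshold)

def merge_candidates(titles, cutoff=0.85):
    """Pairs of suspiciously similar titles: candidates for merging."""
    ts = sorted(titles)
    return [(a, b) for i, a in enumerate(ts) for b in ts[i + 1:]
            if difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio() >= cutoff]
```

String similarity only catches near-identical names; "sunk cost fallacy" and "escalation of commitment" will never match, which is why the missing-node search in the paragraph above still has to happen in your head.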
Quarterly: Assess the whole.
Step back and look at the graph's overall structure. Which domains have grown? Which have stagnated? Which connections between domains are strong, and which are conspicuously absent? This is the gardener's seasonal review — not tending individual plants but evaluating the layout.
The quarterly review is also when you question your categories. Domains that made sense six months ago may no longer reflect how you think. Tags that were useful may have become too broad or too narrow. The graph's organizational structure is itself subject to decay.
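The "conspicuously absent" cross-domain connections can be made concrete. A sketch, assuming each node is assigned to exactly one domain (a simplification; real graphs overlap):

```python
from collections import Counter

def domain_edge_matrix(domain_of, edges):
    """Count edges per (sorted) domain pair; `domain_of` maps node id -> domain."""
    counts = Counter()
    for a, b in edges:
        counts[tuple(sorted((domain_of[a], domain_of[b])))] += 1
    return counts

def missing_domain_pairs(domain_of, edges):
    """Domain pairs with no connecting edge: the review's conspicuous absences."""
    counts = domain_edge_matrix(domain_of, edges)
    domains = sorted(set(domain_of.values()))
    return [(x, y) for i, x in enumerate(domains) for y in domains[i + 1:]
            if (x, y) not in counts]
```

A zero in this matrix is not automatically a problem; two domains may simply be unrelated. But it is exactly the kind of structural fact the quarterly step-back exists to put in front of you.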
Maintenance is not overhead — it is the practice
There's a natural resistance to maintenance. It feels like overhead — time spent not learning, not creating, not making progress. The whole point of a knowledge graph is to capture and connect ideas. Reviewing old connections feels like going backward.
This is the same fallacy that makes engineers resist refactoring. "We should be building features, not cleaning up old code." But Lehman's Law of Increasing Complexity is unambiguous: without explicit maintenance work, complexity increases to the point where the system becomes unusable. The "overhead" of refactoring is what makes future feature development possible.
The same is true for your graph. Maintenance is not time stolen from learning. It is the practice that keeps your externalized thinking trustworthy. An unmaintained graph is worse than no graph at all — it gives you false confidence that your knowledge is organized when it's actually fragmenting.
Every node you update is a moment of genuine cognitive work. When you re-examine a connection and ask, "Do I still believe this?" — you are thinking. When you discover two nodes that should be linked and draw the edge between them, you are synthesizing. When you archive a node that no longer serves your understanding, you are exercising epistemic honesty.
Maintenance is not the price you pay for having a graph. Maintenance is the ongoing practice of keeping your external mind synchronized with your internal one.
The graph that's maintained is the graph that's trusted
Here is the ultimate test of whether your maintenance practice is working: do you consult your graph when you're thinking through a hard problem?
If yes — if you reach for your graph the way you reach for a search engine or a colleague — then the graph is trustworthy. It reflects your current thinking closely enough that consulting it produces value.
If no — if the graph sits there while you think in your head or start from scratch every time — then the graph has decayed past the trust threshold. You've stopped believing it reflects reality. And you were probably right to stop.
The difference between these two states is not the size of the graph, the sophistication of the tools, or the elegance of the taxonomy. It is the maintenance. A small, well-maintained graph that you trust and consult daily is infinitely more valuable than a massive, unmaintained graph that you built once and never touched again.
L-0355 showed you that your graph grows by accretion. This lesson shows you that growth without maintenance produces decay, not power. The next lesson, L-0357, shows you what a maintained graph makes possible: visualization that reveals structure your lists and outlines cannot show. But visualization only works on a graph you trust. And trust only comes from maintenance.
The entropy is constant. The maintenance must be too.