Most of your old notes are unreadable, and you know it
Go open a note you wrote six months ago. Not a polished document — a quick capture, a highlight, a stray idea you jotted during a meeting or after reading something that felt important. Read it cold.
Odds are you can't tell what it means. You see the words. You might even recognize the topic. But you can't reconstruct why you wrote it, what you planned to do with it, or how it connects to anything else you know. The note is technically present in your system. It is functionally dead.
This is the most common failure in personal knowledge management, and it has nothing to do with choosing the wrong app or the wrong organizational method. It's a context problem. You captured the content but left the context behind — in your head, in the moment, in the source document you were reading. And context, unlike content, does not wait around for you to come back.
An atomic note should carry enough context to be understood without its original source. That's the principle. The rest of this lesson is about why it's true, what "enough context" actually means, and how to build the habit before your system fills up with orphaned fragments.
Your memory works against you here
Endel Tulving and Donald Thomson formulated the encoding specificity principle in 1973: a retrieval cue works only if it was present when the memory was encoded. In plain language, recall succeeds when the conditions of retrieval match the conditions of learning, and the more the retrieval environment diverges from the encoding environment, the harder recall becomes.
Godden and Baddeley's famous 1975 diving experiment demonstrated this vividly. Divers who learned a list of 36 unrelated words underwater and then tried to recall them on dry land remembered roughly 40% fewer words than divers who recalled in the environment where they had learned. Same words, same people, same cognitive capacity; the only variable was the context mismatch.
Now apply this to your notes. When you capture an idea while reading a particular book, in a particular mental state, at a particular stage of a project, you're encoding it with rich contextual cues — the surrounding argument, the problem you're trying to solve, the emotional resonance of the moment. All of that context lives in your working memory and in the environment. None of it lives in the note.
Three months later, when you revisit that note, every one of those contextual cues is gone. You're not reading that book. You're not in that mental state. You've moved on to different problems. The encoding environment has completely changed, and your bare, context-free note is asking your memory to do precisely what Tulving, Thomson, Godden, and Baddeley showed it does poorly: retrieve meaning under mismatched conditions.
The solution isn't to improve your memory. It's to stop relying on it. The context needs to travel with the note.
What Luhmann and Matuschak got right
Niklas Luhmann maintained a Zettelkasten — a slip-box system — of over 90,000 cards across 40 years of prolific academic output. He published more than 70 books and nearly 400 scholarly articles, and he credited the system, not his own intellect, for much of that productivity.
What made Luhmann's cards work wasn't just that they were atomic — one idea per card. It was that each card carried its own context. Every card included the source of the information, a unique alphanumeric index that placed it within a branching hierarchy of related ideas, and explicit link references to other cards stating why those connections existed. When Luhmann pulled any single card from the box, he didn't need to remember where it came from or why it mattered. The card told him.
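To make that structure concrete, here is a loose model of such a card in code. This is a sketch, not Luhmann's actual format: the class name, the field names, the example index, and the example content are all invented for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class SlipBoxCard:
    """A loose model of a Luhmann-style card: the card itself says
    where it came from, where it sits, and why it links elsewhere."""
    index: str      # position in the branching hierarchy, e.g. "21/3d7"
    idea: str       # one idea per card
    source: str     # where the information came from
    links: dict[str, str] = field(default_factory=dict)  # target index -> why it connects


card = SlipBoxCard(
    index="21/3d7",
    idea="Communication, not people, is the basic unit of social systems.",
    source="Luhmann, Soziale Systeme",
    links={"21/3d8": "develops the same point for organizations"},
)
```

The point is not the data structure; it's that every field a future reader needs is on the card, so pulling any single card out of the box loses nothing.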
Andy Matuschak, building on this tradition, articulates the modern version of the same principle through his concept of evergreen notes. Matuschak's notes are concept-oriented (organized around ideas, not sources), atomic (each note captures a single self-contained unit), and densely linked (each note connects explicitly to related notes). The critical insight is in the "concept-oriented" property. When you organize notes by concept rather than by book or event, you're forced to carry context into the note itself. You can't write a note titled "Chapter 3 key insight" and have it mean anything outside the source. But a note titled "Retrieval degrades when encoding context is absent" carries its meaning with it.
Matuschak puts it directly: better note-taking misses the point; what matters is better thinking. And thinking requires notes that function as self-contained units of meaning — not pointers back to some other document you may never reopen.
The software engineering parallel: documentation that travels
Software engineers learned this lesson decades ago, and they learned it the hard way.
Martin Fowler, in his writing on code as documentation, argues that code is the primary documentation of a software system — but only if it's treated like documentation. The implication: a function, a module, or a class needs to be understandable by someone who has never seen the rest of the codebase. A function named processData() with no docstring, no type annotations, and no comments is the code equivalent of a contextless note. It exists. It does something. Nobody can tell what without reading ten other files.
The principle of self-documenting code is precisely the principle of self-contextualizing notes. A well-named function with clear parameters and a one-line docstring carries its context. A poorly documented function forces every future reader — including the original author six months later — to reconstruct context from the surrounding codebase. This is the same reconstruction tax you pay when you open a contextless note and try to figure out what past-you meant.
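To make the contrast concrete, here is a minimal sketch. The function names, fields, and domain are invented for illustration: the first version is a contextless note in code form; the second carries its provenance, purpose, and connections in its name, types, and docstring.

```python
from datetime import date


# The contextless version: it exists, it does something,
# and nobody can tell what without reading ten other files.
def process_data(d):
    return [x for x in d if x[2] > 0.5]


# The self-documenting version: name, types, and docstring
# carry the context a future reader needs.
def filter_high_confidence_readings(
    readings: list[tuple[str, date, float]],
    threshold: float = 0.5,
) -> list[tuple[str, date, float]]:
    """Keep sensor readings whose confidence score exceeds `threshold`.

    Each reading is (sensor_id, recorded_on, confidence).
    Low-confidence readings are dropped, not corrected, because
    downstream aggregation assumes trustworthy inputs.
    """
    return [r for r in readings if r[2] > threshold]
```

Both functions compute the same thing. Only one of them will still mean something to you, or to anyone else, six months from now.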
Data science has formalized this even further through the concept of data provenance — the practice of recording where data came from, how it was collected, what transformations were applied, and what decisions shaped its current form. Data without provenance is considered unreliable at best, unusable at worst. As one data engineering primer puts it: knowing only that someone was once a baby and is now an adult, with no information about their life in between, gives a useless understanding of a person. The same is true of a note without provenance. You know you wrote it and you know what it says, but you've lost the chain of reasoning that made it meaningful.
Your notes are data. They need provenance.
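Here is a minimal sketch of what provenance looks like in practice, with invented names throughout: every transformation appends to a record that travels with the data, so the "life in between" is never lost.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable


@dataclass
class TracedData:
    """A dataset bundled with the history of how it came to be."""
    values: list[float]
    source: str  # where the raw data came from
    history: list[str] = field(default_factory=list)  # what was done to it

    def apply(self, name: str, fn: Callable[[list[float]], list[float]]) -> "TracedData":
        """Transform the values and record the step in the provenance log."""
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        return TracedData(
            values=fn(self.values),
            source=self.source,
            history=self.history + [f"{stamp} {name}"],
        )


raw = TracedData(values=[3.1, -2.0, 5.7], source="survey_2024_q3.csv")
clean = raw.apply("drop negatives", lambda v: [x for x in v if x >= 0])
print(clean.history)  # the chain of reasoning travels with the data
```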
Situated cognition: why decontextualized knowledge fails
Brown, Collins, and Duguid published "Situated Cognition and the Culture of Learning" in 1989, and their core argument directly addresses why contextless notes fail. They argued that knowledge is situated: it is a product of the activity, context, and culture in which it was developed. Many educational practices assume that conceptual knowledge can be abstracted from situations and transferred as a decontextualized package. Brown, Collins, and Duguid made the case that this assumption is wrong.
Their analogy is tools: just as a carpenter's tool can only be fully understood through use in authentic situations, knowledge only has meaning within the context that gives it purpose. You can memorize the definition of a saw, but you understand a saw by using it to cut a particular piece of wood in a particular situation. Knowledge stripped of context is like a tool in a museum display case — recognizable but inert.
This is exactly what happens when you write a note that strips away the situated context. The note becomes a decontextualized fragment — a museum piece. It has a label, but it has no function. The challenge, then, is to make notes portable without losing their situated meaning. You can't carry the entire situation with you, but you can carry enough of it: the source, the problem you were solving, the connection to other ideas, the reason it mattered.
This is what "context belongs with the atom" means in practice. You're not trying to reproduce the entire situation. You're trying to capture the minimum viable context that lets the note function outside its original environment. Three fields usually suffice: where it came from (source), why it mattered (spark), and what it connects to (connection).
AI can only work with what the note contains
If your notes serve only your own future recall, the context problem is bad. If your notes also serve as input to AI systems — as they increasingly do — the problem is catastrophic.
When you feed a note to a large language model, the model has no access to your memory. It cannot reconstruct what you were reading, what problem you were solving, or why you captured this particular fragment. It sees exactly and only what the note contains. If the note says "The map is not the territory," the model will produce a generic summary of Korzybski's general semantics. If the note says "Korzybski's 'map is not the territory' — captured while reviewing our team's architecture diagrams, which conflate the system model with the running system — relates to L-0004 (the observer is not the observed)," the model can do something useful: it can reason about your specific application, suggest related ideas from your own system, and help you develop the insight further.
Research from 2025 on AI context windows confirms this at scale. Studies found that when information is fragmented across multiple prompts — what researchers call "sharded prompts" — model performance drops by an average of 39%. One model fell from 98.1% accuracy to 64.1% simply because context was spread across turns rather than delivered in a single self-contained block. The mechanism is what researchers call context confusion: models make premature assumptions in early turns when information is incomplete, and these assumptions poison later reasoning.
Self-contained notes produce better AI interactions for the same reason they produce better human recall: the context is present at the point of use, not scattered across some external system that the reader — human or machine — cannot access.
This is one of the most practical arguments for contextual atoms. As AI becomes a standard component of knowledge work, every note you write is a potential prompt. Notes without context produce hallucinations. Notes with context produce insight.
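A minimal sketch of what this means mechanically, assuming notes stored as plain dictionaries with the source, spark, and connection fields this lesson recommends: the prompt a model sees is exactly the string you build, and nothing more.

```python
def note_to_prompt(note: dict) -> str:
    """Assemble everything the model will know about this note.

    The model sees exactly this string; any field left empty is
    context it can never recover.
    """
    parts = [note["content"]]
    if note.get("source"):
        parts.append(f"Source: {note['source']}")
    if note.get("spark"):
        parts.append(f"Why captured: {note['spark']}")
    if note.get("connections"):
        parts.append("Related notes: " + "; ".join(note["connections"]))
    return "\n".join(parts)


bare = {"content": "The map is not the territory."}
contextual = {
    "content": "The map is not the territory.",
    "source": "Korzybski, Science and Sanity",
    "spark": "Our architecture diagrams conflate the system model "
             "with the running system.",
    "connections": ["L-0004 (the observer is not the observed)"],
}

print(note_to_prompt(bare))        # a generic aphorism: expect a generic summary
print(note_to_prompt(contextual))  # enough context to reason about your case
```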
The minimum viable context
None of this means every note needs a 500-word preamble. Overloading notes with context defeats the purpose of atomicity. The goal is minimum viable context — the smallest amount of metadata that lets the note function independently.
For most notes, three fields are enough (sketched in code after the list):
Source. Where did this come from? A book title and chapter, a conversation with a specific person, an article URL, a meeting on a specific date. This isn't citation for academic rigor — it's provenance for future utility.
Spark. Why did you capture this? What problem were you solving, what question were you pursuing, what connection did you see? One sentence is enough. "Captured because this contradicts my assumption about team velocity" tells your future self everything. The bare highlight tells nothing.
Connection. What other idea does this relate to? A link to another note, a tag, a reference to a project. This is the minimum graph edge that prevents the note from becoming an island.
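Here is one way to make the discipline checkable, sketched as a Python dataclass with invented field names. The test is not whether the metadata is exhaustive, but whether the note can stand without its source.

```python
from dataclasses import dataclass, field


@dataclass
class AtomicNote:
    """One idea, plus the minimum context it needs to travel."""
    content: str                 # the idea itself, one self-contained claim
    source: str = ""             # provenance: book and chapter, URL, meeting date
    spark: str = ""              # why it was captured: problem, question, contradiction
    connections: list[str] = field(default_factory=list)  # links to related notes

    def missing_context(self) -> list[str]:
        """Name the fields a future reader (or an AI) will wish you had filled."""
        gaps = []
        if not self.source:
            gaps.append("source")
        if not self.spark:
            gaps.append("spark")
        if not self.connections:
            gaps.append("connection")
        return gaps


note = AtomicNote(content="Retrieval degrades when encoding context is absent.")
print(note.missing_context())  # ['source', 'spark', 'connection'] -> an orphan
```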
Luhmann did this with source references, hierarchical indices, and explicit link annotations. Matuschak does it with concept-oriented titles, dense bidirectional links, and notes that are written as freestanding arguments. The specific format matters less than the discipline: every atom carries enough context to survive outside its source.
From separation to self-sufficiency
In the previous lesson, you learned to separate observations from interpretations — to distinguish what you saw from what you concluded. That separation is necessary but not sufficient. A cleanly separated observation that lacks context is still an orphan. "Conversion rate dropped 12% this week" is a clean observation. But without context — which product? measured against what baseline? during what campaign? — it's useless three months later.
Context is what transforms a separated, atomic note from a fragment into a building block. And building blocks are what you need, because the next lesson addresses the question that naturally follows: how finely should you decompose? In "Granularity is a choice, not a discovery," you'll learn that the level of detail you capture is a decision driven by purpose, not an inherent property of the idea itself. But that choice only works if each unit, at whatever granularity you choose, carries its own context. Otherwise you're just choosing how small to make your orphans.
Context belongs with the atom. Not in your memory, not in the source document, not in some future review session you'll never get to. With the atom, where it can do its work.