You have a graveyard of highlights you can no longer use
Open your read-later app, your bookmarks, your highlights export. Scroll back three months. Pick any item at random and read it cold.
You see the words. You probably recognize the topic. But can you answer the three questions that determine whether a captured note has any practical value? Why did you save this? What were you working on when it caught your attention? What were you going to do with it?
If you are like most people, the answer to all three is some variation of "I have no idea."
This is not a memory problem. It is a capture design problem. You recorded the content — the quote, the statistic, the idea — but you left the context behind. The context lived in your working memory at the moment of capture: the article you were reading, the project deadline pressing on you, the connection you saw to something else entirely. None of that made it into the note. And context, unlike content, does not wait. It begins decaying the instant you close the tab.
In the earlier lesson, context belongs with the atom, you learned the principle: an atomic note should carry enough context to be understood without its original source. That lesson established the why. This lesson is about the when and how: specifically, how to embed context into your capture workflow so that every note arrives in your system already equipped to survive on its own.
The science of context-dependent memory
Endel Tulving and Donald Thomson formalized this problem in 1973 with the encoding specificity principle: a retrieval cue is effective only if it was encoded alongside the original memory. The conditions of recall must overlap with the conditions of learning, or retrieval degrades. This is not a soft suggestion about study habits. It is a structural constraint on how human memory works.
Godden and Baddeley's 1975 diving experiment made the principle visceral. Divers who learned a word list underwater recalled roughly 40% fewer words when tested on dry land than divers tested in the same underwater environment. Same words, same cognitive capacity, same elapsed time. The only variable was context mismatch between encoding and retrieval.
Your capture workflow creates exactly this mismatch by design. You encode an idea in one context — reading a specific article, in a specific mental state, pursuing a specific problem — and then attempt to retrieve it weeks or months later in a completely different context. Every contextual cue that made the idea feel vivid and obvious at capture time is absent at retrieval time. The bare note is asking your memory to do precisely what Tulving proved it cannot: reconstruct meaning from mismatched conditions.
Hermann Ebbinghaus established the forgetting curve in the 1880s, showing that roughly 50-70% of new information is lost within 24 hours without reinforcement. But the loss is not uniform. What fades fastest is not the content itself — you may still recognize a quote months later — but the surrounding context: why it mattered, what triggered it, how it connected to your thinking. Context is the first casualty of forgetting. And context is the only thing that makes a captured note actionable rather than merely familiar.
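If it helps to see the shape of the decay, the curve is often approximated as simple exponential decline. The model below is a common simplification, and the stability value in it is an assumed calibration for illustration, not a number from Ebbinghaus himself:

```latex
% Exponential approximation of the forgetting curve (a common simplification).
% S ("stability") controls how fast retention decays. S = 22 hours is an
% assumed value, chosen so that roughly two-thirds of new material is gone
% after one day, consistent with the 50-70% range cited above.
R(t) = e^{-t/S}, \qquad R(24\,\mathrm{h}) \approx e^{-24/22} \approx 0.34
```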
The implication is direct: if you do not record context at capture time, you are relying on the most perishable component of memory to persist through the longest delay. That is a structural bet against yourself.
What Luhmann encoded on every card
Niklas Luhmann maintained over 90,000 index cards across 40 years and published more than 70 books. When people study his Zettelkasten, they fixate on the atomic format — one idea per card. What they overlook is the context machinery that made each card functional.
Every card in Luhmann's system carried three layers of context. First, bibliographic provenance: Luhmann maintained a separate bibliographic slip-box where each literature note recorded the source, with back-of-card references like "on page x is this, on page y is that." When he wrote a permanent note for his main box, the source trail was preserved. Second, a unique alphanumeric index that positioned the card within a branching hierarchy of related ideas: not a topical filing system, but a structural address that encoded where this idea lived in relation to other ideas. Third, explicit cross-references to other cards, which did not merely assert that a connection existed but created a navigable path to the other contexts where the idea was relevant.
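To make those three layers concrete, here is a minimal sketch of a card modeled as a data structure. The class name, field names, and the example index format are illustrative assumptions; Luhmann worked on paper, and this is just one way to represent what each card carried.

```python
# A sketch of the three context layers on a Luhmann card, modeled as data.
# The ZettelCard class and all example values are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class ZettelCard:
    index: str        # layer 2: structural address in the branching hierarchy
    text: str         # the idea itself, one idea per card
    source: str       # layer 1: bibliographic provenance from the literature slip-box
    cross_refs: list[str] = field(default_factory=list)  # layer 3: addresses of related cards

card = ZettelCard(
    index="21/3d7",   # hypothetical address: branch 21, sub-branch 3d, card 7
    text="Functional differentiation increases a system's sensitivity to its environment.",
    source="Literature note: Parsons 1951, p. 112",
    cross_refs=["21/3d", "57/12"],
)
```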
Luhmann was explicit about why this mattered. He described making notes as a "transition from one context to another" — the context of the original source to the context of his own thinking. The card did not merely store the idea. It translated the idea from its source context into his working context, carrying enough metadata that the translation could be understood months or years later.
This is what most modern capture workflows miss. Highlighting a passage and dropping it into a read-later app is not capture in the Luhmannian sense. It is extraction without translation. The content crosses the boundary from source to system, but the context — why it was selected, what it connects to, what problem it addresses — stays behind on the other side.
Progressive context: Forte's layers
Tiago Forte's progressive summarization offers a different lens on the same problem. Forte's method works in layers: first capture the passage, then bold the key points, then highlight the most critical phrases, and finally write a brief executive summary in your own words. Each layer represents a deeper engagement with the material, and each layer adds context that the raw excerpt lacks.
The insight buried in Forte's method is that context does not need to arrive all at once. The initial capture can be lightweight — a highlight plus source and spark — and context can deepen progressively as you revisit the note. Layer 1 is raw capture with metadata. Layer 2, during a batch processing pass, adds emphasis on what matters most. Layer 3, when you encounter the note again in a different project, adds the cross-connection you could not have seen originally.
This resolves the tension between capture speed and context richness. You do not need to write a paragraph of context for every highlight — that kind of friction kills capture habits within days. You need a minimum viable context at capture time (source, spark, link) and a system that invites progressive enrichment over time. The initial 30 seconds of context-recording ensures the note survives. The later layers make it thrive.
Forte puts it well: the goal is designing notes that balance "context and discoverability" — notes rich enough to be useful but lean enough to be captured in the flow of real work. Progressive summarization is context accumulation spread across time, which is far more sustainable than demanding full context at the moment of capture.
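As a rough sketch of how context can accumulate in layers, the structure below separates capture-time metadata from later enrichment passes. The Note class, its field names, and the enrich method are assumptions for illustration, not Forte's own tooling.

```python
# A minimal model of progressive context layers on one note.
# Layer 1 fields are filled at capture; later layers are added on revisits.
from dataclasses import dataclass, field

@dataclass
class Note:
    excerpt: str                                          # layer 1: raw capture
    source: str                                           # layer 1: capture-time metadata
    spark: str                                            # layer 1: why it was captured
    key_points: list[str] = field(default_factory=list)   # layer 2: emphasis added in a batch pass
    connections: list[str] = field(default_factory=list)  # layer 3: links seen on later encounters
    summary: str = ""                                      # final layer: a summary in your own words

    def enrich(self, key_point: str | None = None,
               connection: str | None = None,
               summary: str | None = None) -> None:
        """Each revisit deepens one layer without demanding full context up front."""
        if key_point:
            self.key_points.append(key_point)
        if connection:
            self.connections.append(connection)
        if summary:
            self.summary = summary
```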
Situated cognition: why stripped-down captures fail
Brown, Collins, and Duguid published "Situated Cognition and the Culture of Learning" in 1989, and their argument cuts directly to the heart of contextless capture. They demonstrated that knowledge is situated — it is a product of the activity, context, and culture in which it develops. Many educational practices assume that conceptual knowledge can be abstracted from situations and transferred as a decontextualized, self-sufficient package. Brown, Collins, and Duguid showed that this assumption is wrong.
Their analogy is tools. A saw can be defined in a dictionary, but you understand a saw by using it to cut a specific piece of wood for a specific purpose. Knowledge stripped of its situational context is like a tool in a museum display case — recognizable but inert. It has a label. It has no function.
This is exactly what happens when you capture an idea without recording the situation that gave it meaning. The highlight becomes a museum piece. You can read it, you can categorize it, you can even search for it. But you cannot use it because the situational context that made it actionable is gone. You captured the tool and left behind the workshop.
The remedy is not to carry the entire situation with you — that is impossible. It is to carry the minimum viable fragment of the situation: what you were doing when you encountered this (the activity), where it came from (the source), and why it mattered to the specific problem you were solving (the purpose). These three data points are enough to reconstruct the situated meaning even when the original context has long since dissolved.
Data provenance: the engineering parallel
Software engineering and data science formalized this exact principle decades ago under the concept of data provenance — the practice of recording where data came from, how it was collected, what transformations were applied, and what decisions shaped its current form.
In data engineering, data without provenance is considered unreliable at best and unusable at worst. You would never trust a dataset that arrived with no information about its source, collection method, or transformation history. The number might be correct, but without provenance you cannot verify it, cannot assess its relevance, and cannot debug it when something goes wrong downstream.
IBM's definition captures it precisely: data provenance is "information about the entities, activities, and agents involved in producing a piece of data" that "can be used to assess quality, reliability, and trustworthiness." Replace "data" with "note" and you have the entire argument of this lesson in one sentence. A note without provenance — without source, capture reason, and connection metadata — cannot be assessed for quality, reliability, or relevance. It is an untraceable data point in a system that depends on traceability.
Your notes are data. Your future self is the downstream consumer. Every capture without context is an undocumented data point entering a pipeline that will eventually fail to produce reliable outputs.
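Mapped onto a note, the provenance fields from that definition might look like the sketch below. The Provenance class and its field names are an assumed mapping for illustration, not a standard schema.

```python
# A hedged mapping of provenance concepts (entities, activities, agents)
# onto a captured note. Field names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Provenance:
    entity: str    # what produced the content: the source document or dataset
    activity: str  # what you were doing when you captured it
    agent: str     # who or what did the capturing (you, a web clipper, an import script)

prov = Provenance(
    entity="Danziger et al. 2011, via Kahneman ch. 3",
    activity="Researching decision fatigue for a launch-timing review",
    agent="Manual highlight, read-later app export",
)
```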
The capture-time context protocol
The research converges on a single practical implication: context must be recorded at capture time, not later. Not during a weekly review, not during a batch processing session, not "when you get around to it." At the moment of capture, while the situational context still lives in your working memory.
This is not because delayed context is bad. It is because delayed context rarely happens. The intention to "add context later" is one of the most reliable failure modes in personal knowledge management. You batch-process your inbox (as you learned in batch processing beats continuous processing), but by the time you reach a bare highlight from three days ago, the spark that made you capture it is already fading. You reconstruct something, but it is thinner, less specific, less useful than what you would have recorded in the original moment.
The protocol is simple and should take no more than 30 seconds:
Source. Where did this come from? A book title and page number, an article URL, a conversation with a named person, a meeting on a specific date. This is provenance — not for academic citation, but for future traceability.
Spark. Why did you capture this? What problem were you solving, what question were you pursuing, what surprised you about it? One sentence is enough. "Captured because this contradicts my assumption about team velocity" is a complete spark. The bare highlight without it tells your future self nothing.
Forward link. What does this connect to? One other note, one project, one open question. This is the minimum graph edge that prevents the note from becoming an orphan. It does not need to be profound. "Relates to the onboarding project" or "connects to my note on feedback loops" is sufficient.
Three fields. Thirty seconds. The difference between a note that compounds and one that rots.
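If you prefer to see the protocol as a structure rather than a checklist, here is a minimal sketch. The Capture class and the capture function are illustrative assumptions, not any specific app's API; the point is simply that a capture missing any of the three context fields is rejected.

```python
# A minimal sketch of the capture-time context protocol: content plus
# source, spark, and forward link, recorded at the moment of capture.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Capture:
    content: str       # the highlight, quote, or idea itself
    source: str        # provenance: book + page, URL, person, meeting date
    spark: str         # why you captured it: the problem or question it serves
    link: str          # one forward link: a note, project, or open question
    captured_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def capture(content: str, source: str, spark: str, link: str) -> Capture:
    """Refuse to store a bare highlight: all three context fields are required."""
    for name, value in (("source", source), ("spark", spark), ("link", link)):
        if not value.strip():
            raise ValueError(f"Missing context field: {name}")
    return Capture(content, source, spark, link)

note = capture(
    content="Judges granted parole at ~65% after meals, near 0% before.",
    source="Danziger et al. 2011, via Kahneman ch. 3",
    spark="Was our Friday 5pm launch decision affected by decision fatigue?",
    link="Relates to my note on batch processing timing",
)
```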
Evergreen notes as the gold standard
Andy Matuschak's evergreen notes represent the fully realized version of this principle. Matuschak's notes are concept-oriented (organized around ideas, not around sources or events), atomic (one idea per note), and densely linked (each note connects explicitly to related notes). The concept-oriented property is the one that enforces context most ruthlessly.
When you organize by concept rather than by source, you are forced to articulate what the idea is in terms that are independent of where you found it. You cannot title a note "Chapter 3 key insight" and have it mean anything outside the source document. But "Retrieval degrades when encoding context is absent" carries its meaning into any context where you encounter it. The concept-oriented title is itself a form of captured context — it encodes the what in a way that does not depend on the where or when.
Matuschak is explicit that this practice is not about better note-taking. It is about better thinking. "Better note-taking misses the point; what matters is better thinking." And better thinking requires notes that function as self-contained units of meaning — not pointers back to some other document you may never reopen, but standalone claims that carry enough context to be engaged with immediately.
The capture-time context protocol (source, spark, link) is the minimum viable version of what Matuschak describes. You may not write a fully concept-oriented evergreen note every time you capture something — that friction would kill the habit. But you can ensure that every capture carries enough context to eventually become one, either through progressive enrichment or through a processing pass where you transform raw captures into evergreen form.
AI and the Third Brain: context is the input layer
If context matters for your own future recall, it matters exponentially more for AI-assisted thinking. When you feed a note to a large language model, the model has zero access to your memory. It cannot infer what you were reading, what project you were working on, or why this particular fragment caught your attention. It processes exactly and only what the note contains.
A bare highlight — "Judges granted parole at 65% after meals, near 0% before" — produces a generic summary of the Danziger study. Interesting, but not useful for your specific situation. The same highlight with context — "Source: Danziger et al. 2011 via Kahneman ch. 3. Spark: investigating whether our team's Friday 5pm launch decision was affected by decision fatigue. Connection: relates to batch processing timing and cognitive load management" — gives the model everything it needs to reason about your specific application. It can connect the research to your team dynamics, suggest related concepts from your knowledge graph, and help you develop the insight in a direction that matters to your work.
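One way to see why those extra fields pay off is to look at what actually reaches the model. The sketch below assembles a prompt from a context-rich capture; the build_prompt function and its wording are assumptions for illustration, not any particular tool's API.

```python
# Assembling an AI prompt from a capture that carries source, spark, and link.
# A bare highlight would hand the model only the first line.
def build_prompt(content: str, source: str = "", spark: str = "", link: str = "") -> str:
    parts = [f"Note: {content}"]
    if source:
        parts.append(f"Source: {source}")
    if spark:
        parts.append(f"Why I captured it: {spark}")
    if link:
        parts.append(f"Connected to: {link}")
    parts.append("Using this context, help me apply the note to the problem named above.")
    return "\n".join(parts)

prompt = build_prompt(
    content="Judges granted parole at ~65% after meals, near 0% before.",
    source="Danziger et al. 2011, via Kahneman ch. 3",
    spark="Investigating whether our Friday 5pm launch decision was affected by decision fatigue.",
    link="Batch processing timing and cognitive load management",
)
```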
This is the Third Brain argument for context. Your first brain (biological memory) forgets context fastest. Your second brain (note system) preserves whatever you put in it. Your third brain (AI layer) amplifies whatever the second brain contains. Context-free notes produce generic AI outputs. Context-rich notes produce personalized, situationally relevant reasoning. The context you record in 30 seconds at capture time determines whether your AI interactions produce insight or noise for months afterward.
Research on fragmented AI context confirms this at scale. When information is scattered across prompts without cohesion, model accuracy drops dramatically — in some studies by 30% or more. Self-contained, context-rich notes produce better AI interactions for the same reason they produce better human recall: the meaning is present at the point of use, not distributed across some external context that neither the human nor the machine can access.
The 30-second habit that changes everything
The principle is simple: capture context, not just content. The practice is a 30-second addition to your existing capture workflow. The research — from Tulving's encoding specificity to Brown's situated cognition to modern data provenance standards — unanimously supports the same conclusion: content without context degrades into noise, and the only reliable time to capture context is the moment of encoding.
You already learned in context belongs with the atom that an atomic note must carry enough context to function independently. This lesson applies that principle to the specific moment where context is most available and most likely to be lost: the capture moment itself. Not during organization, not during review, not during some future processing pass. Now, while the spark is still alive.
Source. Spark. Forward link. Thirty seconds. Every time.
But what about the moments when even 30 seconds of typing is too much friction? When you are driving, walking, in a meeting, in a conversation — when the spark is live but your hands are not free? That is where context loss is most severe, because the ideas that arise in high-friction moments are often the most situationally rich and the hardest to reconstruct later. In the next lesson, voice capture for high-friction moments, you will learn how to preserve both content and context when typing is impossible — because the best capture protocol in the world is useless if you cannot execute it when it matters most.