A quote is not a thought. It is a fragment of one.
In 2005, linguist Matthew McGlone coined the term contextomy — the selective excerpting of words from their original linguistic context in a way that distorts the source's intended meaning. His research demonstrated something unsettling: a contextomized quotation does not merely prompt audiences to form a false impression of the source's intentions. It contaminates subsequent interpretation of the quote even when it is restored to its original context (McGlone, 2005). Once you have seen the decontextualized version, you cannot fully unsee it.
McGlone tested this by showing participants fabricated quotes about affirmative action that were strategically excerpted from neutral paragraphs. Depending on which fragment was selected, the same source could be portrayed as either favoring or opposing the policy. Participants formed confident judgments about the speaker's position — judgments that were entirely artifacts of what had been removed.
This is not a quirk of political rhetoric. It is the default condition of nearly every piece of information you encounter. The quote in a headline. The statistic in a tweet. The metric in a dashboard. The finding in an abstract. Each arrives having been separated from its original context, and that separation is not neutral. Every act of decontextualization is an act of meaning transformation.
The mechanism: what context actually carries
Context is not decoration added around the "real" information. Context is part of the information. It carries at least four irreplaceable elements:
Intent. Why the statement was made, to whom, and for what purpose. "We should consider layoffs" in a brainstorming session means something fundamentally different from the same words in a board resolution.
Scope. The boundaries of what was being discussed. A study finding that "meditation reduces anxiety" might have measured a specific population (college students during exams) using a specific intervention (20 minutes of guided breathing) over a specific duration (8 weeks). Remove the scope and the finding appears universal.
Contrast. What was measured alongside the claim. A company reporting "30% revenue growth" sounds impressive until you learn the industry average was 85%. The number alone is meaningless without its reference class.
Sequence. What came before and after. A politician's statement that "we should defund the program" reads very differently when the next sentence was "...and redirect those resources to a more effective version of it."
Strip any of these away and you do not get a simpler version of the truth. You get a different truth entirely — or no truth at all.
Statistics without context: the oldest trick in epistemology
Darrell Huff demonstrated this in 1954 with How to Lie with Statistics, a book that remains the best-selling statistics text of the twentieth century precisely because its central lesson never expires. Huff showed that data separated from its collection method, sample characteristics, and comparison baseline becomes infinitely malleable. "The secret language of statistics," he wrote, "so appealing in a fact-minded culture, is employed to sensationalize, inflate, confuse, and oversimplify" (Huff, 1954).
The most elegant demonstration of context-dependent statistics is Simpson's paradox. In 1973, UC Berkeley's graduate admissions data showed that 44% of male applicants were admitted compared to only 35% of female applicants — an apparent case of gender discrimination. But when researchers disaggregated the data by department, the bias reversed: in four of six departments examined, women were more likely to be admitted than men. The confounding variable was that women disproportionately applied to departments with lower overall admission rates (Bickel, Hammel & O'Connell, 1975).
The aggregate statistic — "Berkeley admits fewer women" — was not false. It was decontextualized. And decontextualized data does not just lose precision. It inverts meaning. The same numbers, depending on whether you include the departmental context, support opposite conclusions.
You encounter this pattern constantly. A company's churn rate means nothing without knowing the industry baseline. A team's velocity means nothing without knowing what changed in scope. A country's GDP growth means nothing without knowing its debt trajectory. The number is never the information. The number plus its context is the information.
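The inversion is easy to reproduce. The sketch below uses invented numbers (not the actual Berkeley figures) with the same shape as the 1975 analysis: within each department, women are admitted at a higher rate, yet the aggregate rate favors men, because women disproportionately applied to the harder department.

```python
# Invented illustrative numbers, not the actual Berkeley data.
# Each entry maps gender -> (admitted, applied) for one department.
data = {
    "Easy dept": {"men": (80, 100), "women": (18, 20)},
    "Hard dept": {"men": (4, 20), "women": (30, 100)},
}

def rate(admitted, applied):
    return admitted / applied

# Disaggregated: women have the higher admission rate in BOTH departments.
for dept, groups in data.items():
    for gender, counts in groups.items():
        print(f"{dept}, {gender}: {rate(*counts):.0%}")

# Aggregated: men appear favored (70% vs 40%), because women mostly
# applied to the department with the lower overall admission rate.
for gender in ("men", "women"):
    admitted = sum(data[d][gender][0] for d in data)
    applied = sum(data[d][gender][1] for d in data)
    print(f"Overall, {gender}: {rate(admitted, applied):.0%}")
```

Dropping the departmental context does not simplify the data; it reverses the conclusion the data supports.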
The replication crisis: what happens when science loses context
The most expensive demonstration of context loss in modern history is the replication crisis in psychology. In 2015, the Open Science Collaboration attempted to replicate 100 studies published in three leading psychology journals. The results were stark: while 97% of original studies reported statistically significant effects, only 36% of replications achieved significance. The mean effect size of replicated findings was half the magnitude of the originals (Open Science Collaboration, 2015).
Multiple factors contributed, but one of the most underexamined is context loss. Scientific findings are produced within specific experimental contexts — particular populations, lab environments, cultural moments, experimenter interactions, and procedural details that often go unrecorded or underspecified. When a different team attempts replication, they reproduce the procedure but not the context. And the context, it turns out, was carrying more of the effect than anyone realized.
This is not a failure of science. It is a demonstration of an epistemic principle: information that appears context-independent rarely is. The original researchers did not deliberately strip context. They simply assumed — as we all do — that the finding was separable from the conditions that produced it. That assumption is the root of decontextualization.
Writing itself is a decontextualization technology
Walter Ong, in Orality and Literacy (1982), argued that writing is fundamentally a technology of decontextualization. In oral cultures, knowledge exists only in the context of its telling — embedded in the speaker's tone, the audience's responses, the shared situation. A proverb spoken by an elder at a specific moment to a specific person carries the full weight of that context. The same proverb written in a book floats free.
Ong wrote that writing "separates the knower from the known" and "fosters abstractions that disengage knowledge from the arena where human beings struggle with one another." In oral traditions, knowledge is always situated — connected to people, places, relationships, and immediate needs. Writing turns situated knowledge into portable knowledge, which is enormously powerful. But portability comes at a cost: the context that originally gave the knowledge its meaning must be reconstructed by every new reader, in every new setting, often without adequate information to do so.
This is not an argument against writing. It is a recognition that every medium of knowledge transmission has a characteristic form of information loss, and writing's characteristic loss is context. Every document you read — this one included — has been separated from the conditions of its creation. The author's intent, the surrounding conversation, the specific problem being addressed, the alternatives that were considered and rejected: all absent. You reconstruct what you can. You fill in what you cannot. And the gap between the original context and your reconstruction is where misunderstanding lives.
Organizational knowledge decay: context as institutional memory
Nonaka and Takeuchi's SECI model of knowledge management (1995) identified a critical vulnerability in how organizations handle knowledge. Tacit knowledge — the kind embedded in experience, judgment, and situational awareness — is inherently context-specific. It exists in the practitioner, not in the document.
When organizations attempt to convert tacit knowledge to explicit knowledge (what Nonaka called "externalization"), they inevitably strip context. The expert's written procedure captures what to do but not why, not when this approach fails, not what the environment looked like when this solution was developed. Over time, the documented procedure persists while the context of its creation is forgotten. Teams follow processes without understanding their rationale. They apply solutions designed for conditions that no longer exist. They lose the ability to adapt because they never received the context that would tell them when the documented approach stops working.
This is why institutional knowledge does not merely decay — it misleads. A decontextualized process is worse than no process, because it carries the authority of documentation without the wisdom of understanding.
AI hallucination: decontextualization at industrial scale
Large language models represent the most powerful decontextualization engines ever built. During training, an LLM ingests billions of text fragments, each separated from its source, author, date, purpose, and surrounding discourse. The model learns statistical associations between tokens — patterns of what words tend to follow other words — without access to the contexts that gave those words their meaning.
When an LLM generates text, it produces sequences that are statistically plausible given its training distribution. This is why hallucinations are not bugs but structural features of the architecture. The model generates "a study by researchers at Stanford found that..." not because it has access to such a study, but because that pattern of words frequently preceded factual claims in its training data. The context — whether such a study exists, what it actually found, what its limitations were — was never part of the model's representation. It was stripped during training and cannot be reconstructed during generation.
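A toy bigram model makes the structural point concrete. The two-sentence corpus below is invented; the model records only which word follows which, so at generation time it cannot tell whether a continuation came from the supporting sentence or the critical one.

```python
import random
from collections import defaultdict

# Toy "training corpus": two fragments with opposite contexts of production.
corpus = [
    "a study by researchers found the effect was real",
    "a study by critics found the effect was overstated",
]

# Training records only token-to-token adjacency; source, author, and
# purpose are stripped.
follows = defaultdict(list)
for text in corpus:
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

# Generation picks statistically plausible continuations. The output can
# freely mix the two sources, because the contexts that distinguished
# them were never represented.
random.seed(0)
word, output = "a", ["a"]
for _ in range(8):
    word = random.choice(follows.get(word, ["a"]))
    output.append(word)
print(" ".join(output))
```

Real models operate over vastly richer representations, but the asymmetry is the same: the statistics of the text survive training; the provenance of the text does not.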
Research on LLM hallucination consistently finds that the problem intensifies when models encounter topics where their training data was sparse, contradictory, or heavily decontextualized. The model cannot distinguish between a claim that appeared in a peer-reviewed paper and the same claim that appeared in a blog post criticizing it, because both are just token sequences separated from their contexts of production (Huang et al., 2023).
This matters for your epistemic practice because AI-generated text feels contextually rich. It arrives in complete sentences with apparent citations and confident framing. But that feeling of contextual richness is itself a statistical artifact — the model has learned that authoritative text has certain structural features, and it reproduces those features without the underlying epistemic substance.
The Third Brain application
Your external knowledge system — your notes, documents, databases, and AI tools — is a context-preservation system or it is nothing. Every note you take that records what without recording why, when, and under what conditions is a future decontextualization waiting to happen.
Practical implications:
When capturing information, capture context metadata. For every claim, finding, or decision you record, include: the source, the date, the purpose, and at least one sentence about the conditions under which it was produced. This takes ten seconds and prevents context decay that would otherwise take hours to reverse.
When retrieving old notes, reconstruct before applying. Your note from eighteen months ago about "always use microservices for new projects" was written in a specific context — perhaps a team of twelve working on a high-traffic application. Before applying that principle to a team of three building a prototype, ask what context produced the original note and whether that context still applies.
When using AI outputs, treat every claim as decontextualized by default. An LLM's response has been separated from every context that might validate or invalidate it. Verify claims independently. Ask the model to provide its reasoning chain. Cross-reference with primary sources. The convenience of AI-generated text is real; the context loss is also real.
When sharing information, include context explicitly. When you forward a statistic, attach its source and scope. When you quote someone, include the surrounding sentences. When you share a decision, document the alternatives you considered. You are not just being thorough — you are preserving meaning.
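The capture step above can be encoded as a schema so that context fields are impossible to omit. This is a minimal sketch; the class name, field names, and example values are illustrative, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ContextualNote:
    claim: str       # the "what": the claim, finding, or decision
    source: str      # who produced it
    captured: date   # when it was recorded
    purpose: str     # why it was recorded
    conditions: str  # one sentence on the conditions of production

# Illustrative example, echoing the meditation finding discussed earlier.
note = ContextualNote(
    claim="Meditation reduced anxiety in the trial.",
    source="Hypothetical journal article",
    captured=date(2024, 5, 1),
    purpose="Background reading on stress interventions",
    conditions=("College students during exams; 20 minutes of guided "
                "breathing; 8-week duration."),
)
```

Eighteen months later, the conditions field is what tells you whether the note still applies.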
The context-check protocol
Before acting on any piece of information, run this check:
- Source: Who produced this? What were their incentives?
- Scope: What population, time period, or conditions does this apply to?
- Contrast: What was this being compared to? What is the baseline?
- Sequence: What came before and after this in the original source?
- Decay: How many times has this been re-shared or re-summarized since its origin?
If you can answer fewer than three of these questions, the information is too decontextualized to act on. You do not have information — you have a fragment that might mean anything.
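The check lends itself to a mechanical form. The sketch below treats each of the five questions as answerable or not and applies the three-answer threshold; the dictionary keys and the example fragment are illustrative.

```python
CHECKS = ("source", "scope", "contrast", "sequence", "decay")

def too_decontextualized(answers: dict) -> bool:
    """True if fewer than three of the five questions have an answer."""
    answered = sum(1 for check in CHECKS if answers.get(check))
    return answered < 3

# A forwarded statistic where only the source and scope are known:
fragment = {
    "source": "Vendor whitepaper, 2023",
    "scope": "US SaaS companies",
    "contrast": None,   # no baseline given
    "sequence": None,   # original surrounding text unknown
    "decay": None,      # number of re-shares unknown
}
print(too_decontextualized(fragment))  # True: two answers is not enough
```

Answering a question with "unknown" does not count; the threshold is about what you can actually reconstruct, not what you can name.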
Why this matters for everything that follows
In L-0176, you learned to identify which context is primary when multiple contexts are active. This lesson addresses the upstream failure: information arrives already stripped of its context, and you must recognize that condition before you can do anything about it.
The next lesson — L-0178, Reconstruct context before making judgments — gives you the operational protocol for what to do once you have identified context loss. But that protocol is useless if you do not first develop the perceptual skill this lesson teaches: seeing decontextualized information as decontextualized, rather than as complete.
Every quote, statistic, finding, metric, principle, and AI-generated claim you encounter has been separated from the conditions of its production. That separation is not always malicious. It is usually structural — a consequence of how information moves through media, organizations, and minds. But structural or not, the result is the same: loss of context is loss of meaning. The information may still be true. But you cannot know that until you restore what was removed.
References:
- Bickel, P. J., Hammel, E. A., & O'Connell, J. W. (1975). Sex bias in graduate admissions: Data from Berkeley. Science, 187(4175), 398-404.
- Huang, L., et al. (2023). A survey on hallucination in large language models. arXiv preprint arXiv:2311.05232.
- Huff, D. (1954). How to Lie with Statistics. W. W. Norton.
- McGlone, M. S. (2005). Contextomy: The art of quoting out of context. Media, Culture & Society, 27(4), 511-522.
- Nonaka, I., & Takeuchi, H. (1995). The Knowledge-Creating Company. Oxford University Press.
- Ong, W. J. (1982). Orality and Literacy: The Technologizing of the Word. Methuen.
- Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.