The question that separates experts from everyone else
In 1988, the USS Vincennes shot down Iran Air Flight 655, killing all 290 people aboard. The Aegis combat system — the most sophisticated radar platform in the world at the time — had correctly tracked the aircraft. The data was on the screen. The crew had the information. What they did not have was the right context.
The ship was engaged in a surface battle with Iranian gunboats in the Strait of Hormuz. The crew was operating under combat conditions, with adrenaline high and attention narrowed. When a radar blip appeared climbing out of Bandar Abbas airport — which also served as a military airfield — the tactical team interpreted the contact through their current context: we are in a fight, and this is a threat. They reported the aircraft as descending and accelerating toward the ship, consistent with an attack profile. The aircraft was actually climbing and on a routine commercial flight path. The raw data said one thing. The context the crew had loaded said another. The context won (Fogarty, 1988).
This is not a story about incompetent people. The crew of the Vincennes included trained operators running the most advanced combat system ever built. It is a story about what happens when you do not explicitly ask "what context am I in?" before interpreting information. The data does not speak for itself. Context speaks for the data. And if you let context operate implicitly — if you never surface it, never name it, never question it — then whatever context your brain loaded most recently will do all the interpreting for you, and you will never notice it happening.
L-0161 established the principle: context determines meaning. This lesson gives you the operational practice. You do not just need to know that context shapes interpretation. You need a systematic habit of identifying your current context before you interpret anything important.
Situational awareness: the science of knowing where you are
The formal study of this problem began in aviation. In 1995, Mica Endsley published the foundational model of situational awareness (SA) that remains the standard framework across military, aviation, medical, and industrial domains. She defined SA as "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future" (Endsley, 1995).
Endsley's model has three levels, and they map precisely to the cognitive operations involved in identifying context.
Level 1 — Perception. You notice the elements present in your current situation. What is physically around you? What information just arrived? Who is in the room? What signals are present? This is raw intake — not interpretation, just detection. Most people assume they do this automatically. They do not. Studies of aviation accidents found that 76% of situational awareness errors occurred at Level 1 — the operator failed to perceive information that was available and relevant (Jones & Endsley, 1996). The data was there. They did not see it. Not because they were blind, but because their current context told their perceptual system what was worth noticing, and the relevant signals were not on the list.
Level 2 — Comprehension. You synthesize the perceived elements into a coherent understanding of what the situation means. This is where context does its heaviest work. The same set of perceived elements — a crowded room, raised voices, rapid speech — means something completely different in a surprise birthday party than in a workplace confrontation. Comprehension is not an objective readout of reality. It is pattern matching against the context you have loaded. If you loaded the wrong context, your comprehension will be internally consistent, subjectively confident, and objectively wrong.
Level 3 — Projection. You extrapolate forward to anticipate what will happen next. This is where contextual errors compound. If your comprehension is based on the wrong context, your projections will be systematically off, and you will be surprised by outcomes that were perfectly predictable under the correct context. The Vincennes crew projected an attack because their comprehension — threat aircraft descending — was generated by the wrong context. Under the correct context — commercial aircraft climbing from a civilian airport — the projection would have been entirely different.
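The three levels can be read as a pipeline, where each stage consumes the output of the one before it and an error at any stage propagates forward. A minimal sketch, with invented cues and context labels (the situation, attention filters, and rules below are illustrative, not from Endsley's paper):

```python
# Illustrative sketch: Endsley's three SA levels as a pipeline.
# The cues, contexts, and attention filters are invented for illustration.

def perceive(environment, attention_filter):
    """Level 1: detect only the cues the current context marks as relevant."""
    return [cue for cue in environment if cue in attention_filter]

def comprehend(cues, context):
    """Level 2: pattern-match the perceived cues against the loaded context."""
    return {"context": context, "meaning": f"{context}: {', '.join(cues)}"}

def project(understanding):
    """Level 3: extrapolate forward from the comprehension, right or wrong."""
    return f"expect events consistent with '{understanding['context']}'"

# The same environment, filtered through two different contexts:
environment = ["raised voices", "crowded room", "rapid speech", "balloons"]

for context, attention in [
    ("birthday party", ["balloons", "crowded room"]),
    ("workplace confrontation", ["raised voices", "rapid speech"]),
]:
    cues = perceive(environment, attention)
    understanding = comprehend(cues, context)
    print(project(understanding))
```

Note that the loaded context enters at Level 1, before any interpretation happens: it decides which cues are even perceived, which is exactly why Level 1 errors dominate the accident data.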
The critical insight from Endsley's model is that situational awareness is not passive. It does not happen to you. It is a cognitive activity you must perform, and the first step of that activity is asking: what context am I currently operating in?
The cost of context blindness
You do not need to be in a combat zone for context misreads to cause damage. The same mechanism operates everywhere, just with lower stakes — usually.
In conversations. Your partner says "we need to talk." If you load the context of the last argument you had, you hear a threat. If you load the context of the vacation you are planning together, you hear logistics. If you load the context of a relationship that is going well, you hear a neutral scheduling request. The words are identical. Your interpretation is entirely determined by which context you loaded — and you loaded it automatically, without choosing it, based on whatever emotional residue was sitting in your working memory from the last interaction.
In organizations. A manager reads quarterly numbers that show a 12% revenue decline. If she loads the context of the broader market, which declined 18%, she sees outperformance. If she loads the context of the annual plan, which projected 5% growth, she sees a catastrophic miss. If she loads the context of the previous quarter, which showed flat revenue, she sees accelerating deterioration. The number — 12% decline — is a fact. What it means depends entirely on which context she uses to interpret it. Most managers do not choose their context. They use whichever one was activated most recently — by the last email they read, the last meeting they attended, the last person who spoke to them with urgency.
In learning. A student reads a research paper and judges it as groundbreaking or unremarkable based on the context of what they already know. If they just read an introductory textbook, the paper seems sophisticated. If they just read the cutting edge of the field, the same paper seems derivative. The paper did not change. The student's loaded context — the reference frame against which they are evaluating — changed everything. Without asking "what context am I using to evaluate this?" the student cannot distinguish between their assessment of the paper and their assessment of the gap between the paper and their most recently loaded reference point.
Ellen Langer, the Harvard psychologist who pioneered the Western scientific study of mindfulness, identified this pattern as "premature cognitive commitment" — the tendency to lock in an interpretation based on the context of first exposure and then never revisit it, even when the context changes. In her experiments, Langer demonstrated that information initially perceived as irrelevant — because the context of initial exposure marked it as unimportant — remained unused even when later contexts made it directly relevant. Subjects were "victims of their premature cognitive commitments," trapped in a context they never chose and never questioned (Langer & Piper, 1987).
The antidote, Langer argued, is not more information. It is "a flexible state of mind in which we are actively engaged in the present, noticing new things and sensitive to context" (Langer, 1989). That flexibility is exactly what the question "what context am I in?" is designed to produce. It breaks the automaticity. It forces a re-evaluation. It interrupts the premature cognitive commitment before it hardens into a fixed interpretation.
How experts read context: recognition-primed decisions
If asking about context is so important, why do experts sometimes seem to skip it? Watch an experienced emergency room doctor assess a patient, a veteran firefighter read a burning building, or a seasoned trader scan market data — they appear to act instantly, without pausing to identify context.
They are not skipping the question. They are answering it so fast you cannot see the process.
Gary Klein's research on naturalistic decision-making revealed that experts in high-stakes domains do not make decisions by comparing options. They make decisions by recognizing situations. Klein's Recognition-Primed Decision (RPD) model, developed from studying firefighters, military commanders, nurses, and other domain experts, showed that experienced decision-makers focus first on reading the situation — identifying what type of situation they are in — before generating any course of action (Klein, 1998).
This "reading" is exactly context identification, compressed by years of practice into a rapid pattern-matching operation. The expert firefighter walks into a structure fire and, within seconds, categorizes the situation: residential structure, ventilation-limited fire, possible basement origin, occupants likely upstairs. That categorization is not a description of what they see. It is a context identification — a frame that determines what every subsequent piece of information means. Smoke behavior that is normal in a ventilation-limited fire is alarming in a fuel-limited fire. The same sensory data means different things in different contexts. The expert knows this because they have encountered enough situations to have built a library of contexts.
What Klein's research also showed is the failure pattern: experts make their worst mistakes when they load the wrong context and do not notice. They recognize a pattern, load the corresponding context, and proceed with confidence — but the pattern was misleading, and they are now operating in the wrong frame. The experienced doctor who treats a heart attack presentation that is actually a panic attack. The veteran pilot who responds to an engine indication as a standard procedure when the actual situation is a novel failure mode. The senior manager who treats a cultural problem as a process problem because it looks like one from the outside.
The difference between an expert who catches these errors and one who does not is whether they continue to ask "is this really the context I think it is?" after the initial recognition. The best experts treat their context identification as a hypothesis, not a conclusion. They keep scanning for disconfirming evidence — signals that the context they loaded does not match the situation they are actually in.
The OODA loop: context as the center of combat decision-making
The most compressed version of this practice comes from Colonel John Boyd, the U.S. Air Force fighter pilot who developed the OODA loop — Observe, Orient, Decide, Act — as a model of combat decision-making. Boyd developed the framework from his experience in the Korean War, where American pilots flying technically inferior F-86 Sabres consistently defeated Soviet MiG-15s. The advantage was not the aircraft. It was the speed at which American pilots could cycle through the decision loop (Boyd, 1987).
The critical insight that most people miss about the OODA loop is that the most important step is not "Observe" or "Decide." It is "Orient." Orientation is Boyd's term for context identification — the process of understanding what situation you are in based on your cultural traditions, previous experiences, genetic heritage, current observations, and analytic capabilities. Boyd argued that orientation shapes everything that follows: what you observe (because your orientation determines what you look for), how you decide (because your orientation determines what options you consider), and how you act (because your orientation determines what actions feel available).
Boyd called orientation "the Schwerpunkt" — the center of gravity of the entire loop. Get orientation wrong and the rest of the loop amplifies the error. Get it right and the rest of the loop becomes almost automatic. This is why Boyd emphasized that the key to competitive advantage is not acting faster. It is re-orienting faster — updating your context identification faster than your opponent updates theirs.
For your purposes, the OODA loop translates directly to epistemic practice. Before you interpret (Orient), you must observe what is actually present (Observe). Before you commit to a response (Decide), your orientation must match the actual situation. And before you act, your decision must be grounded in accurate context. The pilot who re-orients fastest wins the dogfight. The knowledge worker who re-contextualizes fastest makes the best decisions.
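Boyd's cycle can be sketched as a loop in which orientation is an explicit, revisable step rather than an implicit one. Everything below — the function names, the state shapes, the re-orientation trigger — is an illustrative assumption, not Boyd's formalism:

```python
# Illustrative OODA sketch. The trigger condition and data shapes are
# assumptions for demonstration, not part of Boyd's original notes.

def observe(situation):
    """Gather raw signals from the environment."""
    return situation["signals"]

def orient(signals, current_frame):
    """The Schwerpunkt: match signals against the loaded context frame,
    and re-orient when a signal contradicts it."""
    if any(s in current_frame["disconfirming"] for s in signals):
        return current_frame["fallback"]   # update the frame, not the data
    return current_frame["name"]

def decide(frame):
    """Options considered depend entirely on the orientation."""
    playbook = {"attack profile": "engage", "commercial traffic": "stand down"}
    return playbook[frame]

# One pass through the loop, using the Vincennes example from the text:
frame = {
    "name": "attack profile",
    "disconfirming": {"climbing", "civilian transponder"},
    "fallback": "commercial traffic",
}
situation = {"signals": ["climbing", "from Bandar Abbas"]}

action = decide(orient(observe(situation), frame))
print(action)  # the disconfirming cue forces re-orientation: stand down
```

The design point is that re-orientation happens inside the loop: the frame is checked against the signals on every cycle, not loaded once and trusted forever.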
Sensemaking: constructing context from ambiguity
But what do you do when the context is not obvious? When you walk into a situation and genuinely do not know what is going on?
Karl Weick's sensemaking framework addresses exactly this problem. Weick studied how people construct understanding from ambiguous, confusing, or novel situations — the moments when context has not been given to you and you must build it yourself. His research, spanning decades and including landmark analyses of disasters like the Mann Gulch wildfire and the Tenerife airport collision, identified seven properties of how humans make sense of ambiguous situations (Weick, 1995).
Three of Weick's properties are directly relevant to the practice of context identification.
Enactment. You do not passively receive context. You partially create it through your actions. When you walk into an ambiguous meeting and immediately start presenting slides, you have enacted a "presentation" context that shapes how everyone else behaves. When you start asking questions instead, you enact a "discovery" context. Your behavior does not just respond to context — it generates context. This means the question "what context am I in?" must include "what context am I creating?"
Extracted cues. In complex situations, you cannot process everything. You extract a small number of cues and use them to construct a plausible picture of the whole. The cues you extract are determined by what you are looking for, which is determined by your current context. This creates a self-reinforcing loop: your context determines which cues you notice, and the cues you notice confirm your context. Breaking this loop requires deliberately looking for cues that would disconfirm your current context reading — actively searching for evidence that you are in a different situation than you think.
Plausibility over accuracy. Sensemaking does not aim for truth. It aims for a story that is good enough to act on. This is both its power and its danger. The power: you can orient and act even in ambiguous situations without perfect information. The danger: a plausible but wrong context feels just as certain as an accurate one. Your brain does not tag its sensemaking outputs with confidence levels. A story that fits feels true, regardless of whether it is.
Weick's framework reveals why context identification must be an active, ongoing, self-critical practice. You are not reading context from the environment like a thermometer reads temperature. You are constructing context through a process that is shaped by your identity, your recent experiences, and the cues you happen to extract — and all of these can be wrong without triggering any subjective alarm.
AI and the Third Brain: context as the critical input
Artificial intelligence systems provide the clearest possible demonstration of why explicit context identification matters — because AI systems cannot function without it, and they cannot generate it on their own.
Every large language model operates within a context window — a finite space of text that constitutes everything the system knows about the current situation. Outside that window, the model has no access to information, no memory of previous interactions, and no understanding of what situation it is in. If you ask an AI for medical advice without specifying that you are a 35-year-old athlete asking about knee pain from running, the system has no way to load the relevant context. It will generate a response based on whatever default context its training data suggests — which may be entirely wrong for your situation.
Research on context engineering — the discipline of designing what goes into an AI's context window — has demonstrated that model performance degrades dramatically when context is insufficient or mismatched. Researchers at Stanford and UC Berkeley found that even models with long context windows show significant accuracy drops when relevant information is buried among irrelevant material in the middle of the window, a phenomenon called the "lost in the middle" problem (Liu et al., 2024). The AI has the information. It cannot find it because the context is not structured to make it salient — the same failure pattern as the Vincennes crew.
This creates a direct parallel for your own cognition. Your brain, like an AI model, operates within a context window — the set of activated information, assumptions, goals, and frames that you are currently using to interpret incoming data. Like an AI, you cannot process everything at once. Like an AI, the context you have loaded determines the quality of your outputs. And like an AI, you need explicit context loading — a deliberate act of identifying and activating the relevant frame — to produce accurate interpretations.
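The finite-window parallel can be made concrete with a minimal sketch (the window size and the 'facts' are invented for demonstration): once the window fills, the oldest material is simply gone, and anything outside it cannot influence interpretation.

```python
from collections import deque

# Illustrative sketch of a finite context window as a FIFO buffer.
# The window size and the facts are invented for demonstration.
WINDOW_SIZE = 3
context_window = deque(maxlen=WINDOW_SIZE)

for fact in [
    "patient is a 35-year-old athlete",
    "pain is in the left knee",
    "pain started after a long run",
    "no history of injury",  # pushes the first fact out of the window
]:
    context_window.append(fact)

# The system 'knows' only what is inside the window right now:
print(list(context_window))

# The age/athlete detail has been silently evicted; any interpretation
# made now proceeds without it, with no alarm raised.
assert "patient is a 35-year-old athlete" not in context_window
```

The human analogue is the "emotional residue" example earlier in the lesson: whatever happens to still be in the buffer does the interpreting, and eviction is silent.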
The difference is that AI systems cannot ask "what context am I in?" on their own. They depend on humans to provide context through prompts, system instructions, and retrieved documents. You have the capacity to ask this question for yourself. You can step back from your current interpretation, examine the context you have loaded, and ask whether it matches the actual situation. This capacity — metacognitive context identification — is one of the most important cognitive advantages humans have over current AI systems. But it is only an advantage if you actually use it.
The practical partnership works like this: use AI as a context-checking tool. When you have identified your context and formed an interpretation, describe the situation to an AI system and ask: "What context might I be missing? What alternative frames could apply here?" The AI can generate alternative contexts drawn from its training data — contexts that your experience and emotional state might prevent you from seeing. You provide the situational knowledge. The AI provides the contextual breadth. Together, you cover more of the possibility space than either could alone.
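The partnership described above can be operationalized as a reusable prompt template. The function below only builds the text of such a prompt — no model is called, and the wording is one possible phrasing, not a canonical one:

```python
# Builds a context-checking prompt of the kind described above.
# The phrasing is illustrative; adapt it to your own situation.

def context_check_prompt(situation: str, my_context: str, my_interpretation: str) -> str:
    """Assemble a prompt asking an AI to generate alternative context frames."""
    return (
        f"Situation: {situation}\n"
        f"The context I believe I am in: {my_context}\n"
        f"My current interpretation: {my_interpretation}\n\n"
        "What context might I be missing? "
        "List alternative frames that could apply here, and for each one, "
        "explain what the same facts would mean under that frame."
    )

# Example, using the quarterly-numbers scenario from earlier in the lesson:
prompt = context_check_prompt(
    situation="Quarterly revenue declined 12%; the broader market declined 18%.",
    my_context="the annual plan, which projected 5% growth",
    my_interpretation="a catastrophic miss",
)
print(prompt)
```

The division of labor matches the text: you supply the situational facts and the frame you have already loaded; the model's job is only to widen the set of candidate frames, not to pick one for you.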
The Context Identification Protocol
Here is the systematic practice for building context identification into your cognitive infrastructure.
Step 1: Detect the transition. Context identification is most critical at transitions — when you shift from one activity to another, receive new information, enter a new environment, or begin interacting with a different person. Train yourself to notice transitions. The moment you feel the shift — walking into a room, opening an email, hearing unexpected news — that is your trigger.
Step 2: Pause before interpreting. This is the hardest step because your brain generates interpretations instantly and automatically. The pause does not need to be long. Two seconds is enough. The purpose is not to empty your mind. It is to create a gap between stimulus and interpretation — a gap where you can ask the question.
Step 3: Run the five-question scan. (1) What environment am I in? (2) What role am I occupying? (3) What just happened that might be coloring my perception? (4) What are the goals — mine and the other parties'? (5) What assumptions am I importing from a previous context?
Step 4: Name the context explicitly. Do not leave it vague. State it: "I am in a brainstorming context, not a decision-making context." "I am in a learning context, not an evaluation context." "I am in a support context, not a problem-solving context." Naming the context makes it available for examination. Unnamed contexts operate invisibly.
Step 5: Scan for disconfirming cues. Once you have named your context, spend five seconds looking for evidence that you are wrong. What signals, if present, would indicate you are in a different context than you think? If you find any, update. If you find none, proceed — but keep scanning.
Step 6: Re-scan at intervals. Context shifts. The meeting that started as a brainstorm is now a decision. The conversation that started as casual is now serious. Set a mental timer — every fifteen minutes in a long interaction, re-run the scan. Ask again: is this still the context I think it is?
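The six steps above can be sketched as a checklist you could actually run, on paper or in code. The structure and field names below are an illustrative restatement of the steps, not a canonical implementation:

```python
# The Context Identification Protocol as a runnable checklist.
# Structure and field names are illustrative assumptions.
from dataclasses import dataclass, field

# Step 3: the five-question scan.
SCAN_QUESTIONS = [
    "What environment am I in?",
    "What role am I occupying?",
    "What just happened that might be coloring my perception?",
    "What are the goals (mine and the other parties')?",
    "What assumptions am I importing from a previous context?",
]

@dataclass
class ContextCheck:
    named_context: str                              # Step 4: name it explicitly
    answers: dict = field(default_factory=dict)
    disconfirming_cues: list = field(default_factory=list)

    def run_scan(self, answer_fn):
        """Steps 2-3: pause, then answer each scan question."""
        self.answers = {q: answer_fn(q) for q in SCAN_QUESTIONS}

    def still_valid(self):
        """Step 5: any disconfirming cue means the context must be updated."""
        return not self.disconfirming_cues

# Example run (Step 1, detecting the transition, happens before this;
# the answers here are placeholders):
check = ContextCheck(named_context="brainstorming, not decision-making")
check.run_scan(lambda q: "noted")
check.disconfirming_cues.append("someone just asked for a final vote")
if not check.still_valid():
    print("Re-run the scan: the context has shifted.")  # Step 6
```

Treating the named context as a field to be checked, rather than a conclusion, mirrors the point from the Klein section: the best experts hold their context identification as a hypothesis.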
Why this matters for everything that follows
Context identification is not a nice-to-have. It is the prerequisite for every cognitive operation in this curriculum. Classification requires context — the same object belongs to different categories depending on the frame. Decision-making requires context — the same options have different values depending on the situation. Communication requires context — the same words carry different meaning depending on the relationship and setting. Schema validation requires context — the same schema is valid in one domain and invalid in another.
Every lesson you have completed so far has implicitly assumed that you know what context you are in. This lesson makes that assumption explicit and gives you a systematic practice for verifying it.
L-0163 takes this further. Once you can identify the context you are in, you face the next challenge: what happens when you need to leave one context and enter another? Context switching is not automatic. It requires deliberately unloading the current context and loading the new one. Fail to do this, and you carry the old context into the new situation — interpreting your family dinner through your work context, reading a creative brief through an analytical context, approaching a collaborative conversation through a competitive context. The context question does not end with identification. It continues through every transition in your day.
Start asking now. Before every interpretation, before every judgment, before every response: what context am I in?
Sources:
- Endsley, M. R. (1995). "Toward a Theory of Situation Awareness in Dynamic Systems." Human Factors: The Journal of the Human Factors and Ergonomics Society, 37(1), 32-64.
- Jones, D. G., & Endsley, M. R. (1996). "Sources of Situation Awareness Errors in Aviation." Aviation, Space, and Environmental Medicine, 67(6), 507-512.
- Klein, G. (1998). Sources of Power: How People Make Decisions. Cambridge, MA: MIT Press.
- Weick, K. E. (1995). Sensemaking in Organizations. Thousand Oaks, CA: Sage Publications.
- Langer, E. J. (1989). Mindfulness. Reading, MA: Addison-Wesley.
- Langer, E. J., & Piper, A. I. (1987). "The Prevention of Mindlessness." Journal of Personality and Social Psychology, 53(2), 280-287.
- Boyd, J. R. (1987). "A Discourse on Winning and Losing." Unpublished lecture notes. Air University Library, Maxwell Air Force Base, Alabama.
- Fogarty, W. M. (1988). "Formal Investigation into the Circumstances Surrounding the Downing of Iran Air Flight 655 on 3 July 1988." U.S. Department of Defense Investigation Report.
- Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., & Liang, P. (2024). "Lost in the Middle: How Language Models Use Long Contexts." Transactions of the Association for Computational Linguistics, 12, 157-173.