You are narrating right now and you don't know it
Something happened to you today. Maybe a meeting ran long. Maybe someone gave you feedback. Maybe you checked your phone and a message wasn't there. Whatever it was, you didn't just experience it. You storied it. You took raw sensory data — words, tones, timestamps, facial expressions — and in less than a second, you constructed a narrative about what it meant, why it happened, and what it says about you or the other person.
This is not a flaw. Jerome Bruner, the cognitive psychologist who helped found narrative psychology, argued in his 1991 paper "The Narrative Construction of Reality" that humans operate in two fundamentally different modes of thought: the paradigmatic mode, which deals in logic, categories, and verifiable propositions, and the narrative mode, which deals in intentions, actions, and meaning. Both are legitimate. But only one of them tells you what actually happened. The other tells you a story about what happened — and you experience that story as reality.
Bruner's insight was not that stories are bad. It was that humans cannot help constructing them. You are a compulsive narrator. The moment raw data enters your perception, your mind is already selecting, sequencing, and interpreting it into a coherent account. As Sartre put it: "A man is always a teller of stories; he lives surrounded by his own stories and those of other people; he sees everything that happens to him in terms of these stories."
The problem is not that you tell stories. The problem is that you cannot tell the difference between the story and the data it was built from. When those two blur together, you lose the ability to question your own conclusions — because you no longer realize you drew them.
The ladder you're climbing without knowing it
In the 1970s, Harvard professor Chris Argyris introduced a model called the Ladder of Inference to describe the invisible cognitive sequence that takes you from raw observation to confident action. The model has seven rungs:
- Observable data — everything available to your senses in a given moment.
- Selected data — the subset you actually notice, filtered by your prior experiences and expectations.
- Interpreted data — the meaning you assign to what you selected.
- Assumptions — the unstated premises you layer onto your interpretation.
- Conclusions — the judgments you derive from your assumptions.
- Beliefs — the generalized convictions that form from repeated conclusions.
- Actions — what you do based on those beliefs.
The entire climb happens in less than a second. You witness a coworker checking their phone during your presentation (rung 1). You select that datum over the nineteen other people who are paying attention (rung 2). You interpret the phone-checking as disengagement (rung 3). You assume they find your work boring (rung 4). You conclude they don't respect you (rung 5). You believe this team doesn't value your contributions (rung 6). You stop volunteering for high-visibility projects (rung 7).
At no point in this sequence did you consciously decide to climb. Each rung felt like a natural extension of the one below it. That's what makes it dangerous: by the time you reach your conclusion, it feels like a fact. But the only actual fact was rung 1 — someone looked at their phone. Everything above it was story.
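To make the climb concrete, here is a toy sketch in Python. The rung contents are the illustrative strings from the presentation example above; this is a mnemonic for the structure, not a model of cognition.

```python
# A toy sketch of the Ladder of Inference (illustrative strings only):
# each rung is derived from the one below, and only rung 1 is fact.
LADDER = [
    "observable data", "selected data", "interpreted data",
    "assumptions", "conclusions", "beliefs", "actions",
]

# The presentation example from above, one entry per rung.
CLIMB = [
    "A coworker looked at their phone during the presentation",  # fact
    "I noticed the phone, not the nineteen attentive people",
    "Phone-checking means disengagement",
    "They find my work boring",
    "They don't respect me",
    "This team doesn't value my contributions",
    "I stop volunteering for high-visibility projects",
]

for rung, content in zip(LADDER, CLIMB):
    marker = "FACT " if rung == "observable data" else "story"
    print(f"[{marker}] {rung}: {content}")
```

Printed this way, the asymmetry is hard to miss: one line of fact supports six lines of story.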
The skill this lesson teaches — the fact-story distinction — is the ability to see which rung you're standing on and climb back down to the data.
CBT figured this out in the 1950s
Cognitive Behavioral Therapy formalized the fact-story split decades before it became a popular self-help concept. Albert Ellis, whose 1957 paper on rational psychotherapy laid the groundwork for what became Rational Emotive Behavior Therapy, built the entire therapeutic framework around what he called the ABC model:
- A (Activating event): The objective, observable situation — what a video camera would record.
- B (Belief): Your interpretation of the event — the story you construct.
- C (Consequence): Your emotional and behavioral response — which follows from B, not from A.
The revolutionary insight was that C does not follow from A. Your anger after receiving critical feedback is not caused by the feedback itself. It is caused by your belief about the feedback — that it means you're incompetent, that your boss has lost confidence in you, that your career is stalling. Change the belief and the emotional consequence changes, even though the activating event stays identical.
Aaron Beck, working independently in the 1960s, arrived at the same architecture. He observed that his depressed patients were not responding to objectively terrible circumstances — many had ordinary lives. They were responding to what he called automatic thoughts: rapid, unexamined interpretations that they experienced as facts. "Nobody likes me" was not a conclusion his patients had carefully reasoned their way toward. It was a story that fired automatically, below the threshold of awareness, and was experienced as observable reality.
The clinical finding that changed everything: when patients learned to identify the boundary between A and B — between what happened and what they told themselves about what happened — their symptoms improved. Not because the world changed. Because they could finally see where observation ended and narration began.
Four questions that expose the story
Byron Katie's method, called "The Work," offers one of the simplest frameworks for separating fact from story in real time. When you notice a stressful thought — "My manager doesn't trust me," "This project is going to fail," "They're going to think I'm incompetent" — you run it through four questions:
- Is it true?
- Can you absolutely know that it's true?
- How do you react — what happens — when you believe that thought?
- Who would you be without the thought?
The first two questions force you back down the ladder. "My manager doesn't trust me" — is that a fact? Can you verify it the way you verify a timestamp or a bank balance? In most cases, the answer reveals that the thought is not an observation of the world but a narration about it. Questions three and four then show you the consequences of treating narrative as fact: the stress, the avoidance, the preemptive defensiveness — all generated not by what happened but by the story you layered on top.
Katie's central insight maps directly onto the Ladder of Inference: most human suffering is generated not at rung 1 but at rungs 3 through 6, where interpretation, assumption, and belief operate below conscious awareness. The four questions are a structured way to make those rungs visible.
Where engineering gets this right — and wrong
Software engineering has one of the clearest institutional examples of the fact-story distinction in practice: the blameless postmortem.
When an incident occurs — a production outage, a data loss event, a security breach — the postmortem process, as codified by Google's Site Reliability Engineering team, demands a strict separation between facts and stories. The timeline section records only observable data: "At 14:23 UTC, deploy SHA abc123 was pushed to production. At 14:27 UTC, error rate exceeded 5% threshold. At 14:31 UTC, on-call engineer received page." No motives. No judgments. No "the team was careless" or "we should have known better."
This discipline exists because Google discovered that when stories infiltrate incident reports, two things happen. First, people stop telling the truth — because blame makes honesty dangerous. Second, the actual systemic cause gets buried under a narrative about individual failure. "The engineer made a mistake" is a story. "The deployment pipeline lacked a canary stage that would have caught the regression before full rollout" is a finding that leads to a systemic fix.
The blameless postmortem is not blameless because Google is generous. It is blameless because blame is a story, and stories prevent you from seeing the facts that would actually prevent recurrence. As Google's SRE book puts it: when postmortems shift from allocating blame to investigating the systematic reasons why individuals had incomplete or incorrect information, effective prevention plans can be put in place.
This principle extends far beyond engineering. Any time you review a failure — a missed deadline, a lost client, a difficult conversation — you face the same choice: record the facts and examine the system, or tell a story about whose fault it was. The story feels satisfying. The facts produce change.
Using AI to separate data from narrative
One of the most practical applications of the fact-story distinction is using AI to make the boundary visible in your own writing, communication, and thinking.
Large language models are remarkably good at claim extraction — identifying which statements in a passage are verifiable assertions and which are interpretations, opinions, or assumptions. Recent research on atomic claim extraction from text has formalized this capability, evaluating extracted claims on dimensions like atomicity (is this a single verifiable claim?), faithfulness (does this match the source?), and decontextualization (can this claim stand alone?).
You can apply this right now. Take any email, meeting summary, performance review, or journal entry you've written and ask an AI assistant: "Separate the factual claims in this text from the interpretations, assumptions, and judgments." The results are often startling. A paragraph you thought was purely descriptive turns out to be 20% observation and 80% narrative. A "factual" incident summary contains embedded assumptions about people's motives on nearly every line.
This works because the fact-story distinction is structural, not subjective. "Revenue declined 12% quarter over quarter" is verifiable. "The sales team lost focus" is a story. "Three customer meetings were canceled in the last week" is data. "Clients are losing confidence in us" is an inference. An AI can flag the boundary because the boundary is about the form of the statement, not just its content.
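Because the boundary is structural, even a crude heuristic can flag it. The sketch below — its marker lists are illustrative and nowhere near exhaustive; a language model handles this far more robustly — labels the example statements by surface form alone:

```python
import re

# A crude heuristic sketch, not a real classifier: fact-shaped statements
# tend to carry numbers, timestamps, or quoted speech; story-shaped ones
# carry motive and evaluation words. Marker lists are illustrative only.
STORY_MARKERS = re.compile(
    r"\b(losing|lost focus|confidence|thinks?|feels?|probably|clearly|"
    r"obviously|careless|lazy|disrespect)\b", re.IGNORECASE)
FACT_MARKERS = re.compile(r'\d|%|"|\b(said|UTC)\b', re.IGNORECASE)

def classify(statement: str) -> str:
    """Label a statement as 'story', 'fact', or 'unclear' by surface form."""
    if STORY_MARKERS.search(statement):
        return "story"
    if FACT_MARKERS.search(statement):
        return "fact"
    return "unclear"

print(classify("Revenue declined 12% quarter over quarter"))  # fact
print(classify("The sales team lost focus"))                  # story
print(classify("Clients are losing confidence in us"))        # story
```

The point is not that regexes solve the problem; it is that the fact-story boundary leaves a visible trace in the form of a sentence, which is exactly why an AI can find it.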
The practice also works in reverse: before sending a difficult email or presenting a challenging conclusion, run your draft through the fact-story filter. Ask: "Which of these statements would survive if I had to prove them with timestamps, screenshots, or measurements?" Strip the narrative. Lead with the data. Then — and only then — offer your interpretation, clearly labeled as interpretation. The difference in how people receive your communication will be immediate.
Protocol: the two-column practice
This is not a concept to agree with. It is a skill to practice. Here is the protocol:
- Identify a charged situation. Pick something from the last 48 hours that generated frustration, anxiety, or judgment. The emotional charge is your signal that narrative is active.
- Draw two columns. Left column: "What a camera would record." Right column: "The story I told."
- Fill the left column first. Only observable data: words actually spoken (in quotes), actions physically taken, timestamps, measurable outcomes. If you can't verify it with a recording device, it does not belong in this column.
- Fill the right column. Every interpretation, assigned motive, assumption about internal states, prediction, and conclusion. Be honest. Most of the content from your original memory of the event will land here.
- Notice the ratio. In a typical exercise, the story column is three to five times longer than the fact column. That ratio is the measure of how much narrative you are generating per unit of observation.
- Optional — run it through AI. Paste your original account of the situation into a language model and ask it to separate factual claims from interpretations. Compare its output with your two columns.
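For those who prefer a worksheet they can run, the two-column practice can be sketched in a few lines. This is a rendering aid only; the entries (and the honesty) are yours to supply.

```python
# A minimal sketch of the two-column worksheet: renders the table and
# computes the narrative ratio from whatever entries you pass in.
def two_column(camera: list[str], story: list[str]) -> str:
    header = "What a camera would record"
    width = max(len(s) for s in camera + [header])
    lines = [f"{header:<{width}} | The story I told"]
    for i in range(max(len(camera), len(story))):
        left = camera[i] if i < len(camera) else ""
        right = story[i] if i < len(story) else ""
        lines.append(f"{left:<{width}} | {right}")
    ratio = len(story) / max(len(camera), 1)
    lines.append(f"Narrative ratio: {ratio:.1f} story lines per fact")
    return "\n".join(lines)

print(two_column(
    camera=['At 15:02 she said: "Let\'s revisit this next week."'],
    story=["She's stalling",
           "She doesn't believe in the project",
           "I'm going to lose this initiative"],
))
```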
The goal is not to eliminate stories. Stories are how humans make meaning. The goal is to see the seam — the exact moment where observation ends and narration begins. Once you can see that seam reliably, you can choose which stories to keep, which to question, and which to discard.
This is the foundation for what comes next. In L-0092: Multiple perspectives reveal blind spots, you'll discover that the stories you tell aren't just one possible interpretation — they're one of many, and the perspectives you never considered are precisely where your blind spots live. But you can only access multiple perspectives once you've separated your current narrative from the underlying facts. Otherwise, you're not exploring different viewpoints — you're defending a story you've mistaken for reality.
Sources
- Bruner, J. (1991). "The Narrative Construction of Reality." Critical Inquiry, 18(1), 1-21.
- Argyris, C. (1982). Reasoning, Learning, and Action: Individual and Organizational. San Francisco: Jossey-Bass.
- Ellis, A. (1957). "Rational psychotherapy and individual psychology." Journal of Individual Psychology, 13, 38-44.
- Beck, A.T. (1967). Depression: Clinical, Experimental, and Theoretical Aspects. New York: Harper & Row.
- Katie, B. (2002). Loving What Is: Four Questions That Can Change Your Life. New York: Harmony Books.
- Beyer, B. et al. (2016). Site Reliability Engineering: How Google Runs Production Systems. Sebastopol: O'Reilly Media. Chapter on Postmortem Culture.
- Chen, Y. et al. (2025). "Claim Extraction for Fact-Checking: Data, Models, and Automated Metrics." arXiv:2502.04955.