You remember your decisions wrong
A surgeon decides to operate. The patient recovers. The surgeon remembers carefully weighing the evidence, consulting the literature, factoring in comorbidities. She remembers a confident, well-reasoned choice.
Another surgeon makes the identical decision on identical evidence. The patient dies from complications. This surgeon remembers feeling rushed, remembers doubts she should have listened to, remembers the case as borderline from the start.
Same decision process. Different outcomes. Completely different memories of how the decision was made.
This is not a thought experiment. Baron and Hershey demonstrated exactly this pattern in 1988 across five experiments. They presented participants with identical medical decisions that differed only in outcome — success or failure. Participants consistently rated the decision-making process as better when the outcome was good and worse when the outcome was bad, despite the process being the same. The effect held even when participants explicitly agreed that outcomes should not influence their evaluation of decision quality.
Your memory of why you decided something is not a recording. It is a reconstruction — and the reconstruction is contaminated by what happened next.
The hindsight machine inside your head
In 1975, Baruch Fischhoff ran the study that named this phenomenon. He gave participants background information about an obscure historical conflict between British and Nepalese forces in 1814, then told four groups different outcomes — British victory, Nepalese victory, stalemate with peace, stalemate without peace. A fifth group received no outcome. Each group estimated the probability of all four results.
The finding was stark: people who were told a specific outcome occurred rated that outcome as significantly more probable than people who were not told any outcome. Once you know what happened, your brain silently revises the probabilities you would have assigned beforehand. Fischhoff called it the "knew-it-all-along" effect.
Fifty years and hundreds of replications later, the finding is ironclad: you cannot accurately remember what you believed before you learned the outcome. Your brain doesn't store the original belief and then flag the update. It overwrites the original. The previous version is gone.
This means every undocumented decision is vulnerable. Not to forgetting — to rewriting. You will remember your reasoning, but it will be the wrong reasoning. It will be a version edited by hindsight to be more consistent with what actually happened. And you will have no way to detect the edit, because the original was overwritten.
Why the "why" matters more than the "what"
Most people, if they track decisions at all, write down what they decided. "Accepted the job offer." "Chose the React framework." "Invested in index funds." This is a changelog. It tells you nothing useful.
The value of a decision record lives entirely in the reasoning — the assumptions, constraints, alternatives considered, and expected outcomes at the time of the decision. This is what Annie Duke, former professional poker player and author of Thinking in Bets, calls the difference between resulting and process evaluation.
Resulting is the default human behavior: judging a decision by its outcome. You got promoted after taking the risky project, so it was a good decision. The startup failed, so joining it was a bad decision. Duke points out that this reasoning is identical to concluding that a poker player who won a hand played well. In any domain with uncertainty — which is every domain — good processes produce bad outcomes and bad processes produce good outcomes regularly enough that outcome alone tells you almost nothing about decision quality.
Process evaluation requires knowing what the process actually was. Not what you remember it being after the outcome became known. What it was at the time, documented at the time, in your own handwriting or keystrokes, before hindsight could edit it.
This is what Daniel Kahneman recommended in a now-famous suggestion: "Go down to a local drugstore and buy a very cheap notebook and start keeping track of your decisions." The specific instruction was to write down what you expect to happen and why you expect it to happen, at the moment of the decision — before the outcome can contaminate your memory of the reasoning.
Kahneman wasn't offering productivity advice. He was prescribing a direct countermeasure to a cognitive bias that is otherwise undetectable from the inside.
The four fields that make a decision record useful
A decision journal does not need to be elaborate. Overdesigning the format is a common failure mode — people build beautiful templates, fill out the first three entries with care, and abandon the practice because it takes 20 minutes per decision. The goal is capturing the reasoning in under 5 minutes.
Four fields are sufficient:
1. The decision. What you chose, stated simply. "Hired candidate A over candidate B." "Migrated to PostgreSQL instead of staying on MongoDB." "Turned down the speaking engagement."
2. The reasoning. The 2-4 factors that drove the choice. Not a dissertation — the actual reasons, in order of weight. "Candidate A had direct experience with our stack, asked better questions in the systems design round, and the team unanimously preferred working with her. Candidate B had stronger credentials on paper but couldn't articulate trade-offs clearly."
3. The expected outcome. What you think will happen as a result, including your confidence level. "I expect candidate A to be productive within 3 weeks. 70% confident she'll still be on the team in a year. Main risk: she may find the role too narrow given her previous scope."
4. The review date. When you will look at this entry again and evaluate the outcome against your expectations. Default to six months for significant decisions, one month for smaller ones. Shane Parrish of Farnam Street recommends six months as the standard review window — long enough for outcomes to materialize, short enough to remember the context.
That's it. The discipline is in the practice, not the sophistication of the template.
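If you keep the journal as a text file or anything else you can parse, the four fields map onto a very small structure. Here is a minimal sketch in Python; the class and field names are illustrative rather than any standard, and the example entry reuses the hiring decision from above.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class DecisionRecord:
    """One journal entry, captured at the moment of the decision."""
    decision: str           # 1. what you chose, stated simply
    reasoning: list[str]    # 2. the 2-4 factors that drove the choice, in order of weight
    expected_outcome: str   # 3. what you think will happen as a result
    confidence: float       # stated confidence in that outcome, 0.0-1.0
    review_date: date       # 4. when you will evaluate outcome against expectation
    decided_on: date = field(default_factory=date.today)

entry = DecisionRecord(
    decision="Hired candidate A over candidate B",
    reasoning=[
        "Direct experience with our stack",
        "Asked better questions in the systems design round",
        "Team unanimously preferred working with her",
    ],
    expected_outcome="Productive within 3 weeks; still on the team in a year",
    confidence=0.70,
    review_date=date.today() + timedelta(days=182),  # the six-month default
)
```

A paper notebook works just as well; the structure only matters if you later want to analyze the entries in aggregate, as sketched further down.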
Prospective hindsight: documenting what could go wrong
Gary Klein popularized the premortem technique in a 2007 Harvard Business Review article, building on 1989 research by Mitchell, Russo, and Pennington. Their study, "Back to the Future," found that imagining a future event as if it had already occurred increased participants' ability to generate plausible explanations for it by roughly 30%.
The premortem inverts the usual risk assessment. Instead of asking "what could go wrong?" — which triggers defensive reasoning and optimism bias — you tell the team: "It's two years from now. We made this decision and it was a disaster. Write down what went wrong." Kahneman himself endorsed this approach, calling it one of the most effective debiasing techniques available.
The power of the premortem for decision documentation is that it forces you to capture the dissenting view at the moment of the decision. Most decision records only preserve the reasons in favor. But six months later, when the decision has gone sideways, the most valuable thing you could have is a record of what you were worried about before you committed — and why you chose to proceed anyway.
Add a fifth field to your decision record for significant choices: Pre-mortem risks. Two or three sentences about how this decision could fail. This is the information that hindsight bias will most aggressively erase, because once a decision goes wrong, your brain will manufacture the memory that you "always knew" it was risky — which teaches you nothing. The record of your specific, named concerns teaches you everything.
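Continuing the hypothetical Python sketch from earlier, the fifth field is one more attribute on the record, written before you commit:

```python
@dataclass
class SignificantDecisionRecord(DecisionRecord):
    """Adds the pre-mortem field for consequential choices."""
    # Written as if the decision has already failed: two or three
    # specific, named failure modes -- the detail hindsight erases.
    premortem_risks: list[str] = field(default_factory=list)
```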
Architecture Decision Records: the engineering precedent
Software engineering solved this problem at the organizational level in 2011 when Michael Nygard published "Documenting Architecture Decisions." He observed that as projects age and team members rotate, the reasoning behind architectural choices evaporates. New engineers encounter a codebase and ask, "Why did we use this database? Why is this service split this way? Why are we not using the obvious approach?" Nobody remembers. The decision is visible in the code. The reasoning is gone.
Nygard's solution was the Architecture Decision Record (ADR) — a short document with a fixed structure: Title, Status, Context, Decision, Consequences. The critical section is Context — the constraints, trade-offs, and alternatives that existed at the time. Without context, a decision record is just a changelog entry. With context, it becomes a learning artifact that future team members (or your future self) can evaluate against the actual outcome.
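To make the shape concrete, here is a hypothetical single ADR following those five sections, reusing the database choice mentioned earlier in this lesson; the content is invented for illustration, not taken from Nygard's article.

```
Title: 7. Use PostgreSQL for the orders service
Status: Accepted

Context: The orders service needs transactional guarantees across order
creation and inventory reservation. The team already runs MongoDB for the
catalog, but multi-document transactions there have been a recurring source
of bugs. Alternatives considered: staying on MongoDB; adopting a managed
relational service.

Decision: Migrate the orders service to PostgreSQL.

Consequences: Two data stores to operate instead of one. Simpler
transactional code in the orders service. Migration estimated at three
sprints; the catalog stays on MongoDB.
```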
The ADR pattern has since been adopted by organizations including the UK Government Digital Service, ThoughtWorks, and hundreds of open-source projects. The format works because it's lightweight enough to actually use and structured enough to capture the information that matters.
Your personal decisions deserve the same discipline. You are the aging codebase. Your future self is the new engineer who has lost the context. The decision journal is your personal ADR log.
What a decision journal reveals over time
The compound value of a decision journal does not appear in the first week. It appears after three months of entries and your first review cycle. That is when patterns emerge that are invisible in the moment:
You discover your actual decision-making tendencies. Not the ones you believe you have — the ones you actually exhibit. You may discover that you consistently underweight a specific type of risk. Or that your confidence levels are systematically miscalibrated — you assign 80% confidence to outcomes that materialize 50% of the time. Or that you make worse decisions under specific conditions (time pressure, social pressure, fatigue) and better ones under others.
You separate luck from skill. Some of your good outcomes came from good decisions. Others came from bad decisions that got lucky. Without a record of your reasoning and expected outcomes, you cannot tell the difference. With one, you can identify which of your decision patterns actually work and which ones happened to work once.
You build genuine calibration. Over dozens of entries, your confidence estimates become feedback signals. If you say "70% confident" about outcomes and those outcomes occur 70% of the time, your internal probability estimates are well-calibrated. If your "70% confident" outcomes occur 40% of the time, you have a systematic overconfidence problem that is invisible without data.
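With entries structured like the earlier sketch, that calibration check is a few lines of analysis. The snippet below assumes each reviewed entry has gained an outcome_occurred flag (True, False, or None if not yet reviewed), an illustrative field rather than part of any standard format: it buckets decisions by stated confidence and compares against the realized hit rate.

```python
from collections import defaultdict

def calibration_report(entries):
    """Compare stated confidence with how often those outcomes came true."""
    buckets = defaultdict(list)
    for e in entries:
        if e.outcome_occurred is None:      # skip entries not yet reviewed
            continue
        # Round stated confidence to the nearest 10% bucket (0.72 -> 0.7).
        buckets[round(e.confidence, 1)].append(e.outcome_occurred)

    for conf, results in sorted(buckets.items()):
        hit_rate = sum(results) / len(results)
        print(f"said {conf:.0%}: happened {hit_rate:.0%} of the time "
              f"({len(results)} decisions)")

# A line like "said 70%: happened 40% of the time (12 decisions)" is the
# systematic overconfidence described above, made visible.
```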
Philip Tetlock's research on superforecasters, published in Superforecasting (2015), found that the single strongest predictor of forecasting accuracy was the habit of tracking predictions, reviewing outcomes, and updating beliefs based on the gap. The superforecasters were not smarter. They were more disciplined about closing the feedback loop between prediction and result. A decision journal is the personal-scale version of this practice.
Decisions as epistemic raw material for AI
Every documented decision — with its reasoning, expected outcome, and eventual result — is a data point about how your mind works. Individually, each entry is a snapshot. Collectively, they form a dataset of your cognitive patterns.
An AI system with access to your decision journal can identify patterns you would never see on your own: recurring blind spots across domains, systematic overconfidence in specific types of estimates, decision quality that degrades under identifiable conditions. It can cross-reference your decision reasoning with your captured surprises (L-0057) and surface cases where something that surprised you should have updated a decision you were about to make.
This only works if the reasoning is captured alongside the decision. An AI looking at a list of outcomes — "hired person A, chose technology B, invested in C" — has nothing to work with. An AI looking at the reasoning, constraints, alternatives rejected, and expected outcomes has material it can analyze for patterns, challenge for consistency, and compare against your own track record.
The decision journal is not just a debiasing tool. It is epistemic infrastructure — raw material that becomes exponentially more valuable as it accumulates and as the tools available to analyze it become more capable.
The practice starts smaller than you think
You do not need to document every decision. You need to document the ones that are consequential enough that you will want to learn from them later and uncertain enough that you cannot predict the outcome with high confidence. For most people, this means 2-5 decisions per week.
The format does not matter. A text file, a notebook, a spreadsheet, a dedicated app — they all work. What matters is that you capture the reasoning at the time of the decision, before the outcome has a chance to edit your memory.
Start today. Pick the next decision you face that involves genuine uncertainty. Before you commit, spend three minutes writing down: what you decided, why, what you expect to happen, and when you will review it. That is a complete decision record. It is now safe from hindsight bias. Your future self will thank you — or more precisely, your future self will have the data to actually learn from the decision instead of constructing a retrospective fiction about it.
In the next lesson, you will learn how to build capture points like this into your environment so they trigger automatically — so that the decision journal is not something you have to remember to use, but something your workflow makes unavoidable.