The belief changed. You lost the receipt.
You used to think remote work was less productive than office work. Now you believe the opposite. When did that change? More importantly — what changed it? Was it a specific study you read? A personal experience during the pandemic? A conversation with a colleague whose judgment you trust? A slow accumulation of observations over months?
If you are like most people, you cannot answer with precision. You know the belief changed. You can articulate the new version. But the trigger — the specific piece of evidence or experience that actually shifted your model — is gone. You have the updated schema but not its provenance. You have the commit but not the commit message.
This matters more than it appears to. L-0301 established that schemas must evolve or become obsolete. L-0302 argued that updating is not admitting defeat. L-0303 showed that small, frequent updates beat large, rare overhauls. But none of that addresses a critical operational question: when you do update a schema, how do you preserve the reasoning behind the change?
Without trigger tracking, schema evolution becomes opaque even to you. You cannot evaluate whether past changes were warranted. You cannot detect patterns in what kinds of evidence actually move you. You cannot share your reasoning with others or with your future self. You end up with a pile of current beliefs and no audit trail — a mind that knows what it thinks but not why it changed.
Why provenance disappears by default
The human memory system is not designed to preserve the provenance of belief changes. It is designed to maintain a coherent current model of the world — and coherence is the enemy of accurate change-tracking.
Daniel Kahneman describes this in Thinking, Fast and Slow (2011) as the "what you see is all there is" (WYSIATI) principle. Your cognitive system constructs the most coherent story it can from currently available information, and it does so without flagging what has been lost, rewritten, or reconstructed. When a schema updates, the new version does not append itself to the old one. It overwrites it. Your brain performs an in-place update, not a versioned save.
This is compounded by hindsight bias — the well-documented tendency to believe, after learning an outcome, that you would have predicted it. Baruch Fischhoff's foundational research in the 1970s demonstrated that once people know how something turned out, they systematically misremember their prior beliefs as having been closer to the actual outcome than they were. Applied to schema evolution, this means that after you update a belief, you will remember the old belief as having been "almost there" already — minimizing the magnitude of the change and distorting your memory of what triggered it.
The Farnam Street decision journal framework addresses exactly this problem in the domain of decisions. As Shane Parrish argues, "Your brain actively edits the past to make you look better and smarter. The result is that you don't really know what you actually thought at the time you made a certain decision." The decision journal captures reasoning in real time — what you believed, what you assumed, what you expected, and what you considered — so that your retrospective self cannot silently rewrite the record. The same logic applies to schema changes. If you do not record the trigger at the moment of change, your retroactive account will be a rationalization, not a record.
The lab notebook principle: record what prompted the revision
Every functioning knowledge institution has solved this problem. Science solved it with lab notebooks. Software engineering solved it with commit messages and changelogs. Machine learning solved it with experiment tracking systems. The common principle across all of these is the same: record the reason for the change at the time of the change, because you will not be able to reconstruct it accurately later.
Scientific lab notebooks. The lab notebook is not just a record of what was done. It is a record of why decisions were made. The NIH's best practices for laboratory notebooks specify that researchers should document the goal of each experiment, the hypothesis being tested, and — critically — any deviations from the protocol along with the reasoning behind them. As a PLOS Computational Biology paper on experiment provenance notes, "as data migrate from the experimentalist's mind and notebook to publication, the lab server, the archival database, or the cloud, this essential information now vanishes." The notebook exists to prevent that vanishing. Good notes provide "justification for why certain research choices were made."
The lesson for personal epistemology is direct. Your mind is the lab. Your schemas are the hypotheses. When a hypothesis changes — when you revise a belief based on new evidence — the trigger for that revision is provenance data that you will lose if you do not write it down.
Git commit messages. In software engineering, the consensus best practice for commit messages is to explain the "why," not just the "what." As multiple widely cited guidelines emphasize, the body of a commit message should explain "the motivation behind the change" and "highlight any notable contrasts with previous behavior." A commit message that says "changed the database query" is nearly useless. A message that says "changed the database query because the previous implementation caused N+1 queries under load, as observed in production monitoring on Jan 15" is a piece of recoverable institutional knowledge.
The parallel to schema tracking is precise. "I now believe X" is a commit with no message. "I now believe X because conversation with Dr. Y on March 3 exposed an assumption I had not examined, and their counter-example in domain Z was one I could not answer" is a schema change with provenance. The second version lets your future self evaluate whether the change was warranted, whether the trigger was strong enough to justify the update, and whether similar triggers in similar domains should produce similar updates.
Root cause analysis. Toyota's 5 Whys method, developed by Sakichi Toyoda and later formalized by Taiichi Ohno, provides another angle on trigger tracking. The technique was originally developed not for root cause analysis of failures but to understand why new manufacturing techniques were needed — that is, to trace the chain of causes that triggered a change. The method asserts that by asking "why" five times in succession, you can move from the surface trigger to the root cause.
Applied to schema changes, the 5 Whys reveals that most triggers are not atomic events but causal chains. You did not change your belief about remote work because of "the pandemic." You changed it because the pandemic forced you to work from home (trigger 1), which revealed that your commute had been consuming two hours of your most productive time (trigger 2), which led you to track your output over three months (trigger 3), which showed a measurable productivity increase (trigger 4), which conflicted with your prior model enough to force an update (trigger 5). Capturing only "the pandemic" as a trigger loses the evidential chain. Capturing the full chain preserves provenance at the resolution where it is actually useful.
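The remote-work chain above can be captured as an ordered list rather than a single string. This is a minimal sketch; the representation and helper names are illustrative assumptions, not a fixed format:

```python
# The remote-work example, recorded as a causal chain rather than one event.
# Index 0 is the surface trigger; the last element is the point where the
# evidence conflicted with the prior model and forced the update.
trigger_chain = [
    "pandemic forced work from home",
    "revealed commute consumed two hours of peak productive time",
    "led to tracking output over three months",
    "showed a measurable productivity increase",
    "conflicted with prior model enough to force an update",
]

def surface_trigger(chain):
    """The first 'why': what a retrospective account would name."""
    return chain[0]

def root_trigger(chain):
    """The last 'why': where the evidence actually met the model."""
    return chain[-1]
```

Recording the whole list costs one extra minute at capture time; reconstructing it later, as the hindsight-bias research above suggests, is usually impossible.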
Decision journals: the closest existing practice
The practice closest to trigger tracking for schema changes is the decision journal — a structured log of decisions recorded at the time they are made, designed to be reviewed against outcomes at a later date.
Annie Duke, in Thinking in Bets (2018), argues that improving decision quality requires separating the quality of a decision from the quality of its outcome. A good decision made with good reasoning can produce a bad outcome due to luck; a bad decision made with poor reasoning can produce a good outcome for the same reason. The only way to distinguish skill from luck is to track your reasoning at the time of the decision, then compare it against what actually happened. Duke emphasizes belief calibration — expressing confidence as a percentage rather than as certainty — and using truth-seeking groups for collaborative feedback on reasoning quality.
The Farnam Street decision journal template operationalizes this with specific fields: date, decision, mental and physical state, the situation as you understood it, the variables that mattered, the expected outcome, and the alternatives you considered. The power of this structure is not in any individual field but in the act of real-time capture. "By documenting and periodically reviewing the decisions you make over time, you'll get a better grasp on your state of mind and identify things like trends or common traps you find yourself falling into."
A trigger log for schema changes borrows this structure but shifts the focus from decisions to belief updates. Where a decision journal asks "What am I deciding and why?", a trigger log asks "What am I now believing differently, and what caused the change?" The fields are different — old schema, trigger, new schema, confidence — but the underlying practice is the same: real-time capture of reasoning that your future self will not be able to reconstruct accurately.
What trigger tracking reveals about you
James Pennebaker's four decades of research on expressive writing demonstrate that the act of recording cognitive and emotional events produces benefits that go beyond simple documentation. In Pennebaker's paradigm, participants who wrote about significant experiences for 15 minutes a day over four days showed measurable improvements in physical health, psychological well-being, and cognitive processing. Critically, the participants who improved most were those who used more causal and insight language — words like "because," "realize," "understand," and "reason" — suggesting that the act of constructing causal narratives about experiences is itself a cognitive intervention, not just a record-keeping exercise.
Applied to trigger tracking, this research suggests that logging what triggered a schema change is not merely archival. It is epistemic. The act of writing "I changed belief X because of evidence Y" forces you to identify the causal link between evidence and belief — a link that might otherwise remain implicit, unexamined, or fictitious. You might discover, in the act of writing, that the stated trigger is not actually what moved you. The data was the stated reason, but the real trigger was the emotional weight of a conversation. The research paper was the stated reason, but the real trigger was that someone you respect cited it. Writing exposes these gaps between your official and actual epistemology.
Over time, a trigger log also reveals your epistemic profile — the characteristic patterns in what kinds of evidence actually cause you to update your schemas. Some people are primarily moved by data. Others by direct experience. Others by authority — they change their minds when someone they respect models a different position. Others by social proof — they update when enough people around them have updated. Others primarily by emotional events — a single vivid experience outweighs months of statistical evidence.
None of these profiles is inherently better or worse. But knowing your profile is enormously valuable for epistemic self-management. If you discover that you are primarily moved by emotional experiences, you can build in checks for whether the evidence behind the emotion actually warrants the schema change. If you discover that you are primarily moved by authority, you can examine whether you are tracking evidence or tracking prestige. The trigger log is, over time, a mirror of your epistemic character — and you cannot adjust what you cannot see.
AI and the Third Brain: data provenance as institutional practice
The challenge of tracking what triggers model updates is not unique to human cognition. It is one of the central problems in machine learning engineering, and the solutions that field has developed provide a precise analog for personal epistemic infrastructure.
Model cards. Google's model card framework, introduced by Mitchell et al. in 2019, standardizes the documentation of machine learning models. A model card records what data the model was trained on, how it was evaluated, what its known limitations are, and — critically — the provenance of the information that shaped its behavior. When a model is retrained or fine-tuned, the card is updated to reflect what new data caused the change and why the change was made. The purpose is not just transparency for external audiences. It is institutional memory: ensuring that the team can reconstruct why the model behaves the way it does, months or years after the original training decisions were made.
Experiment tracking. Tools like MLflow and Weights & Biases exist specifically to solve the trigger-tracking problem at industrial scale. MLflow records each experiment as a "run" that captures the code version, parameters, metrics, and artifacts — a complete record of what was done, what changed, and what the results were. Weights & Biases tracks "every part of the model training process," including the specific data and hyperparameter changes that led to each iteration of the model. The fundamental principle of both systems is that a model whose training history is not recorded is a model that cannot be debugged, evaluated, or trusted.
Data cards. Google's complementary data card framework documents the provenance of datasets themselves — where the data came from, how it was collected, what biases it might contain, and what risks arise from its use. The purpose is to ensure that when a model's behavior changes, the team can trace that change back to the specific data that caused it. Without data provenance, a model that starts producing biased outputs cannot be diagnosed. With it, the team can identify which dataset, which collection method, or which labeling decision introduced the bias — and fix it.
The parallel to personal trigger tracking is structural, not metaphorical. Your schemas are your models. Your experiences, conversations, and observations are your training data. When a schema changes, the question "what data caused this change?" is identical in structure to the question an ML engineer asks when a model's behavior shifts. And the consequence of not being able to answer that question is the same in both cases: you cannot evaluate whether the change was warranted, you cannot debug problems, and you cannot build on the change systematically.
If you use AI tools in your thinking — and you will, increasingly — trigger tracking becomes even more critical. When an LLM presents an argument that changes your mind, record that. When an AI-generated summary highlights a pattern you had not noticed, record that. When a conversation with an AI assistant surfaces an assumption you had not examined, record that. Your Third Brain should have the same provenance standards as any well-run ML system: every significant change traceable to its cause.
Protocol: maintaining a trigger log
Here is a practical protocol for trigger tracking that you can start immediately and sustain indefinitely.
Step 1: Choose a capture medium. The format matters less than the consistency. It can be a section in a notebook, a dedicated digital document, or a running note in your preferred app. The requirement is that it is always accessible when you notice a schema change, and that entries accumulate in chronological order.
Step 2: Define the entry format. Each entry records five fields:
- Date: When you noticed the change.
- Schema (before): What you previously believed, stated concisely.
- Trigger: The specific evidence, experience, conversation, or observation that initiated the change. Be as precise as possible. "Read an article" is weak. "Read Smith's 2024 analysis showing that X correlates with Y at r=0.7, which directly contradicted my assumption that X and Y were independent" is strong.
- Schema (after): What you now believe or are moving toward.
- Trigger type: Categorize the trigger — data, direct experience, authority, social proof, emotional event, logical argument, or other. This field builds your epistemic profile over time.
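For a digital log, the five fields above can be encoded directly. This is one possible encoding, a sketch rather than a standard; the class and field names are assumptions:

```python
from dataclasses import dataclass
from datetime import date

# The trigger-type categories from Step 2, used to build the epistemic
# profile over time.
TRIGGER_TYPES = {
    "data", "direct_experience", "authority",
    "social_proof", "emotional_event", "logical_argument", "other",
}

@dataclass
class TriggerEntry:
    logged_on: date        # Date: when the change was noticed
    schema_before: str     # Schema (before): the prior belief, stated concisely
    trigger: str           # Trigger: the specific evidence or event
    schema_after: str      # Schema (after): the new or emerging belief
    trigger_type: str      # Trigger type: one of TRIGGER_TYPES

    def __post_init__(self):
        # Reject free-form categories so the profile stays countable.
        if self.trigger_type not in TRIGGER_TYPES:
            raise ValueError(f"unknown trigger type: {self.trigger_type}")

entry = TriggerEntry(
    logged_on=date(2024, 3, 3),
    schema_before="Remote work is less productive than office work.",
    trigger="Three months of output tracking showed a measurable increase.",
    schema_after="Remote work can be more productive for focused work.",
    trigger_type="data",
)
```

Constraining the trigger type to a fixed vocabulary is deliberate: the monthly review in Step 4 depends on being able to count categories, not parse prose.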
Step 3: Log in real time. The entry must be written when the change is noticed, not retrospectively. If you cannot write a full entry, write at minimum the date and trigger — the two fields most vulnerable to memory distortion. Fill in the rest within 24 hours.
Step 4: Review monthly. At the end of each month, read through your trigger log and ask three questions: (1) What patterns appear in my trigger types? (2) Were any changes disproportionate to the evidence — strong updates from weak triggers, or weak updates from strong triggers? (3) Are there schema changes I remember making but did not log? The unlogged changes are the ones most likely to have been driven by motivated reasoning rather than evidence.
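The first review question, the pattern in trigger types, reduces to a tally. A minimal sketch, with illustrative month data assumed:

```python
from collections import Counter

def trigger_profile(trigger_types):
    """Count how often each trigger type moved you this month."""
    return Counter(trigger_types)

# Hypothetical month of logged trigger types.
march = ["data", "authority", "authority", "emotional_event", "authority"]
profile = trigger_profile(march)
dominant, count = profile.most_common(1)[0]

# A skew toward "authority" is the cue, per the epistemic-profile
# discussion above, to check whether you are tracking evidence or prestige.
```

The second and third questions resist automation: judging whether an update was proportionate to its trigger, and noticing unlogged changes, are exactly the reflective work the review exists for.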
Step 5: Feed forward to versioning. Each trigger log entry is an input to the schema versioning practice introduced in L-0305. The trigger is the "why" of the version change. Without it, version histories are just a sequence of states with no explanatory connective tissue. With it, each version is a node in a causal chain — linked to the evidence that produced it and available for future evaluation.
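The connective tissue Step 5 describes can be sketched as a version history in which each state carries the trigger that produced it. The structure below is hypothetical, anticipating the versioning practice of L-0305 rather than quoting it:

```python
# Each schema version stores its trigger, so the history reads like a
# changelog with commit messages rather than a bare sequence of states.
history = [
    {"version": "v1",
     "schema": "Remote work is less productive than office work.",
     "trigger": None},  # initial state: no recorded provenance
    {"version": "v2",
     "schema": "Remote work can be more productive for focused work.",
     "trigger": "Three months of output tracking showed a measurable increase."},
]

def why(history, version):
    """Return the trigger behind a version, i.e. its commit message."""
    for entry in history:
        if entry["version"] == version:
            return entry["trigger"]
    raise KeyError(version)
```

Without the trigger field, `history` is exactly the "sequence of states with no explanatory connective tissue" described above; with it, each version answers its own "why."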
From triggers to versions
Tracking what triggered a schema change is not an end in itself. It is the foundation for something more powerful: a versioned, auditable history of your own thinking. When you know not just what you believe but why each belief changed, you gain the ability to evaluate your own epistemic trajectory — to see whether your schemas are converging on accuracy or drifting with social currents, whether your updates are evidence-driven or emotion-driven, whether your epistemic infrastructure is improving over time or merely churning.
L-0305 introduces explicit schema versioning — labeling each version of a schema so you can compare current thinking to past thinking. But versioning without trigger tracking is like a changelog without commit messages: you can see that something changed, but you cannot understand why. The trigger log provides the explanatory layer that makes versioning meaningful.
Every serious knowledge system — from scientific journals to software repositories to machine learning experiment trackers — maintains both the record of change and the reason for change. Your personal epistemic infrastructure deserves the same standard. The belief changed. Keep the receipt.
Sources
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
- Fischhoff, B. (1975). Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288-299.
- Duke, A. (2018). Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts. Portfolio/Penguin.
- Parrish, S. (2014). How a decision journal changed the way I make decisions. Farnam Street. https://fs.blog/decision-journal/
- Pennebaker, J. W. (2018). Expressive writing in psychological science. Perspectives on Psychological Science, 13(2), 226-229.
- Ohno, T. (1988). Toyota Production System: Beyond Large-Scale Production. Productivity Press.
- Mitchell, M., et al. (2019). Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220-229.
- Pushkarna, M., Zaldivar, A., & Kjartansson, O. (2022). Data cards: Purposeful and transparent dataset documentation for responsible AI. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1776-1826.
- Sandoval, S., et al. (2015). Ten simple rules for experiments' provenance. PLOS Computational Biology, 11(10), e1004384.