Eleven lines of code that broke the internet
On March 22, 2016, a programmer named Azer Koculu unpublished a package called left-pad from npm, the JavaScript package registry. The package was eleven lines of code. It did one thing: pad the left side of a string with spaces or zeros. Trivial functionality. But left-pad was a transitive dependency of Babel, which sat in the build toolchain of React and of thousands of production applications at Facebook, Netflix, PayPal, and Spotify. Within hours, builds failed across the industry. CI pipelines went red. Deployment queues froze. Thousands of engineers stared at a 404 error they could not explain, because a package they had never heard of was three levels deep in a dependency chain they had never mapped.
The left-pad incident did not break anything directly. It broke a dependency, which broke a dependency, which broke everything.
Your belief system works the same way. You do not hold beliefs in isolation. Each one rests on others, supports others, and connects to a network of assumptions so deeply interwoven that you rarely see the structure until something snaps. When it does snap, the damage does not stay local. It cascades. And if you have never mapped the dependencies, you will not understand why a single changed belief can destabilize an entire domain of your thinking.
Beliefs are not freestanding. They are graphs.
The intuitive model of belief is a list: I believe X, I believe Y, I believe Z. Each belief sits independently, like books on a shelf. You can remove one without disturbing the others.
This model is wrong. Beliefs are a graph — a structure of nodes and directed edges where some beliefs depend on others, some support others, and some are only reachable through a chain of intermediaries.
Epistemologists have debated the structure of this graph for centuries. Foundationalism, one of the oldest positions in epistemology, argues that justified beliefs divide into two kinds: basic beliefs that are self-justifying, and derived beliefs that depend on basic beliefs for their justification. Descartes' "I think, therefore I am" was his candidate for the ultimate basic belief — the one node in the graph with no incoming edges, the foundation on which everything else could be built. On the foundationalist view, your belief graph is a tree: a small set of foundational nodes at the root, and every other belief deriving its justification from a path back to those roots.
W. V. Quine and J. S. Ullian rejected this architecture in The Web of Belief (1970). Quine argued that beliefs do not form a tree with a privileged foundation. They form a web — a densely interconnected network where any belief can, in principle, be revised, and where revising one belief forces adjustments throughout the connected structure. On Quine's view, even logic and mathematics sit inside the web, not beneath it. No belief is truly foundational. Every node has edges.
Quine's key insight, developed in his 1951 paper "Two Dogmas of Empiricism," was that we do not test individual beliefs against experience. We test the whole system. When experience contradicts a prediction, we have a choice about where in the web to make the adjustment. You can revise the belief closest to the observation, or you can revise something deeper in the network. The structure of your dependencies determines which revisions are cheap and which are catastrophic.
This matters for your personal epistemology because it means that understanding what you believe requires understanding how your beliefs are connected. Your schema inventory — the work of L-0324 — gave you the list of nodes. This lesson gives you the edges.
The anatomy of a schema dependency
A schema dependency exists whenever one belief requires another to be true in order to function. The relationship is directional: A depends on B means that if B changes, A must be re-evaluated.
Consider a working professional who holds the schema "I should pursue the promotion." That schema does not stand alone. It depends on several deeper schemas:
- "Career advancement leads to financial security" (a schema about how careers work)
- "Financial security is important for my family" (a schema about values)
- "This company rewards merit" (a schema about the specific environment)
- "I am capable of performing at the next level" (a schema about self-assessment)
Each of those schemas, in turn, depends on others. "This company rewards merit" might depend on "My manager has decision-making power" and "The performance review process is fair." These are transitive dependencies — they are two levels removed from the surface belief but still load-bearing.
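As a rough sketch, the promotion example above can be written down as a small directed graph and walked to enumerate every dependency, direct and transitive. All schema names here are illustrative, taken from the lists above:

```python
# Edges point from a belief to the beliefs it depends on.
DEPENDS_ON = {
    "I should pursue the promotion": [
        "Career advancement leads to financial security",
        "Financial security is important for my family",
        "This company rewards merit",
        "I am capable of performing at the next level",
    ],
    "This company rewards merit": [
        "My manager has decision-making power",
        "The performance review process is fair",
    ],
}

def all_dependencies(schema, graph):
    """Return every direct and transitive dependency of a schema."""
    seen, stack = set(), list(graph.get(schema, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

deps = all_dependencies("I should pursue the promotion", DEPENDS_ON)
direct = set(DEPENDS_ON["I should pursue the promotion"])
transitive = deps - direct
print(len(deps), len(transitive))  # 6 dependencies in total, 2 of them transitive
```

The two transitive dependencies are exactly the load-bearing beliefs two levels down: the surface schema never mentions the manager or the review process, yet it cannot stand without them.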
If "This company rewards merit" collapses — say the promotion goes to someone with half the qualifications but better political connections — the surface schema "I should pursue the promotion" does not just feel less certain. It structurally loses one of its supports. Whether it collapses depends on how many other supports remain and how critical the broken one was.
This is the same logic software engineers use when analyzing dependency graphs. In any package manager — npm, Maven, pip — each package declares its direct dependencies, and those dependencies have their own dependencies. The result is a directed acyclic graph (DAG) that can contain hundreds or thousands of nodes. The 2025 Open Source Security and Risk Analysis report found that the average software application contains more than 1,200 open-source components, 64 percent of which are transitive dependencies — packages that no developer explicitly chose but that arrived as dependencies of dependencies. Your belief system has the same structure: most of the assumptions you operate on are transitive dependencies you never consciously adopted.
Foundational nodes and the critical path
Not all nodes in a dependency graph carry equal weight. In project management, the critical path method (CPM) — developed by Morgan Walker and James Kelley at DuPont in 1957 — identifies the longest sequence of dependent tasks in a project. Any delay on the critical path delays the entire project. Tasks off the critical path have float: they can slip without consequence.
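The core CPM computation is short enough to sketch. This toy example uses made-up task names and durations, not anything from a real project; the earliest finish time of each task is its own duration plus the latest finish among its prerequisites, and the project length is the maximum over all tasks:

```python
from functools import lru_cache

# Hypothetical tasks: durations in days, and prerequisites per task.
DURATION = {"design": 3, "build": 5, "test": 2, "docs": 1}
NEEDS = {"build": ["design"], "test": ["build"], "docs": ["design"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """A task finishes after its duration plus its slowest prerequisite."""
    prereqs = NEEDS.get(task, [])
    return DURATION[task] + max((earliest_finish(p) for p in prereqs), default=0)

# design -> build -> test is the critical path (3 + 5 + 2 = 10 days);
# "docs" finishes on day 4 and has six days of float.
print(max(earliest_finish(t) for t in DURATION))  # 10
```

Delaying "docs" changes nothing; delaying "design" by a day delays the whole project by a day. The same asymmetry holds for beliefs on and off your schema graph's critical paths.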
Your schema graph has critical paths too. Some beliefs are foundational — they appear as dependencies for many other beliefs, and any change to them propagates widely. Others are peripheral — they connect to few other nodes and can change without ripple effects.
Consider two schemas:
- "Sushi restaurants in this neighborhood are overpriced."
- "I am fundamentally competent."
The first schema has few dependents. If you discover a reasonably priced sushi place, you update the belief and nothing else changes. No cascade. No existential wobble. This is a leaf node.
The second schema is foundational. "I am fundamentally competent" is a dependency for "I can handle new challenges," which is a dependency for "I should apply for that role," which is a dependency for "My career trajectory makes sense." It also supports "I deserve this relationship," "My opinions are worth sharing," and "I can recover from setbacks." If Schema 2 cracks — through a sustained failure, a harsh evaluation, a professional humiliation — the cascade propagates through dozens of dependent beliefs. The person does not experience a single belief changing. They experience what feels like an identity crisis, because the node that failed was deep in the dependency graph and half their schema graph was built on top of it.
Judea Pearl's work on causal graphs formalizes this intuition. In Pearl's framework, a directed acyclic graph represents causal relationships between variables. A node that has many "descendants" — variables that are causally downstream — has high causal influence. Intervening on that node changes the values of everything downstream. Pearl calls this the "do-calculus": the formal machinery for computing what happens when you change a variable rather than merely observing it. In your schema graph, revising a foundational belief is not an observation. It is an intervention. And Pearl's framework tells you that the effects propagate to every descendant in the graph.
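Counting descendants makes the contrast between the two schemas concrete. This sketch reverses the edge direction — edges point from a belief to the beliefs that depend on it — and uses a trimmed, illustrative version of the competence example:

```python
SUPPORTS = {  # edge: belief -> beliefs that depend on it
    "I am fundamentally competent": [
        "I can handle new challenges",
        "My opinions are worth sharing",
        "I can recover from setbacks",
    ],
    "I can handle new challenges": ["I should apply for that role"],
    "I should apply for that role": ["My career trajectory makes sense"],
    "Sushi here is overpriced": [],  # a leaf node: no dependents
}

def descendants(node, graph):
    """Every belief downstream of `node` — what an intervention touches."""
    seen, stack = set(), list(graph.get(node, []))
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

print(len(descendants("I am fundamentally competent", SUPPORTS)))  # 5
print(len(descendants("Sushi here is overpriced", SUPPORTS)))      # 0
```

Revising the sushi belief touches nothing; revising the competence belief forces a re-evaluation of five downstream nodes even in this tiny graph.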
How cascading failures actually work
Cascading failures in schema systems follow a predictable pattern. Understanding the pattern lets you anticipate the cascade instead of being blindsided by it.
Stage 1: A foundational schema takes a hit. Something happens that directly contradicts a belief deep in your dependency chain. Not a surface belief — those are easy to update. A load-bearing one. "People are fundamentally trustworthy." "Hard work is always rewarded." "I am in control of my outcomes."
Stage 2: Dependent schemas lose support. The beliefs that relied on the foundational schema do not immediately collapse. They become unstable. You notice a vague unease, a loss of confidence in things that seemed certain last week. You cannot pinpoint why your career plan suddenly feels hollow or why you are second-guessing your relationship. The dependency is invisible, so the cascade feels inexplicable.
Stage 3: Compensatory rationalization. Rather than tracing the cascade to its source, you try to shore up the dependent schemas independently. You generate new justifications for beliefs that already lost their foundation. This is cognitively expensive and structurally futile — like adding paint to a wall whose studs have rotted.
Stage 4: Cascading collapse or deliberate restructuring. If the compensation fails and enough dependent schemas destabilize, you hit a tipping point. Multiple beliefs fail simultaneously. The experience is disorienting and often gets labeled with clinical-sounding terms: crisis of faith, burnout, identity crisis, existential dread. Alternatively — and this is the goal of this lesson — you recognize the pattern, trace the cascade to the foundational node, and restructure deliberately.
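The four stages can be simulated with one assumption made explicit: a dependent belief collapses only once it loses some threshold fraction of its supports. The graph and the threshold here are illustrative, not a claim about how any real mind weighs evidence:

```python
SUPPORTED_BY = {  # belief -> the beliefs that support it
    "career plan makes sense": ["hard work is rewarded", "I chose the right field"],
    "I should stay late tonight": ["hard work is rewarded"],
}

def cascade(failed, supported_by, threshold=1.0):
    """Spread failure in waves: a belief fails once `threshold`
    of its supports have failed."""
    failed = set(failed)
    changed = True
    while changed:
        changed = False
        for belief, supports in supported_by.items():
            if belief in failed or not supports:
                continue
            lost = sum(s in failed for s in supports) / len(supports)
            if lost >= threshold:
                failed.add(belief)
                changed = True
    return failed

# Losing "hard work is rewarded" takes out the belief that rested on it
# alone; "career plan makes sense" survives on its second support.
print(sorted(cascade({"hard work is rewarded"}, SUPPORTED_BY)))
```

Lowering the threshold to 0.5 pulls "career plan makes sense" into the cascade too — redundant supports only protect a belief while you demand that most of them hold.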
Recent research in computational psychology has begun to formalize this pattern. Jonas Dalege and colleagues, building on the Causal Attitude Network (CAN) model developed with Denny Borsboom, have demonstrated empirically that when a targeted attitude is changed experimentally, non-targeted attitudes connected to it in the belief network also shift — and the closer they are in the network, the more they change. The research confirms what Quine theorized: beliefs are not independent. Changing one propagates through the structure. The network's topology determines the propagation pattern.
Dependency mapping in AI and extended cognition
If you delegate cognitive work to AI systems, you are building a second dependency graph — one that extends beyond your own mind.
Apache Airflow, the dominant workflow orchestration platform, models every data pipeline as a DAG: a directed acyclic graph of tasks with explicit dependencies. Task B cannot run until Task A completes successfully. Task C depends on both A and B. The graph makes the dependency structure visible, inspectable, and debuggable. When a task fails, the orchestrator knows exactly which downstream tasks are affected. It does not guess. It reads the graph.
Machine learning pipelines have the same structure. Feature engineering depends on data cleaning. Model training depends on feature engineering. Model evaluation depends on training. Deployment depends on evaluation passing. Each stage is a node. Each dependency is an edge. If your data cleaning step introduces a subtle error — say, a timezone conversion that silently shifts timestamps by an hour — the error propagates through every downstream node. The model trains on corrupted features. The evaluation looks fine because it uses the same corrupted data. The deployment goes live. The failure manifests weeks later in production predictions that are consistently wrong for a reason nobody can trace, because nobody mapped the dependency from the surface failure back to the root cause.
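Tracing from the surface failure back to the root cause is just the same graph walked upstream. This is a simplified stand-in for what a debugger or an orchestrator would do, using the pipeline stages from the example above (the stage names are illustrative, not Airflow API calls):

```python
DEPENDS_ON = {  # pipeline stage -> the stages it depends on
    "deploy": ["evaluate"],
    "evaluate": ["train_model"],
    "train_model": ["engineer_features"],
    "engineer_features": ["clean_data"],
}

def possible_causes(failing_stage, depends_on):
    """All upstream stages that could explain a failure seen at `failing_stage`."""
    causes, stack = set(), list(depends_on.get(failing_stage, []))
    while stack:
        stage = stack.pop()
        if stage not in causes:
            causes.add(stage)
            stack.extend(depends_on.get(stage, []))
    return causes

# Bad production predictions at "deploy" implicate every upstream stage,
# including the data-cleaning step where the timezone bug actually lives.
print(sorted(possible_causes("deploy", DEPENDS_ON)))
```

Without the written-down graph, the engineer is guessing at this set; with it, the search space for the root cause is exact.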
This is the AI parallel to the schema cascade. Your AI-augmented thinking has dependencies too. If you prompt an LLM with assumptions drawn from a stale or broken schema, the model produces outputs that inherit the error. Those outputs inform your decisions, which inform your actions, which produce outcomes that you evaluate using the same broken schema. The dependency chain runs from your foundational beliefs through your prompts, through the model's outputs, and back into your belief system. Mapping those dependencies is not optional if you want to think clearly with AI tools.
How to map your schema dependencies
Here is a concrete protocol for building a dependency map of your schema system.
Step 1: Start with a schema from your inventory. Pick a belief you identified in L-0324 — ideally one you act on frequently. Write it at the top of a page.
Step 2: Ask "What must be true for this to hold?" List every assumption, belief, or precondition that your schema requires. Do not filter for importance. Capture everything. If your schema is "I should stay at this company," the dependencies might include: "This company has a future," "My role here aligns with my goals," "The compensation is fair relative to alternatives," "My manager supports my growth," and "Leaving would be more disruptive than staying."
Step 3: Repeat for each dependency. Take each item you listed and ask the same question: what must be true for this to hold? You are building the second layer of the graph. "This company has a future" might depend on "The market for our product is growing" and "Leadership makes sound strategic decisions." Go two or three levels deep. Beyond that, you hit beliefs that are too abstract to be actionable.
Step 4: Identify foundational nodes. Look across your map for beliefs that appear as dependencies in multiple chains. These are your high-centrality nodes — the ones where a single change cascades widely. Flag them. They deserve the most scrutiny and the most frequent verification, because the cost of them being wrong is the highest.
Step 5: Mark the fragile paths. Some dependency chains have no redundancy: the surface belief rests on a single foundational belief with no alternative support. These are your single points of failure — the left-pad nodes in your belief system. If that one dependency breaks, the surface belief has no fallback. Identify these paths and ask: can I build additional support, or should I prepare for the possibility that this chain breaks?
Step 6: Document the graph. Write it down. A dependency map that exists only in your head is not a map — it is a vague sense of connectedness. The act of externalizing the structure is what transforms intuition into infrastructure. A simple indented list works. A diagram works better. The format matters less than the externalization.
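If you capture the map from Steps 1 through 3 as data, Steps 4 and 5 become mechanical. This sketch uses an illustrative fragment of the "stay at this company" example: a dependency shared by several chains is flagged as foundational, and any belief resting on exactly one support is flagged as a single point of failure:

```python
from collections import Counter

# Step 1-3 output, written down as data (illustrative example map).
DEPENDS_ON = {
    "I should stay at this company": [
        "This company has a future",
        "My role here aligns with my goals",
    ],
    "I should recruit my friend here": ["This company has a future"],
    "This company has a future": ["The market for our product is growing"],
}

# Step 4: high-centrality nodes — dependencies shared by multiple chains.
dependents = Counter(dep for deps in DEPENDS_ON.values() for dep in deps)
foundational = [dep for dep, n in dependents.items() if n > 1]

# Step 5: fragile paths — beliefs resting on a single support.
single_points = {
    belief: deps[0] for belief, deps in DEPENDS_ON.items() if len(deps) == 1
}

print(foundational)    # ['This company has a future']
print(single_points)
```

Even in this tiny map, "This company has a future" shows up twice: it is the highest-centrality node, and it is itself a single point of failure resting entirely on the market-growth belief — a left-pad node.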
From dependencies to conflicts
Mapping dependencies reveals one more thing that a flat schema inventory cannot show: where your schemas are likely to conflict.
When two schemas share a dependency but require it to have different values — one schema needs "People are fundamentally self-interested" to be true, while another needs "People are fundamentally cooperative" — the shared dependency becomes a fault line. The schemas cannot both be fully supported. They are structurally in tension, and that tension is invisible until you map the dependencies and discover the shared node with incompatible requirements.
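Detecting these fault lines requires recording not just which node a schema depends on but the value it needs that node to take. This sketch uses two invented schemas built on the self-interested-versus-cooperative example above:

```python
# Each schema maps a shared dependency to the value it requires.
REQUIRES = {
    "Negotiate hard on every deal": {"people are self-interested": True},
    "Assume good faith from colleagues": {"people are self-interested": False},
}

def fault_lines(requires):
    """Shared dependencies that two schemas need to hold different values."""
    conflicts = []
    schemas = list(requires)
    for i, a in enumerate(schemas):
        for b in schemas[i + 1:]:
            for node in requires[a].keys() & requires[b].keys():
                if requires[a][node] != requires[b][node]:
                    conflicts.append((a, b, node))
    return conflicts

print(fault_lines(REQUIRES))
```

The output names the two schemas and the shared node they disagree about — exactly the structural tension that stays invisible in a flat inventory.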
This is the exact problem that L-0326, schema conflict resolution, addresses. Before you can resolve conflicts between schemas, you need to see them. And you can only see them by mapping the dependency graph that connects your beliefs to their foundations and to each other.
Your schemas do not live alone. They lean on each other, rest on shared foundations, and propagate failures through invisible chains. The map is how you stop being surprised by the cascades.