A team ships a breaking change. Two hundred integrations go dark.
In 2013, a team at a fast-growing SaaS company pushed a new API version to production. The new design was cleaner, more consistent, better documented. It was objectively superior to the version it replaced. Within 48 hours, over 200 customer integrations failed. Support tickets flooded in. Revenue-critical workflows ground to a halt. The new API couldn't handle request formats that the old API had accepted for years. The team had built something better and broken everything in the process.
Research from Carnegie Mellon found that nearly 28% of all API changes break backward compatibility, and that unmanaged breaking changes cost development teams an average of 15 to 20 hours per incident in emergency fixes (Xavier et al., 2017; Theneo, 2026). Organizations that implement proactive compatibility management report 70% fewer update-related incidents. The pattern is consistent: the cost of breaking what already works almost always exceeds the cost of maintaining compatibility.
This isn't just a software problem. It's a universal constraint on schema evolution. When you update any system of understanding — a codebase, a scientific theory, a personal belief — the new version must handle every case the old version handled. If it doesn't, you haven't upgraded. You've regressed.
The correspondence principle: physics solved this first
Niels Bohr formalized this constraint in 1920 as the correspondence principle: any new theory in physics must reduce to the preceding theory under the conditions where that preceding theory was known to be valid. Quantum mechanics doesn't get to ignore Newtonian mechanics. It must reproduce Newton's predictions for large masses and everyday velocities, then extend beyond them to handle phenomena Newton couldn't explain.
Einstein's general relativity exemplifies this perfectly. It doesn't discard Newton's law of universal gravitation — it subsumes it. At low velocities and weak gravitational fields, Einstein's equations reduce exactly to Newton's. The new framework handles everything the old one handled (planetary orbits, falling objects, tidal forces) and then extends to cover what Newton's framework couldn't: the precession of Mercury's perihelion, the bending of light around massive objects, the behavior of GPS satellites where relativistic time dilation matters.
This is what backward compatibility looks like at its most rigorous. The new theory passes a strict test: for every known case where the old theory gave correct results, the new theory gives the same results. Only then does it earn the right to extend into new territory.
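The reduction can be made concrete with one worked expansion. Relativistic energy, expanded in powers of v/c, reproduces Newton's kinetic energy as the leading velocity-dependent term:

```latex
E = \frac{mc^2}{\sqrt{1 - v^2/c^2}}
  = mc^2 + \frac{1}{2}mv^2 + \frac{3}{8}\frac{mv^4}{c^2} + \cdots
```

At everyday speeds the higher-order terms are negligible, so the new theory returns exactly the old theory's answer, ½mv², plus a rest-energy term Newton had no reason to predict. Every old test case passes; new territory is covered.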
Thomas Kuhn, in The Structure of Scientific Revolutions (1962), described how paradigm shifts work in practice. A crisis emerges when the existing paradigm can't solve certain anomalies. A new paradigm arises to resolve those anomalies. But Kuhn noted a key requirement: the replacement paradigm had better solve the majority of the puzzles the old one solved, or it won't be worth adopting. Revolutionary science isn't about starting over. It's about subsuming what came before and handling more.
Postel's Law and the design of durable systems
Jon Postel, one of the architects of the early internet, encoded a related principle in RFC 761 (1980) for the Transmission Control Protocol: "be conservative in what you do, be liberal in what you accept from others." This became known as Postel's Law or the Robustness Principle, and it's one of the reasons the internet scaled from a research network to a global infrastructure.
The principle captures the asymmetry at the heart of backwards compatibility. Your outputs — the schemas you produce, the claims you make, the behaviors you exhibit — should be precise and well-formed. But your inputs — the range of situations, formats, and edge cases you can handle — should be broad, tolerant, and forgiving.
Applied to schema evolution: your new mental model should generate sharper, more precise predictions than the old one (conservative in what you send). But it should also accept and correctly handle every input the old model could process (liberal in what you accept). If your previous worldview could explain situations A through M, and your new worldview handles N through Z but fumbles cases D and G, you've violated Postel's Law. Your schema is less robust, not more.
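The asymmetry is easy to see in code. Here is a minimal sketch of a hypothetical event handler that follows Postel's Law: it tolerates a legacy field name and a legacy string encoding on input, but emits only one canonical form on output. The field names and formats are invented for illustration.

```python
import json

def parse_event_timestamp(payload: dict) -> int:
    """Liberal in what we accept: tolerate the legacy field name ("ts")
    and both string and integer encodings of the value."""
    raw = payload.get("timestamp", payload.get("ts"))
    if raw is None:
        raise ValueError("missing timestamp")
    return int(raw)  # accepts "1700000000" or 1700000000

def emit_event(timestamp: int) -> str:
    """Conservative in what we send: one canonical field name, one type."""
    return json.dumps({"timestamp": int(timestamp)}, sort_keys=True)
```

Old clients sending `{"ts": "1700000000"}` and new clients sending `{"timestamp": 1700000000}` are both handled, yet every output is well-formed and unambiguous. Inputs broad, outputs narrow.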
This principle has critics. Martin Thomson and David Schinazi argued in 2023 that excessive tolerance can entrench flawed behaviors — if you accept malformed inputs, malformed inputs become the de facto standard. That's a fair concern in protocol design. But in cognitive schema evolution, the core logic holds: you don't get to lose capability when you upgrade. A new schema that drops old cases isn't an upgrade. It's a trade.
Piaget's accommodation: how minds actually do this
Jean Piaget described two mechanisms by which cognitive schemas evolve: assimilation and accommodation. Assimilation integrates new information into existing schemas without changing them — you encounter a new dog breed and file it under your existing "dog" concept. Accommodation restructures the schema itself when new information can't fit — you encounter a platypus and have to revise your categories for mammals.
The critical insight is that accommodation doesn't discard the old schema. It reshapes it. After accommodating the platypus, your revised schema for mammals still correctly classifies every animal it classified before. Dogs are still mammals. Cats are still mammals. The schema expanded to include a new case while retaining coverage of all previous cases.
Piaget's framework shows that backwards compatibility isn't an artificial constraint imposed from outside. It's how functional cognitive development actually works. Children who accommodate new information while retaining old competencies develop robust, flexible thinking. Children whose new schemas drop old capabilities (which happens in certain developmental disruptions) show cognitive regression, not growth.
This maps directly to adult schema evolution. When you revise a core belief — about how relationships work, how career success happens, what constitutes good leadership — the revision process must preserve what the old belief got right. The old schema wasn't entirely wrong. It handled real situations and produced real results. Your job isn't to replace it. Your job is to subsume it.
Windows, Java, and the weight of what already works
Microsoft's approach to backwards compatibility in Windows is legendary. Raymond Chen, a developer on the Windows team since 1992 and author of The Old New Thing, documented how Microsoft's testing team maintained compatibility with applications that used undocumented functions, relied on buggy behavior, or violated API contracts in creative ways. The Windows registry contains an entire AppCompatibility section listing applications that receive special treatment — the operating system emulates old bugs so those applications continue to function.
Chen once wrote: "If any application failed to run on Windows 95, I took it as a personal failure." That's the disposition backwards compatibility demands. Not "the old application should have been written better." Not "users should upgrade." The stance is: the existing ecosystem works, and my upgrade must not break it.
Java took a similar approach. For the most part, Java maintains rigorous backward compatibility — code compiled under Java 5 can still run on Java 21. This isn't free. It constrains what the language designers can do. Deprecated features linger. Suboptimal design decisions persist. But the payoff is that large systems can upgrade to newer versions without rewriting their codebases. The ecosystem's trust in the platform depends on this guarantee.
Joel Spolsky captured the stakes in his essay "How Microsoft Lost the API War" (2004): when you break backward compatibility, you're not just breaking code. You're breaking trust. You're telling everyone who built on your platform that their investment doesn't matter. And people remember that.
Your AI tools have this problem right now
If you use AI systems in your workflow — and you likely do — you're living inside a backwards compatibility challenge. OpenAI's model versioning illustrates the tension: when the model slug gpt-4o is updated to point to a newer snapshot, prompts that produced reliable outputs under the old snapshot may behave differently under the new one. The same system message, the same user input, different results.
OpenAI's recommended practice is to pin model versions — use gpt-4o-2024-08-06 instead of the floating gpt-4o alias — and implement evaluations that catch regressions. This is backwards compatibility as an engineering discipline: before you adopt the new model, verify it handles every case the old model handled correctly.
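That discipline fits in a few lines. The sketch below is a generic regression harness, not OpenAI's API: golden cases are prompts the pinned model handled correctly, and `run_model` stands in for whatever function calls the candidate model version.

```python
def check_regressions(run_model, golden_cases):
    """Replay every prompt the pinned model handled correctly and flag
    any case where the candidate model's output diverges.
    run_model: callable prompt -> output for the candidate version.
    golden_cases: list of (prompt, expected_output) pairs."""
    failures = []
    for prompt, expected in golden_cases:
        actual = run_model(prompt)
        if actual != expected:
            failures.append({"prompt": prompt,
                             "expected": expected,
                             "actual": actual})
    return failures  # empty list: every golden case still passes

# In practice run_model would wrap a pinned call, e.g. a function that
# sends the prompt to model="gpt-4o-2024-08-06" and returns the reply text.
```

Only when `check_regressions` returns an empty list does the new snapshot earn the right to replace the pin, which is the correspondence principle restated as a deployment gate.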
The broader pattern applies to any AI-augmented cognitive workflow. If you've built prompts, templates, or processes around a particular AI model's behavior, upgrading that model is a schema evolution event. The new model might be more capable overall, but if it fails on cases the old model handled, you've introduced regressions into your thinking infrastructure.
Azure OpenAI's 2025 shift to v1 APIs — where api-version is no longer a required parameter — represents an attempt to solve this at the platform level: provide continuous access to the latest capabilities while maintaining stability. But the fundamental tension remains. Every upgrade must be tested against what already works.
This is why the backwards compatibility mindset matters for your personal epistemic infrastructure. Your notes, your decision frameworks, your mental models — these are your cognitive API. When you evolve them, run the compatibility check. Does the new version handle everything the old version handled?
The protocol: how to make your schemas backwards compatible
Here's the concrete practice. Before adopting any schema update — any revised belief, any new mental model, any upgraded framework for understanding — run this protocol:
1. Inventory the old schema's successes. List at least five specific situations where your previous schema produced accurate predictions or good decisions. These are your compatibility test cases. Don't focus on where it failed — you already know that, which is why you're updating. Focus on where it worked.
2. Test the new schema against every success case. For each situation you listed, ask: does my new schema also produce an accurate prediction here? If your old belief "most people act in self-interest" correctly predicted a colleague's behavior in five specific situations, your new belief must also explain those five situations — not just the anomaly that triggered the revision.
3. If the new schema drops a case, revise it. A new schema that fails a compatibility test isn't ready for deployment. It needs more work. Often this means making it more nuanced — not "people act in self-interest" and not "people are fundamentally altruistic," but "people act in self-interest under conditions X, and act altruistically under conditions Y," where both the old successes and the new anomaly are covered.
4. Preserve, don't erase. Keep a record of your old schema and why it worked. In software, deprecated APIs remain available during a transition period. In your cognitive system, old models should be archived, not deleted. They contain information about what works under specific conditions — information your new schema should encode, not discard.
5. Deploy incrementally. Don't swap your entire worldview overnight. Test the new schema in low-stakes situations first. Monitor for regressions. Only extend it to high-stakes domains after it's proven backwards compatible in controlled conditions.
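Steps 1 through 3 can be sketched as a compatibility check. Schemas are modeled here as functions from a situation to a prediction, and the test cases are hypothetical; this is a deliberately simplified illustration of the protocol, not a claim about how beliefs are really represented.

```python
def passes_compatibility(new_schema, old_successes):
    """Step 2 of the protocol as a predicate: the candidate schema must
    reproduce every prediction the old schema got right."""
    return all(new_schema(situation) == prediction
               for situation, prediction in old_successes)

# Step 1: inventory the old schema's successes (hypothetical cases where
# "people act in self-interest" predicted correctly).
old_successes = [
    ({"cost_to_help": "high", "visibility": "low"}, "self-interested"),
    ({"cost_to_help": "high", "visibility": "high"}, "self-interested"),
]

def revised_schema(situation):
    # Step 3: a more nuanced replacement -- self-interest when helping is
    # costly, altruism when it is cheap -- so the old successes still pass
    # while the anomaly that triggered the revision is now covered too.
    return ("self-interested" if situation["cost_to_help"] == "high"
            else "altruistic")
```

A revised schema that fails `passes_compatibility` goes back for more work (step 3); it is not yet an upgrade.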
This protocol turns belief revision from an emotional event ("I was wrong, now I see the truth") into an engineering discipline ("I'm shipping an upgrade that must pass all existing tests plus handle new cases"). The first framing invites binary thinking — old belief bad, new belief good. The second framing demands rigor.
The emotional cost of compatibility — and what comes next
Here's the part most people skip. Backwards compatibility is harder than starting fresh. It's harder to build a theory that subsumes Newtonian mechanics than to build one that ignores it. It's harder to write code that handles legacy formats than to design for a clean slate. It's harder to revise a belief while preserving what it got right than to swing from one extreme to another.
This is why schema evolution requires emotional tolerance — which is exactly where L-0310 picks up. The discomfort you feel when holding both old and new models in tension, when you refuse to simplify by discarding what used to work, when you insist on the harder path of subsumption rather than replacement — that discomfort is the cost of rigorous thinking.
Your old schemas weren't failures. They were earlier versions that handled real situations in the real world. Your new schemas don't replace them. They must contain them. Every case the old schema got right is a test the new schema must pass. That's not a burden. That's the standard that separates genuine cognitive growth from intellectual fashion.