Your JavaScript knowledge expires; your calculus does not
Your understanding of the latest front-end framework goes stale in six months. Your knowledge of calculus lasts decades. Your read on team dynamics at work might need revision after a single reorg. Your understanding of how gravity works will serve you until you die.
This is not a curiosity. It is a fundamental structural feature of knowledge that most people ignore entirely. They treat schema maintenance as a single-speed activity — either updating everything compulsively or updating nothing until forced. Both approaches are wrong, and the error is the same: failing to recognize that different domains of knowledge change at fundamentally different rates, and your schema evolution cadence must match those rates.
L-0312 established that anomalies are signals telling you a schema needs updating. But anomalies arrive at different frequencies in different domains. In cybersecurity, anomalies appear weekly. In geometry, they appear across centuries. This lesson is about calibrating your response — building an evolution practice that allocates your finite revision energy in proportion to each domain's actual rate of change.
Pace layering: the architecture of differential change
The most powerful framework for understanding why different domains change at different speeds comes from Stewart Brand. In The Clock of the Long Now (1999), Brand proposed the concept of pace layers — the idea that complex systems maintain stability precisely because their components change at different rates. He identified six layers of civilization, arranged from fastest to slowest: fashion, commerce, infrastructure, governance, culture, and nature.
Fashion changes in weeks. Commerce shifts in months to years. Infrastructure evolves over decades. Governance transforms across generations. Culture moves across centuries. Nature operates on geological timescales. The critical insight is not just that these layers differ in speed, but that the system's health depends on maintaining those differences. As Brand wrote in his 2018 essay for the MIT Press Journal of Design and Science, "The fast layers innovate; the slow layers stabilize. The whole combines learning with continuity."
Brand adapted this from architect Frank Duffy's concept of "shearing layers" in buildings, an idea Brand expanded in How Buildings Learn (1994). Duffy had observed that a building's components age at different rates; Brand extended the observation into six layers, each with its own lifespan: site, structure, skin, services, space plan, and stuff. The site outlasts the structure. The structure outlasts the services. The furniture moves monthly while the foundation stands for centuries. A building that tries to make all its layers change at the same speed is either unstable (constant renovations to the foundation) or stagnant (never rearranging the furniture).
Your cognitive infrastructure works the same way. Your schemas operate at different layers, and each layer has an appropriate pace of change. Schemas about social media algorithms sit at the fashion layer — they must evolve rapidly or become useless. Schemas about human psychology sit at the culture layer — they change, but slowly, across decades of accumulated research. Schemas about mathematical logic sit at the nature layer — they are effectively permanent within a human lifetime. Treating a fashion-layer schema with culture-layer stability means operating on outdated knowledge. Treating a culture-layer schema with fashion-layer volatility means never building the deep, stable understanding that coherent thinking requires.
Cynefin and the complexity dimension
Speed of change is one axis. The type of change is another. Dave Snowden's Cynefin framework, developed in 1999 at IBM Global Services, offers a complementary lens. Cynefin (a Welsh word for "habitat") identifies five domains in which decisions operate: clear, complicated, complex, chaotic, and disorder.
In clear domains, the relationship between cause and effect is obvious. Best practices exist. Your schemas here can be relatively stable because the underlying dynamics are well-understood and predictable — think of schemas about basic accounting principles or physical safety rules. In complicated domains, cause and effect exist but require expert analysis to identify. Your schemas here need periodic revision as expert consensus evolves — think of schemas about tax optimization or medical treatment protocols. In complex domains, cause and effect can only be deduced in retrospect. Your schemas here must evolve continuously because the system you are modeling is itself emergent and unpredictable — think of schemas about organizational culture, market behavior, or parenting strategies. In chaotic domains, there is no discernible relationship between cause and effect. Your schemas here are not just fast-changing; they are provisional by nature — rough heuristics that help you act until the chaos resolves into something more structured. The fifth domain, disorder, is simply the state of not yet knowing which of the other four you are in; the first job there is to find out.
The Cynefin insight for schema evolution is this: the appropriate update cadence is not just about how fast the domain changes but about how knowable the domain is. A complex domain does not just change quickly. It changes unpredictably. You cannot schedule schema updates for complex domains the way you schedule them for clear or complicated ones. Instead, you need continuous sensing — the kind of ongoing anomaly detection that L-0312 described — combined with a willingness to revise models frequently and provisionally. In a complex domain, your schema is always a hypothesis being tested, never a conclusion being preserved.
Quine's web of belief: core versus peripheral change rates
The philosopher W. V. O. Quine provided yet another dimension of this insight. In "Two Dogmas of Empiricism" (1951), Quine argued that our beliefs form an interconnected web where some beliefs sit at the periphery and others sit at the core. Peripheral beliefs — specific empirical claims about the world — are easily revised when evidence challenges them. Core beliefs — fundamental logical and mathematical principles, deep metaphysical commitments — resist revision because altering them would have cascading consequences throughout the entire web.
This is not stubbornness. It is structural necessity. If you changed your belief in basic logical consistency every time you encountered a paradox, your entire knowledge system would collapse. Core beliefs change slowly because they must — they are load-bearing elements of your cognitive architecture, and revising them requires revising everything that depends on them.
Quine's framework maps directly onto schema evolution pace. Your peripheral schemas — specific beliefs about specific situations — should evolve rapidly in response to evidence. Your core schemas — foundational principles about reasoning, ethics, and reality — should evolve slowly and only under significant evidential pressure. This is not because core schemas are sacred or infallible. It is because the cost of revising them is high (every dependent schema must be re-evaluated), so the evidential threshold for revision should be proportionally high.
The practical implication is that evolution pace is partly determined by a schema's position in your dependency graph. A schema that nothing depends on can be revised freely. A schema that hundreds of other schemas depend on requires careful, deliberate, well-evidenced revision. The deeper the schema sits in your web of belief, the slower its appropriate evolution pace — not because deep beliefs are more likely to be true, but because revising them is more consequential.
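As a toy illustration of this point, a web of belief can be modeled as a dependency graph, where counting a schema's transitive dependents gives a rough signal of its revision cost. The schema names and graph below are hypothetical, chosen only to show the shape of the idea:

```python
# Hypothetical dependency graph: each schema maps to the schemas it depends on.
DEPENDS_ON = {
    "logic": [],
    "probability": ["logic"],
    "statistics": ["probability"],
    "market timing": ["statistics"],
    "this week's stock pick": ["market timing"],
}

def dependents(schema: str, graph: dict[str, list[str]]) -> set[str]:
    """All schemas that transitively depend on `schema`."""
    direct = {s for s, deps in graph.items() if schema in deps}
    found = set(direct)
    for d in direct:
        found |= dependents(d, graph)
    return found

# The deeper a schema sits, the more revisions one change cascades into.
for s in DEPENDS_ON:
    print(f"{s!r}: {len(dependents(s, DEPENDS_ON))} dependent schema(s)")
```

Revising "this week's stock pick" touches nothing else; revising "logic" forces a re-evaluation of everything downstream, which is exactly why its evidential threshold should be higher.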
Technology as pace-layer exemplar
The technology domain provides the clearest examples of differential evolution rates because its layers are so visible. Moore's Law — transistor density doubling every two years — has held for over fifty years. But researchers studying 62 different technologies found that while exponential improvement patterns appear across many domains, doubling times range from two years (semiconductors) to hundreds of years (energy technologies), with annual improvement rates spanning 3 to 65 percent. One analysis comparing technology evolution to biological evolution found that genetic complexity doubles approximately every 376 million years — the same exponential pattern, but at a timescale ratio of roughly 188 million to one.
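The conversion between a steady annual improvement rate and a doubling time is ordinary compound growth. A minimal sketch, using the two endpoints of the improvement-rate range quoted above as illustrative inputs:

```python
import math

def doubling_time_years(annual_rate: float) -> float:
    """Years to double at a fractional annual improvement rate.

    From (1 + r)^t = 2, so t = ln(2) / ln(1 + r).
    """
    return math.log(2) / math.log(1 + annual_rate)

# Endpoints of the range cited above: 3% and 65% per year.
print(f"3% per year doubles in ~{doubling_time_years(0.03):.1f} years")
print(f"65% per year doubles in ~{doubling_time_years(0.65):.1f} years")
```

A 65 percent annual rate doubles in under a year and a half; a 3 percent rate takes over two decades — the same exponential law, separated by an order of magnitude in timescale.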
For a knowledge worker, this means your technology schemas operate at wildly different speeds. Your schema about "how to write effective code" evolves slowly — the principles of clarity and modularity have remained stable for decades. Your schema about "which framework to use" evolves fast — the landscape reshuffles every few years. The mistake is letting the fastest-changing layer set the pace for all layers. Developers who chase every new framework neglect stable fundamentals. Developers who only study fundamentals fall behind in practical capability. The solution is maintaining different schemas at different speeds, with the wisdom to tell which is which.
Organizational planning cadences: a structural parallel
Organizations have long recognized this principle and built explicit structures around it. Strategic planning operates on a 3-to-10-year horizon, reviewed annually. Tactical planning translates strategy into 6-to-24-month programs, reviewed quarterly. Operational planning manages day-to-day execution, reviewed weekly or daily. These intervals are not arbitrary. They reflect the actual rate of change at each level. An organization that reviewed its mission weekly would suffer from strategic whiplash. One that reviewed its operational metrics annually would be flying blind.
Your personal schema infrastructure benefits from the same tiered approach. Operational schemas (daily workflow, tool selection) deserve weekly or monthly review. Tactical schemas (career strategy, skill development direction) deserve quarterly review. Strategic schemas (core values, life purpose) deserve annual review at most, and only with significant deliberation.
AI and the Third Brain: retraining cadences as domain signatures
Machine learning systems make the domain-dependent nature of evolution pace computationally explicit. Different AI applications require fundamentally different retraining frequencies, and the differences map directly to the pace-layering principle.
Fraud detection models require near-continuous retraining. Attack vectors shift constantly, and a model trained on last month's fraud patterns will miss this month's novel approaches. Practitioners in the field describe continuous feedback loops and automated retraining pipelines designed to keep models current with emerging threats. The domain's intrinsic rate of change demands it.
Recommendation systems operate at a medium cadence. User preferences and purchasing patterns shift over weeks to months, and major platforms implement regular retraining cycles — often daily or weekly — to keep recommendations relevant. But the underlying architecture of how recommendations work changes much more slowly.
Language models, by contrast, can operate with relatively infrequent full retraining — major updates happen annually or less — because the structure of language itself changes slowly. Innovations like retrieval-augmented generation (RAG) allow language models to incorporate new information without full retraining, effectively separating the fast-changing layer (specific facts) from the slow-changing layer (language understanding).
The concept of model drift makes this explicit. Data drift occurs when the statistical distribution of input data changes over time. Concept drift occurs when the fundamental relationship between inputs and outputs changes. Both are domain-dependent: a model predicting stock prices faces rapid concept drift as market dynamics shift, while a model classifying geological formations faces virtually none. The retraining cadence must match the drift rate, and the drift rate is a property of the domain, not of the model.
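One common way practitioners quantify data drift is the population stability index (PSI), which bins a baseline sample and a new sample and compares the bin proportions. Here is a minimal sketch; the convention that a PSI above roughly 0.2 warrants investigation is a widely used rule of thumb, not a universal standard:

```python
import math
from collections import Counter

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Population stability index between a baseline sample and a new one.

    Bins both samples on the baseline's range and compares bin proportions;
    a higher PSI means the input distribution has drifted further.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(sample: list[float]) -> list[float]:
        counts = Counter(
            max(min(int((x - lo) / width), bins - 1), 0) for x in sample
        )
        # A small floor avoids log(0) for empty bins.
        return [max(counts.get(b, 0) / len(sample), 1e-6) for b in range(bins)]

    p, q = proportions(expected), proportions(observed)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [i / 100 for i in range(100)]       # uniform on [0, 1)
shifted = [0.5 + i / 200 for i in range(100)]  # mass pushed into the upper half
print(f"no drift: PSI = {psi(baseline, baseline):.3f}")
print(f"drifted:  PSI = {psi(baseline, shifted):.3f}")
```

The same logic applies outside machine learning: the anomaly log that L-0312 recommends is, in effect, a manual drift monitor for your personal schemas.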
For your Third Brain — the AI-augmented layer of your knowledge infrastructure — this means different tools and knowledge bases need different refresh rates. Your AI-assisted market research pipeline might need weekly updates. Your AI-supported writing tools might need monthly prompt refinements. Your AI-organized personal knowledge base might need quarterly restructuring. Applying a single maintenance cadence to all AI-augmented schemas is the same structural error as applying a single cadence to all personal schemas: it mismatches effort to actual need.
Protocol: the domain-speed audit
This protocol gives you a systematic method for assigning appropriate evolution cadences to your schemas based on domain analysis.
Step 1: Inventory your active schemas. List fifteen to twenty mental models, beliefs, and frameworks you rely on regularly, spanning work, relationships, health, finances, technology, and ethics.
Step 2: Identify each schema's domain layer. Place each schema on the pace-layer spectrum: fashion (trends, social dynamics), commerce (career tactics, market knowledge), infrastructure (industry fundamentals, methodologies), governance (organizational principles, ethical frameworks), culture (worldview, deep values), or nature (logical principles, physical laws).
Step 3: Assess the Cynefin domain. Is the domain clear, complicated, complex, or chaotic? Schemas in complex and chaotic domains need faster cadences than their pace layer alone suggests, because unpredictability generates more frequent anomalies.
Step 4: Check dependency depth. How many other schemas depend on this one? Peripheral schemas can be revised freely. High-dependency schemas require slower, more deliberate revision because updating them has cascading consequences.
Step 5: Assign a review cadence. Based on pace layer, Cynefin domain, and dependency depth, assign each schema to a tier:
- Weekly to monthly: Fashion-layer and commerce-layer schemas in complex or chaotic domains with low dependency depth. Examples: current market conditions, team dynamics, technology tool selection.
- Quarterly: Commerce-layer and infrastructure-layer schemas in complicated domains with moderate dependency depth. Examples: career strategy, industry trends, relationship investment priorities.
- Annually: Infrastructure-layer and governance-layer schemas in clear or complicated domains with high dependency depth. Examples: professional methodology, ethical principles, political philosophy.
- On anomaly only: Culture-layer and nature-layer schemas with very high dependency depth. Examples: core logical frameworks, fundamental values, deep metaphysical commitments. These are revised only when persistent, well-evidenced anomalies (L-0312) demand reconsideration.
Step 6: Schedule the reviews. Put the cadences on your actual calendar. Fast-domain schemas get a monthly review trigger. Medium-domain schemas get a quarterly checkpoint. Slow-domain schemas get an annual audit. Foundational schemas get reviewed only when you explicitly flag an anomaly. The point is not to create bureaucratic overhead but to ensure that your schema-maintenance energy flows to where reality is actually changing.
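The protocol above can be sketched as a small cadence-assignment function. The layer orderings, dependency thresholds, and Cynefin adjustment below are one possible encoding of these heuristics, not fixed rules, and the example schemas are illustrative:

```python
from dataclasses import dataclass

# Brand's pace layers, ordered fastest to slowest.
PACE_LAYERS = ["fashion", "commerce", "infrastructure",
               "governance", "culture", "nature"]

@dataclass
class Schema:
    name: str
    pace_layer: str   # one of PACE_LAYERS (Step 2)
    cynefin: str      # "clear" | "complicated" | "complex" | "chaotic" (Step 3)
    dependents: int   # how many other schemas depend on this one (Step 4)

def review_cadence(s: Schema) -> str:
    """Assign a review tier (Step 5) from pace layer, Cynefin domain, depth."""
    layer = PACE_LAYERS.index(s.pace_layer)
    if layer >= 4 and s.dependents > 20:        # culture/nature, load-bearing
        return "on anomaly only"
    if s.cynefin in ("complex", "chaotic"):     # unpredictability -> faster
        layer = max(layer - 1, 0)
    if layer <= 1 and s.dependents <= 5:
        return "weekly to monthly"
    if layer <= 2 and s.dependents <= 20:
        return "quarterly"
    return "annually"

inventory = [
    Schema("team dynamics", "fashion", "complex", 2),
    Schema("career strategy", "commerce", "complicated", 8),
    Schema("professional methodology", "infrastructure", "complicated", 25),
    Schema("core logical frameworks", "nature", "clear", 100),
]
for s in inventory:
    print(f"{s.name}: {review_cadence(s)}")
```

The point of writing it down this explicitly is not automation but honesty: if you cannot state which tier a schema belongs to, you have not yet done Steps 2 through 4 for it.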
Toward community schemas
You now have a framework for calibrating your personal schema evolution pace to domain realities. But there is a dimension this lesson has not yet addressed: what happens when schemas are shared? When a mental model is not just yours but belongs to a team, a community, or a culture?
Shared schemas introduce a new variable into evolution pace — coordination cost. When you revise a personal schema, you need only update your own thinking. When you revise a shared schema, every person who holds that schema must also update, and the coordination required to achieve that collective revision creates structural drag. This is why cultural norms change more slowly than individual beliefs, why organizational strategy lags behind market reality, and why scientific paradigms persist long after anomalies have accumulated.
L-0314 takes up this challenge directly. Community schemas evolve slowly not because communities are stupid but because the coordination costs of collective revision are genuinely high. Understanding why — and what that means for your approach to shared knowledge — is the next step in building a mature schema evolution practice.
Sources
- Brand, S. (1999). The Clock of the Long Now: Time and Responsibility. Basic Books.
- Brand, S. (2018). Pace layering: How complex systems learn and keep learning. Journal of Design and Science, MIT Press.
- Brand, S. (1994). How Buildings Learn: What Happens After They're Built. Viking Press.
- Snowden, D. J., & Boone, M. E. (2007). A leader's framework for decision making. Harvard Business Review, 85(11), 68-76.
- Quine, W. V. O. (1951). Two dogmas of empiricism. The Philosophical Review, 60(1), 20-43.
- Quine, W. V. O., & Ullian, J. S. (1978). The Web of Belief (2nd ed.). McGraw-Hill.
- Nagy, B., et al. (2013). Statistical basis for predicting technological progress. PLoS ONE, 8(2), e52669.
- Sharov, A. A., & Gordon, R. (2013). Life before Earth. arXiv:1304.3381. Summarized as "Moore's Law and the origin of life," MIT Technology Review.
- Lumenova AI. (2024). Model drift: Detecting, preventing and managing model drift.
- Encord. (2024). What is model drift? Best practices for dealing with model drift.