The Axiom-Principle-Rule Framework: Codifying How You Think for AI Agents
A composable three-tier decision hierarchy — axiom, principle, rule — that codifies your judgment into structured context AI agents can traverse. Distilled from real PO practice, not academic theory.
You've got Claude Code running. Your CLAUDE.md gives the agent your product context, your standards, your knowledge vault. The first workflow felt like a breakthrough — the agent referenced your decisions, applied your standards, caught contradictions you'd missed. Context engineering works.
Then last Tuesday, the agent recommended prioritizing a mobile feature. You'd never prioritize mobile right now — it contradicts three months of scope decisions. But the agent doesn't know that. It knows you deferred mobile in Q3 (it found the decision note). It doesn't know you'd ALWAYS deprioritize mobile at this stage because you believe in constraining scope to the core platform until product-market fit is proven. The agent knows what you decided. It doesn't know how you think.
Your decision framework — the foundational beliefs that guide every choice, the principles you apply instinctively, the concrete rules you enforce — lives in your head. When you challenge a feature's scope, you're drawing on years of distilled experience from Scrum, Lean, Theory of Constraints, and hard lessons. The agent can't access what you haven't codified.
This article is about codifying it. A composable three-tier hierarchy — axiom, principle, rule — that structures your judgment into context agents can traverse. Not academic decision theory. Not another framework to learn. A method for extracting the decision logic you already use and making it available to every agent interaction.
Why Product Management Axioms Matter More Than You Think
The agent isn't wrong. It's uninformed. There's a difference.
When you give an agent your product context and your standards, you're giving it knowledge — facts about your product, constraints it should respect, norms it should follow. Knowledge answers "what do I know?" It doesn't answer "how do I weigh what I know?"
That weighing is judgment. And without codified judgment, agents make reasonable but inconsistent decisions. Same agent, same context, same type of task — different recommendations on different days. This isn't randomness. It's the absence of priority. Without a decision hierarchy, every piece of context has equal weight. The agent can't distinguish between a foundational belief you'd never compromise and a tactical preference you'd trade away in the right circumstances.
Ad-hoc instructions make it worse. "Keep scope tight" on Monday. "Be thorough with edge cases" on Thursday. In your head, these aren't contradictory — you know scope-intolerance takes priority until the core feature is validated, then thoroughness kicks in. But the agent sees two equal instructions and tries to satisfy both simultaneously. As one practitioner put it: "50 rules for code editing with 50 rules for searching — the conflicts are insane." The problem isn't the number of rules. It's the absence of hierarchy to resolve them.
Goldratt's insight from Theory of Constraints applies directly: a few constraints determine system behavior. In manufacturing, finding the bottleneck lets you optimize the entire factory by focusing on one constraint. In decision-making, a few foundational truths — axioms — determine how you weigh every other consideration. Codify those few truths, and the rest becomes navigable. The agent doesn't need a hundred rules. It needs three axioms, the principles derived from them, and the rules that implement the principles. That hierarchy gives every instruction a priority, and every conflict a resolution path.
Articles 1 through 5 built the knowledge system: a structured vault, named principles, the shift from prompts to context, and a working Claude Code setup. But knowledge without judgment is like data without analysis. The agent needs to know not just what you know, but how you think about what you know. That's what the axiom-principle-rule framework provides.
The Three-Tier Decision Framework for AI Agents
The hierarchy has three layers, each with a distinct role. The distinction matters — collapsing them into a flat list of "rules" is exactly the ad-hoc pattern that creates conflicts.
Axioms are immutable truths. Foundational beliefs that don't change with context, project, stakeholder, or timeline. "Scope intolerance" means you always challenge scope — full stop. Not "challenge scope when you have time" or "challenge scope on important features." Always. Axioms are rare: three to seven for most practitioners. If you have twenty axioms, most of them are actually principles. The test: would you apply this belief even if the CEO asked you not to? If yes, it's an axiom. If it depends, it's a principle.
Principles are directional guides derived from axioms. They tell you how to apply axiom-level thinking in practice. "Target implementers" is a principle derived from a teaching axiom — it means you prioritize content and tools that practitioners can use immediately, not theoretical frameworks they need to interpret. Principles are more numerous than axioms (ten to twenty) but still limited. Kent Beck's distinction between principles and practices in software design follows the same pattern: principles guide behavior; practices implement it. Our hierarchy adds one more layer.
Rules are enforceable constraints derived from principles. Concrete, specific, testable. "No story enters sprint without acceptance criteria" is a rule. It derives from the target-implementers principle (practitioners need clear criteria), which derives from the scope-intolerance axiom (unconstrained work expands). Rules are the most numerous layer and the most context-specific — you might have different rules for different projects, but the axioms and principles stay the same.
The composable decision framework resolves conflicts by priority: when rules conflict, the principle decides. When principles conflict, the axiom decides. Axioms don't conflict — if they do, one of them isn't actually an axiom. This hierarchy means every instruction the agent receives has a known priority level. "Be thorough with edge cases" (a rule) never overrides "constrain scope to the core platform" (an axiom-derived principle). The agent doesn't need you to resolve the tension — the hierarchy resolves it.
Here's what this looks like in a real vault. From my decision framework:
# vault/axiom/scope-intolerance.md
---
type: axiom
status: active
belief: 'Unconstrained scope is the primary failure mode of product work.
  Challenge scope first, always.'
derived_principles:
- target-implementers
- constraint-first-design
---
# vault/principle/target-implementers.md
---
type: principle
parent_axiom: scope-intolerance
status: active
guide: "Prioritize practitioners who will use this immediately.
If it requires interpretation before use, it's not ready."
derived_rules:
- stories-need-acceptance-criteria
- no-research-spikes-without-timebox
---
# vault/rule/stories-need-acceptance-criteria.md
---
type: rule
parent_principle: target-implementers
status: active
constraint: 'No user story enters sprint planning without
  acceptance criteria and a scope boundary.'
enforcement: hard
---
Three files. Three layers. The agent reads the rule, traces it to the principle, traces the principle to the axiom. When it encounters a novel situation the rule doesn't cover, it falls back to the principle. When the principle is ambiguous, it falls back to the axiom. This is graceful degradation by design — the system handles edge cases instead of failing silently.
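If you want to see that traversal outside the agent, here's a minimal sketch — Python with PyYAML, assuming the vault layout and frontmatter fields from the three files above. The file paths and function names are illustrative, not part of the framework:

# trace_hierarchy.py — illustrative sketch, not a required tool
from pathlib import Path
import yaml  # PyYAML

VAULT = Path("vault")

def load_note(layer: str, name: str) -> dict:
    # Each note is a Markdown file whose YAML frontmatter sits between
    # the first two '---' markers, as in the examples above.
    text = (VAULT / layer / f"{name}.md").read_text()
    _, frontmatter, *_ = text.split("---", 2)
    return yaml.safe_load(frontmatter)

def trace_to_axiom(rule_name: str) -> list:
    # Walk rule -> principle -> axiom via the parent_* fields.
    rule = load_note("rule", rule_name)
    principle = load_note("principle", rule["parent_principle"])
    axiom = load_note("axiom", principle["parent_axiom"])
    return [rule, principle, axiom]

for note in trace_to_axiom("stories-need-acceptance-criteria"):
    print(note["type"], "->",
          note.get("constraint") or note.get("guide") or note.get("belief"))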
Framework Distillation: Building Your Hierarchy
You don't need to invent axioms from scratch. You already have them — buried inside the frameworks you use every day. The method is extraction, not creation.
Step 1: List your active frameworks. Not the ones you've read about — the ones you actually use. For most POs, that's some combination of Scrum, Lean, Kanban, and whatever hard lessons your career has taught you. Be honest: if you don't practice it, it's not an active framework.
Step 2: For each framework, ask one question. "What foundational truth does this framework assume that I've validated through my own experience?" The answer is a candidate axiom. Not the framework's practices (those are rules). Not its recommendations (those are principles). The foundational truth underneath.
Scrum assumes "work in small batches with frequent feedback." If you believe this regardless of project type, team size, or domain — that's an axiom for you. If it depends on the situation, it's a principle. "Sprints are two weeks" is neither — it's a rule, context-specific and changeable.
Lean assumes "eliminate waste." If you'd apply this in any work context — axiom candidate. "Map the value stream before optimizing" — that's a principle (how to apply the axiom). "Backlog items older than three months get auto-archived" — that's a rule (concrete, testable, context-specific).
Theory of Constraints assumes "the constraint determines throughput." Goldratt's core insight. If you believe this about every system you work with — axiom. "Identify the bottleneck before adding capacity" — principle. "No WIP above five items per person" — rule.
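Captured as a vault note, that TOC-derived principle might look like the sketch below. The source_framework field is an addition for provenance — an assumption, not part of the format shown earlier:

# vault/principle/constraint-first-design.md
---
type: principle
parent_axiom: scope-intolerance
status: active
source_framework: theory-of-constraints
guide: 'Identify the bottleneck before adding capacity.
  Optimizing a non-constraint changes nothing.'
derived_rules:
  - wip-limit-five-per-person
---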
Step 3: Test and assign. For each candidate, apply three filters:
- Immutable? Would you hold this belief regardless of context? → Axiom.
- Directional? Does it guide behavior but allow context-dependent application? → Principle.
- Testable? Can you check compliance objectively? → Rule.
Most candidates land at the principle or rule level. That's correct. Axioms are rare. If your first pass produces more than seven axioms, re-examine — some are likely principles wearing axiom clothing.
The pruning discipline. Not every framework idea earns a place. Start with three axioms. Nassim Taleb's insight about antifragile systems supports this — robustness comes from constraint, not complexity. The fewer the axioms, the more robust the system. Add a fourth only when you discover a foundational truth that isn't covered by the existing three.
Knowledge gardening applies to the framework itself. Axioms are nearly permanent — revisit annually. Principles evolve with experience — review quarterly. Rules are the most volatile — add, modify, retire as context changes. This mirrors the knowledge gardening principle applied to your decision system rather than your knowledge vault.
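If you'd rather not keep that cadence in your head, one option is to record it in the frontmatter itself — hypothetical fields, not something the agent needs in order to traverse the hierarchy:

# Hypothetical gardening fields; names and values are illustrative
---
type: principle
parent_axiom: scope-intolerance
status: active
review_cadence: quarterly   # axioms: annually, principles: quarterly, rules: as needed
last_reviewed: 2025-01
---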
How Agents Traverse Your Codified Decision Framework
The hierarchy becomes useful when agents can read it. Typed frontmatter — the type:, parent_*, and derived_* fields shown earlier — makes the hierarchy machine-readable. The agent doesn't just see a list of beliefs. It sees a traversable graph with explicit relationships between layers.
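Because the relationships are explicit, you can also lint the graph before any agent reads it. A minimal sketch — Python with PyYAML, same assumptions about the vault layout as above — that catches orphaned notes:

from pathlib import Path
import yaml  # PyYAML

def load_layer(layer: str) -> dict:
    # Read the frontmatter of every note in vault/<layer>/, keyed by filename.
    notes = {}
    for path in Path("vault", layer).glob("*.md"):
        _, frontmatter, *_ = path.read_text().split("---", 2)
        notes[path.stem] = yaml.safe_load(frontmatter)
    return notes

axioms, principles, rules = (load_layer(l) for l in ("axiom", "principle", "rule"))

# Every principle must trace to a real axiom; every rule to a real principle.
for name, note in principles.items():
    assert note.get("parent_axiom") in axioms, f"orphan principle: {name}"
for name, note in rules.items():
    assert note.get("parent_principle") in principles, f"orphan rule: {name}"
print("hierarchy is consistent")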
Your CLAUDE.md already points to your vault. Add the decision hierarchy: "For decision frameworks, see vault/axiom/, vault/principle/, vault/rule/." The agent reads the hierarchy when making decisions, the same way it reads your product context for grounding.
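What that section might look like — a sketch; CLAUDE.md is free-form Markdown, so adapt the wording to your own setup:

## Decision framework
- Axioms (immutable beliefs): vault/axiom/
- Principles (directional, each linked to a parent_axiom): vault/principle/
- Rules (enforceable, each linked to a parent_principle): vault/rule/
- When rules conflict, defer to the parent principle; when principles
  conflict, defer to the axiom.
- If no rule, principle, or axiom clearly applies, flag the decision for
  human review instead of guessing.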
The practical difference shows up immediately. Without the hierarchy, you ask the agent to refine a user story and it gives reasonable but generic feedback. With the hierarchy, the agent traces from your scope-intolerance axiom through target-implementers to the acceptance criteria rule and challenges the story at each level: "This story doesn't have a scope boundary (rule violation). The acceptance criteria aren't actionable by a practitioner (principle misalignment). And the feature itself expands scope beyond the core platform (axiom challenge)." Three layers of challenge, ordered by priority, from one instruction.
Here's what you may not have noticed: you've already been using this hierarchy. The discrimination test from our guide to context engineering in practice — "Is the task repeatable? Does it build on prior work?" — is a rule. It derives from the principle "persistent context compounds," which derives from a context engineering axiom. "Less is more" sits at the principle layer of this framework. The entire Foundation and Augmentation track is structured as axioms (context engineering as a discipline), principles (how to practice it), and rules (specific techniques). You've been walking through this hierarchy for five articles.
Graceful degradation is the final design property. When a rule doesn't cover the situation, the agent checks the principle. When the principle is ambiguous, it checks the axiom. When even the axiom doesn't clearly apply — the agent flags the decision for human judgment rather than guessing. This fallback pattern means the system handles novel situations predictably instead of producing confident-but-wrong output.
Codify Your Decision Making This Week
Can you articulate your decision framework as axioms, principles, and rules — and show how an agent uses each layer? If your agent's recommendations now align with your judgment — not because you told it what to recommend, but because it traversed your decision hierarchy and arrived at the same conclusion you would — the codification is working.
Here's this week's exercise:
- List three frameworks you actually use. Not aspirationally — operationally. What guides your daily decisions?
- Extract one axiom from each. Ask: "What foundational truth does this framework assume that I've validated?" Test: is it immutable, directional, or testable? Assign the layer.
- Write three vault notes. One axiom, one principle derived from it, one rule derived from the principle. Use the typed frontmatter format shown above: type:, status:, and the parent_*/derived_* links between layers.
- Update your CLAUDE.md. Add pointers to vault/axiom/, vault/principle/, vault/rule/.
- Test it. Run a PO task and compare the agent's output with and without the decision hierarchy. The difference is judgment.
You've codified how you think. One agent uses your framework consistently — applying your judgment, not just your knowledge. But what happens when you need multiple agents working on different parts of a problem? When one agent refines stories while another reviews architecture while a third prepares stakeholder updates? Coordination — making sure agents share context, don't contradict each other, and produce coherent output — is an architecture problem. And the axiom-principle-rule hierarchy you just built is the shared decision brain that makes coordination possible.
Related Reading
Context Engineering in Practice: How the Shift from Prompting Actually Happens
The practitioner transformation from prompt-first to context-first AI work — with real before/after artifacts, four progressive stages, and a discrimination test for knowing when each approach is right.
Your First Claude Code Workflow: From Context Engineering to Daily Practice
The PO-specific guide to setting up Claude Code with CLAUDE.md as the interface to your knowledge vault — not a developer tutorial, but your first real workflow where context engineering becomes daily muscle memory.
Ready to accelerate?
Book a strategy call to discuss how these patterns apply to your product team.