Building a Second Brain for AI Agents
How to architect an Obsidian vault with typed frontmatter so AI agents can reason with your knowledge, not just search it.
You have a knowledge system. Maybe it's an Obsidian vault, a Notion workspace, or a collection of markdown files you've curated over years. It works for you — you know where things are, you can find what you need, and it reflects how you think.
Then you try to use it with an AI agent. You copy notes into ChatGPT. You paste context into Claude. Maybe you set up a RAG plugin that embeds your notes into a vector database. And it... kind of works. The AI can find relevant snippets. But it doesn't get your system. It retrieves information without understanding how your knowledge connects, what you've decided and why, or how you actually make decisions.
The problem isn't retrieval. It's structure.
Your notes are organized for human browsing — folders, tags, search. AI agents don't browse. They need to traverse a decision framework: what do you believe, how do you operate, what have you decided, and why. That requires a different kind of architecture — one designed for agent consumption from the ground up.
What follows is the system I use daily: an Obsidian vault with typed frontmatter that AI agents can traverse like a decision framework. I'll show you the architecture, the schema, and how to start building your own in under an hour.
Why Structure Beats Retrieval for AI Agents
Most approaches to "AI + personal knowledge" fall into the same category: retrieval. RAG pipelines embed your notes into vectors, then find the chunks most similar to a query. Fine-tuning bakes your writing style into a model. Both treat your knowledge as something to search through.
But here's the distinction that changes everything: RAG answers "what do I know about X?" Structured knowledge answers "what should I do about X?"
Consider a concrete example. You have a belief that shapes every technical decision you make: "Working systems beat beautiful architecture." In a traditional note-taking system, that belief lives in a document somewhere — maybe a journal entry, maybe a project retrospective. A RAG system might surface it when you ask about architecture trade-offs. Might not. It depends on embedding similarity, chunk boundaries, and query phrasing.
In a structured knowledge system, that belief is typed as an axiom — a foundational truth with explicit metadata:
```yaml
---
type: axiom
title: Working Systems Over Beautiful Architecture
status: active
domain: engineering
---
```
An agent reading this doesn't need to calculate vector similarity. It knows this is a foundational belief (not an opinion, not a note, not a meeting summary). It knows the belief is active (not deprecated, not speculative). And when it encounters a decision about whether to refactor a working system, it can traverse the hierarchy: this axiom exists, principles derive from it, rules constrain it. The agent reasons with your knowledge instead of searching through it.
This matters because AI agents — the kind that take actions, coordinate with other agents, and operate with autonomy — need more than information retrieval. They need context about how you think. Anthropic's research on context engineering makes the case directly: the quality of an agent's output is bounded by the quality of its context. Your knowledge vault IS your context engineering layer — the place where you shape what the agent knows before it ever starts working.
Tiago Forte's "Building a Second Brain" popularized personal knowledge management. His PARA method (Projects, Areas, Resources, Archives) is elegant for human retrieval — organize by actionability, find things when you need them. But PARA tells an agent nothing about the type of knowledge it's reading. A project note and a foundational belief look the same. A decision rationale and a meeting summary are indistinguishable.
Agent-native knowledge management solves this by making the knowledge type explicit. Instead of organizing by project or area, you organize by what kind of knowledge this is: axioms (what you believe), principles (how you behave), decisions (what you've chosen and why), rules (what you must or must not do). The structure itself becomes the context.
Vector databases are complementary here, not competitive. They handle similarity — "find me things related to X." Typed frontmatter handles hierarchy — "what do we believe about X, what principles follow, and what have we decided?" Both are useful. But if you had to pick one for an AI agent that needs to make decisions on your behalf, structure wins every time.
The Typed Frontmatter Schema
The bridge between your knowledge and an AI agent is frontmatter — the YAML metadata block at the top of every markdown file. If you've used Obsidian or any static site generator, you've seen it. But most people use frontmatter for display metadata: title, date, tags. For agent consumption, frontmatter becomes an API layer.
Every note in an agent-native vault gets a type field. Types form a hierarchy that mirrors how humans actually make decisions, made explicit for machines:
- Axiom — A foundational truth you accept. ("My edge is system architecture, not machine learning.")
- Principle — A behavior derived from axioms. ("Target people who have the capability and time to implement.")
- Decision — A choice made at a point in time with rationale. ("Defer paid SEO tooling until 3 articles are published.")
- Rule — A hard constraint derived from principles or decisions. ("Every document must contain exactly one idea.")
This hierarchy isn't arbitrary taxonomy. It's how you already reason — you just haven't made it explicit. When you decide to reject a feature request, you're implicitly traversing: "I believe in scope discipline (axiom) → I should push back on scope creep (principle) → I decided to freeze the feature set for this sprint (decision) → No new features until the freeze lifts (rule)." Making this chain explicit means an agent can traverse it too.
Here's what this looks like in practice. A real axiom from a working vault:
```yaml
---
type: axiom
title: Architecture Not ML
domain: identity
statement: My edge is system architecture, not machine learning
confidence: proven
---
```
The body of the note explains why this is axiomatic — what it means, what it implies, what it does NOT mean. Then a principle derived from it:
```yaml
---
type: principle
title: Target Implementers
domain: selection
statement: Specifically target people who have the capability and time to implement
derived_from:
  - axiom/identity/teach-to-fish
  - axiom/fit/teaching-requires-implementers
---
```
Notice the derived_from field. This is the relationship that lets agents traverse up the hierarchy — "why does this principle exist?" — and down — "what principles derive from this axiom?" The agent doesn't need to infer relationships from document content. The relationships are structured data.
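To make the traversal concrete, here is a minimal sketch in Python. It is an illustration rather than part of the vault spec: it assumes frontmatter sits in a leading `---` block, that `derived_from` entries are vault-relative paths without the `.md` extension, and the note ids in the example queries (such as `principle/selection/target-implementers`) are hypothetical file names.

```python
from pathlib import Path
import yaml  # PyYAML

def load_frontmatter(path: Path) -> dict:
    """Return the YAML frontmatter of a note, or {} if it has none."""
    text = path.read_text(encoding="utf-8")
    if not text.startswith("---"):
        return {}
    _, block, _ = text.split("---", 2)   # leading frontmatter block only
    return yaml.safe_load(block) or {}

vault = Path("vault")
notes = {
    p.relative_to(vault).with_suffix("").as_posix(): load_frontmatter(p)
    for p in vault.rglob("*.md")
}

def parents(note_id: str) -> list:
    """Traverse up: why does this note exist?"""
    return notes.get(note_id, {}).get("derived_from", [])

def children(note_id: str) -> list:
    """Traverse down: what derives from this note?"""
    return [nid for nid, meta in notes.items()
            if note_id in meta.get("derived_from", [])]

print(parents("principle/selection/target-implementers"))
print(children("axiom/identity/teach-to-fish"))
```

Both directions of the hierarchy fall out of a dictionary lookup; no embeddings or similarity scores are involved.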
Think of this as designing a REST API for your knowledge. Just as an API structures data for programmatic consumption — with endpoints, schemas, and typed responses — typed frontmatter structures knowledge for agent consumption. The agent knows what it's reading, where it sits in the hierarchy, and how it connects to other knowledge.
You don't need a complex schema to start. A starter set of three types (axiom, principle, decision), three required fields (type, status, title), and one relationship field (derived_from) gets you 80% of the value. The schema grows as your system grows.
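A schema this small is also easy to keep honest. The sketch below lints the starter fields named above; it uses the same frontmatter-parsing convention as the earlier traversal example and is a convenience under those assumptions, not a required tool.

```python
from pathlib import Path
import yaml  # PyYAML

ALLOWED_TYPES = {"axiom", "principle", "decision"}
REQUIRED_FIELDS = {"type", "status", "title"}

def validate(path: Path) -> list:
    """Return schema problems for one note; an empty list means it passes."""
    text = path.read_text(encoding="utf-8")
    if not text.startswith("---"):
        return ["missing frontmatter block"]
    meta = yaml.safe_load(text.split("---", 2)[1]) or {}
    problems = [f"missing field: {field}" for field in REQUIRED_FIELDS - set(meta)]
    if meta.get("type") not in ALLOWED_TYPES:
        problems.append(f"unexpected type: {meta.get('type')!r}")
    if "derived_from" in meta and not isinstance(meta["derived_from"], list):
        problems.append("derived_from should be a list of note paths")
    return problems

for note in Path("vault").rglob("*.md"):
    if note.name == "CLAUDE.md":       # the interface file has no frontmatter
        continue
    for problem in validate(note):
        print(f"{note}: {problem}")
```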
Agent-Native Vault Architecture
Frontmatter makes individual notes agent-consumable. Vault architecture makes the system agent-navigable.
The key design principle: organize by type, not by topic. Most knowledge systems use topic-based or project-based folders: marketing/, engineering/, q1-planning/. These make sense for humans who think in domains. But an agent scanning your vault needs to know what kind of knowledge it's reading before it knows how to use it.
A type-first folder structure:
```text
vault/
├── axiom/           # Foundational truths
│   ├── identity/    # Who you are
│   ├── market/      # What's true about your market
│   └── constraint/  # Non-negotiable boundaries
├── principle/       # Derived behaviors
│   └── selection/   # How you choose customers/work
├── decision/        # Choices with rationale
├── rule/            # Hard constraints
├── persona/         # User archetypes (JTBD-based)
└── CLAUDE.md        # Agent interface file
```
The folder IS the type. An agent listing the contents of axiom/ instantly knows every file is a foundational belief. It doesn't need to parse content or check metadata — the location tells it what it's reading. Subfolders provide domain context without breaking the type-first pattern.
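The convention is also cheap to enforce. Here is a short consistency check, a sketch that assumes the top-level folder name matches the singular type value used in frontmatter:

```python
from pathlib import Path
import yaml  # PyYAML

vault = Path("vault")
for note in vault.rglob("*.md"):
    rel = note.relative_to(vault)
    if len(rel.parts) < 2:          # root-level files like CLAUDE.md have no type folder
        continue
    folder_type = rel.parts[0]      # axiom / principle / decision / rule / persona
    text = note.read_text(encoding="utf-8")
    meta = yaml.safe_load(text.split("---", 2)[1]) if text.startswith("---") else {}
    declared = (meta or {}).get("type")
    if declared != folder_type:
        print(f"{rel}: folder says '{folder_type}', frontmatter says '{declared}'")
```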
The most underrated file in this architecture is CLAUDE.md — the agent interface document. If typed frontmatter is the API schema, CLAUDE.md is the API documentation. It tells agents:
- What the type hierarchy means and how to traverse it
- What frontmatter fields exist and what they represent
- How to navigate from high-level axioms to specific rules
- What quality standards apply (one idea per file, prose over properties)
Here's a snippet from a working CLAUDE.md:
```markdown
## Type Hierarchy

Work from highest leverage to downstream:

1. **Axiom** → Foundational truths we accept
2. **Principle** → Behaviors derived from axioms
3. **Decision** → Choices made with rationale
4. **Rule** → Constraints derived from decisions/principles

## When Working in This Vault

- Every document has a `type` field in frontmatter
- Links use [[wikilinks]] with contextual prose
- One idea per file — atomic notes only
```
This file is what turns a collection of typed markdown files into a system an agent can navigate. Without it, the agent has structure but no instructions. With it, the agent understands the architecture the same way a new team member would after reading the team wiki.
The folder structure and CLAUDE.md give agents the map. Three design principles ensure the territory is worth mapping:
- **Atomic notes.** One idea per file. An axiom note doesn't also contain the principles derived from it; those are separate files with `derived_from` links. This means agents can read exactly the knowledge they need without parsing irrelevant context.
- **Prose over properties.** Relationships get explained in the document body, not just linked in frontmatter. `derived_from: [[axiom/teach-to-fish]]` is the machine-readable link; the body paragraph explains why this principle follows from that axiom. Agents get both: structured traversal AND natural language reasoning.
- **Type hierarchy as navigation.** When an agent needs to make a decision, it starts at the top: what axioms are relevant? What principles derive from them? What decisions have been made? What rules apply? This top-down traversal is only possible because the architecture encodes the hierarchy explicitly (see the sketch after this list).
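Here is what that top-down walk can look like in code. This is a sketch under the same assumptions as the earlier traversal example (it reuses that `notes` index) and additionally assumes each note carries a `domain` field; the domain value is illustrative.

```python
# `notes` is the {note_id: frontmatter} index from the earlier traversal sketch.
ORDER = ["axiom", "principle", "decision", "rule"]  # highest leverage first

def context_for(domain: str) -> dict:
    """Collect note ids per type for one domain, in hierarchy order."""
    return {
        note_type: [nid for nid, meta in notes.items()
                    if meta.get("type") == note_type and meta.get("domain") == domain]
        for note_type in ORDER
    }

# Assemble the context an agent would load before an engineering decision.
for note_type, ids in context_for("engineering").items():
    print(note_type.upper())
    for nid in ids:
        print(f"  {nid}  (derived_from: {notes[nid].get('derived_from', [])})")
```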
Start With Three Types
If this feels like a lot of upfront investment, it isn't — because you don't build the whole system before using it. You start with three types and grow from real work.
Step 1: Create the structure.
Three folders: axiom/, principle/, decision/. One file: CLAUDE.md that explains the hierarchy. That's your vault.
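If you prefer to script the scaffolding, a few lines will do. This is a convenience sketch only, and the CLAUDE.md stub is a placeholder to rewrite in your own words:

```python
from pathlib import Path

vault = Path("vault")
for folder in ("axiom", "principle", "decision"):
    (vault / folder).mkdir(parents=True, exist_ok=True)   # one folder per type

# Minimal agent interface stub; expand it as the schema grows.
(vault / "CLAUDE.md").write_text(
    "## Type Hierarchy\n"
    "1. **Axiom** - foundational truths we accept\n"
    "2. **Principle** - behaviors derived from axioms\n"
    "3. **Decision** - choices made with rationale\n\n"
    "## When Working in This Vault\n"
    "- Every document has a `type` field in frontmatter\n"
    "- One idea per file\n",
    encoding="utf-8",
)
```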
Step 2: Write your first axiom.
Pick one foundational belief about how you work. Something you've never questioned because it's so fundamental. "I build systems, not features." "Shipping beats perfection." "Autonomy over dependency."
Write it as a markdown file with typed frontmatter:
```yaml
---
type: axiom
title: Shipping Beats Perfection
status: active
domain: engineering
statement: A working system in production is worth more than a perfect system in development
---
```
Add a paragraph explaining why you hold this belief and what it implies. This is now agent-readable foundational context.
Step 3: Derive a principle.
From your axiom, what behavior follows? If "shipping beats perfection," then maybe: "Set time-boxes on technical decisions. When the timer runs out, ship what works."
```yaml
---
type: principle
title: Time-Box Technical Decisions
status: active
derived_from:
  - axiom/shipping-beats-perfection
statement: Set a time limit on technical decisions and ship when it expires
---
```
Notice the derived_from field linking back to the axiom. An agent can now traverse: axiom → principle → "what does this person believe about shipping, and how does that translate into behavior?"
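If you built the index from the earlier traversal sketch, that link is already queryable. The file names below are hypothetical, derived from the titles above:

```python
# `parents` and `children` come from the earlier traversal sketch.
print(parents("principle/time-box-technical-decisions"))
# expected (if the files exist under these names): ['axiom/shipping-beats-perfection']
print(children("axiom/shipping-beats-perfection"))
# expected: ['principle/time-box-technical-decisions']
```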
Step 4: Capture a decision.
The next time you make a meaningful choice — which technology to use, which feature to cut, whether to invest in tooling — capture it:
```yaml
---
type: decision
title: Use Markdown Over Database for Knowledge Storage
status: active
rationale: Markdown files are portable, version-controllable via Git, and readable by any AI agent without special tooling
---
```
Decisions are the knowledge agents need most and humans document least. The rationale field is the most valuable thing in your vault — it tells an agent not just what you chose, but why. And because everything is markdown tracked in Git, you get full version history — you can see how your decisions evolved, which beliefs were revised, and what the system looked like at any point in time.
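Because the rationale is structured data, surfacing every active decision with its reasoning takes a couple of lines. A sketch, reusing the `notes` index from the earlier traversal example:

```python
# List what was chosen and why: the context agents need most.
for nid, meta in notes.items():
    if meta.get("type") == "decision" and meta.get("status") == "active":
        print(f"{meta.get('title')}: {meta.get('rationale')}")
```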
Step 5: Point an agent at it.
Open Claude Code in your vault directory. Ask it to read your CLAUDE.md and summarize what it knows about your decision-making framework. The difference between "here are some notes" and "here is a structured knowledge system with axioms, principles, and decisions" is immediately apparent in the quality of the agent's responses.
You now have a functioning second brain for AI agents. It has three notes, one instruction file, and a clear architecture. Everything else — rules, personas, objectives, key results — grows organically as you capture real decisions and derive real principles from your actual work. The system compounds: every axiom you articulate makes every future agent interaction more grounded. Every decision you document means the agent doesn't need you to re-explain your reasoning.
Your AI Second Brain Starts Here
Here's the graduation test: can you show someone your structured knowledge system and explain how agents consume it differently than you do?
If you can describe the difference between a note organized for human retrieval and a note structured for agent traversal — if you can explain why an axiom is different from a principle and why that distinction matters to a machine reading your vault — you've built the foundation that everything else rests on.
This is your second brain for AI agents: not a vector database full of embeddings, not a fine-tuned model snapshot, but a living knowledge architecture that agents can traverse, reason with, and act on. It keeps growing as you work, and every note you add sharpens the context every future agent interaction starts from.
The next article in this series — AI-Augmented Backlog Refinement — shows what happens when you point an AI agent at this system during a real product workflow: turning backlog refinement from a meeting that produces vague tickets into a structured conversation that produces implementation-ready work. The knowledge architecture you built here makes that possible. Without typed context, the agent is guessing. With it, the agent is reasoning.
Related Reading
AI-Augmented Backlog Refinement: How I Run Every Session with a PO Agent
How to configure Claude Code as a PO agent with session types and run AI-augmented backlog refinement that challenges your assumptions, not just generates boilerplate.
Context Engineering Principles: The Mental Models I Use Every Day with AI Agents
Named principles for context engineering — less is more, knowledge gardening, structure over retrieval — discovered through daily agent usage, not academic theory.