Your First Claude Code Workflow: From Context Engineering to Daily Practice
The PO-specific guide to setting up Claude Code with CLAUDE.md as the interface to your knowledge vault — not a developer tutorial, but your first real workflow where context engineering becomes daily muscle memory.
You understand context engineering. You can name the principles — less is more, knowledge gardening, structure over retrieval. You know the discrimination test: when a task is repeatable, builds on prior work, and requires multi-step reasoning, context engineering beats prompting every time. The mindset shifted. The theory landed.
But when you sit down to work on Monday morning, you open Claude.ai, type a one-off prompt, and start from zero. Again.
The gap isn't knowledge. It's setup. You haven't configured the tool that turns context engineering from something you understand into something you do. Not because it's hard — because nobody showed you the PO-specific version. Every Claude Code tutorial targets developers. Every setup guide walks through code repositories, test commands, and linting rules. The Product Owner's first workflow — where your knowledge vault meets an agent runtime and you experience what persistent context actually does to output quality — doesn't exist yet.
This is that article. Fifteen minutes of setup. One real PO task. The compound return that makes the shift irreversible.
Claude Code for Product Owners: It's Not What You Think
The name is misleading. "Claude Code" sounds like a coding tool — GitHub Copilot's smarter cousin. Most articles about Claude Code for product people reinforce this by showing PMs writing code, building prototypes, or generating PRDs from scratch. That's one use case. It's not the use case.
Claude Code is an agent runtime. It has a file system it can read and write to. It has persistent memory through CLAUDE.md — a markdown file it reads automatically at the start of every session. It has extensible tools through slash commands and MCP servers. It has session management — the ability to maintain, compact, or clear context across interactions. Boris Cherny, who created Claude Code at Anthropic, describes his workflow as plan, execute, verify — a pattern that applies to any professional domain, not just software engineering. Plan mode lets you explore and design before the agent takes action. Model selection (Sonnet for speed, Opus for depth) lets you match capability to task complexity. These aren't developer features — they're workflow design choices that any PO can apply.
What makes this matter for Product Owners specifically? Apply the discrimination test from our guide to context engineering in practice. PO work is repeatable (refinement happens every sprint). It builds on prior work (decisions compound, constraints persist, standards evolve). It's multi-step (refine against Definition of Ready, check architectural constraints, validate scope against capacity). And it requires shared context (your team needs the same standards, the same decisions, the same constraints you're working from).
Four out of four criteria. Product ownership is context engineering territory.
The counter-argument lands immediately: "I can do this in ChatGPT or Claude.ai." You can. But without persistent context, every session starts from zero. You rewrite your product context, your standards, your constraints — the exact treadmill from the previous article. CLAUDE.md is the difference between a chat window and a configured agent. Same underlying model. Fundamentally different capability. Carl Vellotti's ccforpms.com course covers Claude Code for product managers broadly — modules on meeting notes, competitive analysis, PRDs. What no existing resource covers is the PO-specific first workflow where your structured knowledge vault connects to an agent runtime through CLAUDE.md. That's the gap this article fills.
Your First Claude Code CLAUDE.md Setup
The setup that matters takes fifteen minutes. Not because I'm oversimplifying — because the minimum viable configuration is genuinely small. Install Claude Code. Create a CLAUDE.md file. Run your first session. No MCP servers, no slash commands, no hooks, no sub-agents. Those are powerful features that you'll add later. Right now, the goal is one thing: experience what happens when an agent has persistent access to your knowledge.
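Concretely, the minimum setup is a handful of terminal commands. This is a sketch, not a canonical walkthrough: the npm package name follows Anthropic's install guide, and the directory path is illustrative — use whatever directory holds your vault.

```
# Install the Claude Code CLI (requires Node.js)
npm install -g @anthropic-ai/claude-code

# Create a working directory with a CLAUDE.md at its root
mkdir -p ~/product-workspace && cd ~/product-workspace
touch CLAUDE.md   # fill in the three categories described next

# Start a session — Claude Code reads CLAUDE.md automatically
claude
```

That's the whole footprint: one global install, one file, one command.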
Here's what goes in your CLAUDE.md. Not everything — focused context that applies to every session. Apply less is more from the start.
Category 1: What this product does (2-3 sentences). Your product's core purpose, current stage, and primary user. Not a PRD — a mental model the agent can use to ground every response.
Category 2: What standards apply (5-10 lines). Your Definition of Ready. Your team's refinement norms. The architectural constraints the agent needs to respect. If it doesn't change week-to-week, it belongs here.
Category 3: What the agent should reference (5-10 lines). Pointers to your vault notes — not the notes themselves. "For product axioms, see vault/axiom/. For active decisions, see vault/decision/. For architectural constraints, see vault/architecture/." This is progressive disclosure: the agent reads CLAUDE.md always, reads vault notes when the task requires them.
Fifteen to thirty lines total. That's your CLAUDE.md.
The architecture matters: CLAUDE.md doesn't contain your knowledge — it interfaces with it. Your knowledge lives in the vault you've built (or are building). CLAUDE.md is the bridge layer that tells the agent where to find what it needs and what standards to apply. Anthropic's official documentation covers CLAUDE.md mechanics — creation, hierarchy, syntax. What they don't cover is using CLAUDE.md as the access layer to a structured knowledge system. That's the context engineering application: a persistent, structured, maintained interface between your knowledge and your agent.
Here's a real example from my setup (sanitized):
```markdown
# Product Context
Completions.io is a learning platform for technical product owners
building AI-augmented product practices. Stage: pre-launch MVP.
Target: POs and senior PMs who've used AI tools but not AI systems.

# Standards
- Definition of Ready: user story has persona, JTBD, acceptance criteria,
  technical constraints, and scope boundary
- Refinement: challenge scope first, then acceptance criteria, then edge cases
- Writing: practitioner voice, concrete examples, no jargon without definition

# Knowledge References
- Product axioms: vault/axiom/ (core beliefs, non-negotiable)
- Active decisions: vault/decision/ (current, may have expiry dates)
- Principles: vault/principle/ (decision heuristics)
- Team rules: vault/rule/ (specific, enforceable standards)
```
That's twenty lines. Every Claude Code session I start reads this automatically. No prompt engineering. No rewriting. No forgetting a constraint.
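For orientation, the vault those pointers reference might be laid out like this. The directory names match the references above; the individual note filenames are illustrative, except decision/defer-mobile-q3.md, which appears in a later example.

```
vault/
├── axiom/          # core beliefs, non-negotiable
│   └── outcomes-over-output.md
├── decision/       # current decisions, may have expiry dates
│   └── defer-mobile-q3.md
├── principle/      # decision heuristics
├── rule/           # specific, enforceable standards
└── architecture/   # constraints the agent must respect
```

CLAUDE.md stays small precisely because the notes live here — the agent traverses this tree on demand instead of loading everything up front.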
Running Your First Claude Code Workflow
Setup without a first run is just installation. The shift happens when you use it.
Pick a task you did this week. Backlog refinement is ideal — it was the core example in our guide to AI-augmented backlog refinement, and it hits all four criteria on the discrimination test. Brief writing, scope review, or stakeholder prep also work. The requirement: it's something you'd normally do in Claude.ai with a long prompt.
Start Claude Code in your project directory. It reads CLAUDE.md automatically — you'll see a brief note confirming it loaded your project context. Now give it a short instruction. Not a 500-word prompt with all your context pasted in. Something like:
"Refine these three user stories against our Definition of Ready. Challenge scope first."
That's it. Twenty words. Watch what happens.
The agent pulls your Definition of Ready from the standards in CLAUDE.md. It references your product axioms from the vault when challenging scope. It checks recent decisions to ensure the stories don't contradict something you've already decided. It applies your team's refinement norms — scope first, then acceptance criteria, then edge cases — because those norms are in the persistent context, not in your prompt.
The output is different. Not just faster — qualitatively different. In one of my sessions, the agent flagged a story that assumed a feature we'd explicitly deferred in decision/defer-mobile-q3.md. With a chat prompt, that decision would have been buried in a paragraph I sometimes remembered to include. With CLAUDE.md pointing to my decisions directory, the agent found it automatically. The difference isn't "same result, less effort." It's a different kind of interaction — the agent reasons about your knowledge instead of just consuming your prompt.
Now run the same task in Claude.ai for comparison. Write the full prompt — paste your product context, your Definition of Ready, your constraints, the stories. Compare the outputs side by side. The chat version gives you reasonable, generic feedback. The CLAUDE.md version gives you grounded, context-specific challenges that reference your actual decisions and standards. That's the compound return. That's the moment the treadmill stops.
If the first run doesn't feel dramatically different, check your CLAUDE.md. Is the context focused or diluted? Does it point to the right vault notes? Is the knowledge current? You're diagnosing a context problem, not a prompt problem — and that instinct is itself evidence that the shift from prompts to context engineering is taking hold.
Claude Code Session Management for Product Owners
You've run your first workflow. Now you need to manage the sessions that follow.
Session management is knowledge gardening applied to the agent runtime. Just as your vault notes need maintenance to stay current, your Claude Code sessions need intentional management to stay effective. The patterns here are universal — they transfer to any agent tool, not just Claude Code. This is patterns-over-tools in action.
Three session patterns cover most PO work:
Deep work sessions. Refinement, brief writing, scope negotiation, architecture review. These are long-context tasks where the agent accumulates understanding as you work. Don't compact mid-task — you lose nuance the agent has built up during the conversation. Start fresh only when you've completed the task or shifted topics significantly.
Quick reference sessions. "What did we decide about mobile support?" "When did we last change the DoR?" These leverage your persistent context for fast lookups. Short, focused, often one or two exchanges. The agent reads your CLAUDE.md and vault notes to answer grounded questions — no prompt engineering needed.
Fresh sessions. New topic, no carryover needed, or you notice the agent producing lower-quality output (context pollution from too many prior topics). Use /clear to start clean. The cost isn't tokens — it's context quality. Stale session context degrades output the same way stale vault notes do.
The key judgment: when to continue vs when to start fresh. If the next task builds on the current context, continue. If it's a different domain or you've been in the session for hours across many topics, start fresh. When you notice degradation — the agent repeating itself, missing things it caught earlier, producing generic responses — that's context pollution. Clear and restart. Claude Code also runs inside VS Code or your terminal — choose whichever fits your working style. The token cost per session varies by model and context size, so use /cost to monitor spend and /compact to summarize long conversations before they bloat.
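Put together, a session's lifecycle with these commands might look like the following. The transcript is illustrative; the slash commands are Claude Code built-ins, and CLAUDE.md is picked up again when a cleared session continues.

```
> Refine these three user stories against our Definition of Ready.
  ... (long refinement exchange — deep work session) ...

> /compact      # summarize the conversation before the context bloats
> /cost         # check token spend for the session so far

# Switching from refinement to stakeholder prep — a different domain:
> /clear        # start clean; persistent context still applies
```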
These are the basics. A full taxonomy of session types — including batch sessions, parallel sessions, and session handoffs between team members — is a deeper topic that extends these foundations. For now, the three patterns cover your first weeks of daily use.
Close the Gap Today
You've built the knowledge system — a structured vault with typed notes the agent can traverse. You've understood the principles that make context engineering work. You've made the cognitive shift from prompt-first to context-first. And now you've done it — your first real workflow where the mindset became muscle memory.
Can you run a Claude Code workflow end-to-end with your CLAUDE.md context — and explain why the output was better than it would have been without that context? If the answer is yes, you've closed the gap between understanding context engineering and practicing it daily.
Here's what to do this week:
- Install Claude Code. Five minutes. Follow Anthropic's install guide.
- Write your CLAUDE.md. Fifteen minutes. Three categories: product context, standards, vault pointers. Fifteen to thirty lines. Not comprehensive — focused.
- Run one real task. Pick a PO task from this sprint. Not a tutorial exercise — something you'd actually do. Refinement, brief writing, scope review.
- Compare the output. Run the same task in Claude.ai without your context. Notice the difference. That's your compound return.
You've got your first workflow running. The agent knows your context — your product, your standards, your decisions. But knowing your context isn't the same as thinking like you. When the agent makes a recommendation, it draws on your knowledge. It doesn't yet draw on your judgment — the axioms, principles, and rules that guide how you make decisions. Next: how to codify your decision framework so the agent doesn't just know what you know — it decides the way you would.
Related Reading
Context Engineering in Practice: How the Shift from Prompting Actually Happens
The practitioner transformation from prompt-first to context-first AI work — with real before/after artifacts, four progressive stages, and a discrimination test for knowing when each approach is right.
The Axiom-Principle-Rule Framework: Codifying How You Think for AI Agents
A composable three-tier decision hierarchy — axiom, principle, rule — that codifies your judgment into structured context AI agents can traverse. Distilled from real PO practice, not academic theory.