Completions

How-to

How to write a master record your AI agents can actually read (and the operations team can actually maintain)

Designate one canonical system, define the schema, and write the operational discipline that keeps it accurate.

Mind-blow

Most operators have facts about their locations scattered across 4-7 systems, none canonical. The fix is designating ONE existing system as the master record — not buying a new tool.

Implementation time
240-480 min
Anchor keyword
master record multi-location

What you need

  • An inventory of every existing system that holds location facts (franchise management platform (FMP), CMS, GBP, payment processor, email platform, operations spreadsheet)
  • Operations team buy-in for ongoing event-driven master record updates
  • A franchise management platform (FranConnect/FranchiseSoft/Marketing 360) OR a structured spreadsheet for smaller operators
  • API or queryable export from the chosen canonical system

Most multi-location operators do not have a master record. They have a master CMS, a franchise management platform, a Google Sheet that the operations team updates, a Yext account that the marketing team updates, a payment processor that has different addresses for some locations, and a Salesforce instance that holds different services for those same locations. The "facts about our locations" are scattered across 4-7 systems, none of them canonical, none of them queryable by an AI agent without a custom integration per system.

This is the data-architecture problem every AI marketing pipeline runs into within the first 30 days of production. Without a canonical master record, AI agents reference stale facts, drift toward generic-internet-knowledge filler, and produce outputs that the operations team has to chase across surfaces to correct. The fix is not "buy a master data management tool." The fix is designating ONE existing system as the canonical master record, defining the schema agents will read, and writing the operational discipline that keeps it accurate.

What a master record actually is

The master record is the canonical source of truth for facts about each location. It is layer 1 of the four-layer context architecture every AI marketing swarm runs on (the other three layers: local context, brand standards, event stream). Every AI agent reads from it; only humans write to it.

Properties:

  • Owner — the franchisor's operations team (NOT the marketing team, NOT the agents).
  • Storage — an existing system the operations team already uses (FranConnect, FranchiseSoft, Marketing 360 admin, Salesforce, or a structured spreadsheet for smaller operators). The master record lives where operations already lives.
  • Refresh cadence — event-driven; the operations team updates it when facts change.
  • Read access — all agents.
  • Write access — operations team only. Agents can PROPOSE changes (e.g., the citation agent flagging a NAP discrepancy), but humans approve.

The master record is not new infrastructure. It is naming WHICH of the existing 4-7 systems is canonical and rewiring the others to follow it.

Required schema

The master record's schema covers everything an AI agent needs to draft on-brand, location-specific content without making up facts. Required fields and why agents need them:

  • location_id — primary key for every other layer's join.
  • address (full, parsed into street + city + state + zip) — NAP-canonical; jurisdiction inference; local-context join key.
  • phone — NAP-canonical.
  • email — NAP-canonical.
  • hours_by_day (Mon-Sun + holidays) — GBP attribute; review-response context ("we were closed when you tried to call").
  • services_offered — page generator content; GBP services attribute.
  • payment_methods_accepted — GBP attribute.
  • manager (name, role, optional bio) — review-response personalization.
  • accessibility_features — GBP attribute; ADA-compliant content.
  • languages_spoken — GBP attribute; multi-language content gating.
  • governance_settings (per-location franchisee-control level) — editorial governance routing.
  • category — compliance overlay loading.
  • jurisdiction (state + local where local rules apply) — compliance overlay loading; tax-applicable-claims gating.
  • opening_date — local-content authoring ("celebrating 5 years at this location").
  • last_updated_at — stale-data flagging; agents emit telemetry warnings on stale reads.
  • last_updated_by — audit trail for who changed what.

Optional but high-value fields (depending on category):

  • service_specialties — some locations specialize in a subset of brand services.
  • equipment_inventory — relevant for healthcare, gym, auto.
  • certifications_held_per_location — relevant for regulated verticals.
  • staff_count_or_ratio — relevant for childcare, healthcare.
  • vendor_partnerships_per_location — some franchisees have local vendor relationships the brand wants to surface.

Skip fields that no agent will read. Schema bloat increases maintenance cost and produces no agent benefit.
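The required fields above can be sketched as a typed record. The field names come from the list; the Python types and nested shapes are assumptions, since the article names fields but not types:

```python
from dataclasses import dataclass
from datetime import date, datetime

@dataclass
class MasterRecord:
    """Canonical facts for one location. Humans write; agents read."""
    location_id: str                  # primary key for every other layer's join
    address: dict                     # parsed: {"street", "city", "state", "zip"}; NAP-canonical
    phone: str                        # NAP-canonical
    email: str                        # NAP-canonical
    hours_by_day: dict                # e.g. {"mon": "09:00-17:00", ..., "holidays": {...}}
    services_offered: list[str]
    payment_methods_accepted: list[str]
    manager: dict                     # {"name", "role", optional "bio"}
    accessibility_features: list[str]
    languages_spoken: list[str]
    governance_settings: dict         # per-location franchisee-control level
    category: str
    jurisdiction: str                 # state + local where local rules apply
    opening_date: date
    last_updated_at: datetime         # stale-data flagging
    last_updated_by: str              # audit trail
```

Whatever system holds the record, keeping the field names identical across the FMP, the cache, and agent prompts makes every downstream join trivial.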

The 6-step authoring process

Step 1: Inventory existing data sources (2-4 hours)

List every system that currently holds location facts. Typical inventory at a 100-location franchisor:

  • Franchise management platform (FranConnect/FranchiseSoft) — usually has address, phone, manager, opening_date, governance settings.
  • Marketing CMS (WordPress with location pages) — has services, hours, category.
  • GBP listings — has hours, photos, services, accessibility.
  • Payment processor — has address (sometimes different from the FMP).
  • Email platform (Constant Contact/Mailchimp) — has location-specific contact info.
  • Operations spreadsheet — the source of truth the operations team actually updates daily.

Map who owns each system + how often each is updated + which fields it claims canonical authority over.
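The inventory map can be a small structured artifact. The systems, owners, and field claims below are illustrative, not a prescription; the useful output is the list of fields claimed by more than one system, which is the reconciliation work list for Step 3:

```python
from collections import Counter

# Hypothetical inventory for a 100-location franchisor.
inventory = {
    "franconnect":       {"owner": "operations", "cadence": "daily",
                          "fields": ["address", "phone", "manager", "opening_date", "governance_settings"]},
    "wordpress_cms":     {"owner": "marketing",  "cadence": "weekly",
                          "fields": ["services_offered", "hours_by_day", "category"]},
    "gbp":               {"owner": "marketing",  "cadence": "ad hoc",
                          "fields": ["hours_by_day", "services_offered", "accessibility_features"]},
    "payment_processor": {"owner": "finance",    "cadence": "rare",
                          "fields": ["address"]},
    "ops_spreadsheet":   {"owner": "operations", "cadence": "daily",
                          "fields": ["manager", "phone"]},
}

# Any field claimed by two or more systems needs reconciliation.
claims = Counter(f for system in inventory.values() for f in system["fields"])
conflicts = sorted(f for f, n in claims.items() if n > 1)
```

With this toy inventory, `conflicts` surfaces address, hours_by_day, manager, phone, and services_offered — exactly the fields where two systems disagree today.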

Step 2: Pick the canonical system (2-3 hours, with operations team)

Designate ONE system as the master record. Selection criteria: the operations team already updates it (lower migration friction); it can hold the full required schema (no field gaps); it exposes an API or queryable export (agents can read from it); it has audit trail (last_updated_at + last_updated_by).

Most often: the franchise management platform (FMP) wins. Sometimes: a structured spreadsheet (for smaller operators) until they outgrow it. Rarely: a custom database (only when no existing system fits, which is unusual).

The decision is operational, not technical. The operations team has to be willing to update this system as facts change. If they aren't, the master record will go stale and every agent's output will drift.

Step 3: Define the migration (4-8 hours, operations + marketing alignment)

For every field in the required schema, decide: where does this currently live, where is it canonical going forward, and what is the migration path? Example mappings:

  • address — currently in FMP + GBP + payment processor (which may differ); canonical going forward: FMP. Migration: audit and reconcile the differences; FMP becomes the source of truth; GBP and the payment processor sync from FMP.
  • hours_by_day — currently in GBP + WordPress; canonical going forward: FMP. Migration: add hours fields to the FMP if not present, migrate, then GBP and WordPress sync from FMP.
  • manager — currently in the operations spreadsheet; canonical going forward: FMP. Migration: add a manager field to the FMP and migrate from the spreadsheet.

The migration table is the operational artifact the operations team executes against.
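One way to keep that artifact machine-checkable is a row-per-field table; the rows below mirror the example mappings above and the shape is an assumption:

```python
# Hypothetical migration table: one row per schema field.
migration_table = [
    {"field": "address",      "currently_in": ["fmp", "gbp", "payment_processor"],
     "canonical": "fmp", "migration": "audit + reconcile; GBP and processor sync from FMP"},
    {"field": "hours_by_day", "currently_in": ["gbp", "wordpress"],
     "canonical": "fmp", "migration": "add hours fields to FMP; GBP and WordPress sync from FMP"},
    {"field": "manager",      "currently_in": ["ops_spreadsheet"],
     "canonical": "fmp", "migration": "add manager field to FMP; migrate from spreadsheet"},
]

# Sanity check before execution: every row names exactly one canonical system.
assert all(isinstance(row["canonical"], str) for row in migration_table)
```

Checking the table into version control gives the operations team a diffable record of what moved where and when.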

Step 4: Wire the agent read interface (4-8 hours)

Agents need to read the master record without each agent having a different integration. Two patterns:

Pattern A — Direct API reads. Each agent calls the FMP's API per request. Simple but slow at high volume; easy to swap FMP later.

Pattern B — Read-through cache. A background job reads the FMP every N minutes and writes to a fast cache (Redis, Postgres, or even a cached JSON blob per location). Agents read from the cache. Faster; one source for all agents; cache invalidation requires a master_record_updated event from an FMP webhook.

Pattern B is the right answer at any meaningful scale (50+ locations × 5 agents × 10 reads/agent/day = 2,500 reads/day directly hitting FMP API is borderline; 500 locations × 5 agents × 10 reads = 25,000 reads/day exceeds most FMP rate limits).

The master_record_updated event (per Piece 1 round 9 event taxonomy) fires on any write; cache invalidates the affected location's record; downstream agents subscribe to the event for any regeneration logic.

Step 5: Define the write protocol (2-4 hours)

The write protocol is what keeps the master record canonical. Three rules:

  1. Operations team updates the master record FIRST. Any other system update (GBP, WordPress, payment processor) follows from the master record, not the other way around.
  2. Agents PROPOSE writes; humans APPROVE. When the citation agent flags a NAP discrepancy on a directory, it does NOT silently update the master record. It emits a nap_change_proposed event; an operations reviewer evaluates the proposal, approves it, and writes the change to the master record.
  3. Audit trail captures everything. Every write includes last_updated_by (which human or which approved agent) + last_updated_at. Every change is reversible.
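The three rules can be sketched in a few lines. The event name nap_change_proposed comes from the text; the in-memory stores and function signatures are assumptions:

```python
from datetime import datetime, timezone

proposals = []   # pending nap_change_proposed events (rule 2)
audit_log = []   # every approved write, reversible via history (rule 3)

def propose_change(agent: str, location_id: str, field: str, new_value: str) -> None:
    """An agent flags a discrepancy; nothing in the master record changes yet."""
    proposals.append({"event": "nap_change_proposed", "agent": agent,
                      "location_id": location_id, "field": field,
                      "new_value": new_value})

def approve(proposal: dict, reviewer: str, master: dict) -> None:
    """Operations reviewer approves; only then does the master record change."""
    master[proposal["field"]] = proposal["new_value"]
    master["last_updated_by"] = f"{reviewer} (approved {proposal['agent']})"
    master["last_updated_at"] = datetime.now(timezone.utc).isoformat()
    audit_log.append({**proposal, "reviewer": reviewer})
    proposals.remove(proposal)
```

The key property: there is no code path from an agent to the master record that does not pass through `approve`.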

Step 6: Deploy the staleness flag (1-2 hours, then ongoing)

Every agent that reads from the master record checks last_updated_at. Stale data (per-field staleness threshold; e.g., hours older than 90 days probably wrong; phone older than 365 days probably wrong) triggers a telemetry warning. The operations team gets a weekly digest of stale fields per location.
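A per-field staleness check might look like this. The 90-day and 365-day thresholds come from the text; the assumption that per-field update timestamps exist (rather than a single record-level last_updated_at) is mine:

```python
from datetime import datetime, timedelta, timezone

# Per-field staleness thresholds, in days.
STALENESS_DAYS = {"hours_by_day": 90, "phone": 365}

def stale_fields(record, now=None):
    """Return the fields whose last update exceeds their staleness threshold.

    record["field_updated_at"] maps field name -> datetime of last write.
    """
    now = now or datetime.now(timezone.utc)
    return [field for field, max_days in STALENESS_DAYS.items()
            if now - record["field_updated_at"][field] > timedelta(days=max_days)]
```

Agents run this on every read and emit a telemetry warning per returned field; the weekly digest is just this function mapped over all locations.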

This is the discipline that prevents the master record from drifting into "we have one but it's mostly wrong."

Migration patterns by current state

State A: "We have FranConnect (or similar) but our data is incomplete"

Most common state. The fix: audit every location for missing fields; operations team backfills over 2-4 weeks; lock the schema; deploy reads.

State B: "We have spreadsheets, no FMP"

Smaller operators. The fix: structure the spreadsheet against the required schema; deploy reads from the spreadsheet via Google Sheets API or Airtable; plan FMP migration when location count exceeds ~50.

State C: "We have a custom database the engineering team built"

Rare; usually well-instrumented already. The fix: confirm the schema covers everything required; deploy agent reads; document the API for agent integrations.

State D: "Different teams own different fields in different systems"

Worst state. The fix is organizational, not technical: pick one team to own the master record; reassign field ownership; migrate data; THEN deploy. Trying to deploy AI agents on top of fragmented data ownership produces drift faster than humans can chase it.

Validation

Three signals to monitor weekly for the first 60 days:

  1. Stale-field rate per location. Should be <5% of locations × fields. Higher means the operations team's update discipline isn't holding; address with operations leadership.
  2. Agent staleness-warning rate. How often do agents emit "stale data" telemetry? Should be <10% of reads. Higher means the staleness threshold needs adjustment OR the operations team is behind.
  3. Manual override rate on agent-proposed writes. When the citation agent proposes a NAP change, how often does the human reviewer override? Should be <20% — higher means the agent's proposal logic is mis-tuned.
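The first and third signals reduce to simple ratios; the thresholds come from the list above, and the input shapes here are assumptions:

```python
def stale_field_rate(stale_counts, fields_per_location):
    """stale_counts maps location_id -> number of stale fields this week."""
    total_cells = len(stale_counts) * fields_per_location
    return sum(stale_counts.values()) / total_cells

def override_rate(proposals_reviewed, proposals_overridden):
    """Share of agent-proposed writes the human reviewer rejected."""
    return proposals_overridden / proposals_reviewed
```

For example, 100 locations with 16 tracked fields and 40 stale cells gives a 2.5% stale-field rate, comfortably under the 5% target; 5 overrides out of 50 reviewed proposals gives a 10% override rate, under the 20% threshold.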

What the master record does NOT do

  • Does not replace the franchisor's existing systems. The FMP, GBP, WordPress, payment processor, etc. all stay. The master record is whichever one is canonical; the others sync from it.
  • Does not auto-update from agent output. Agents propose; humans approve. Auto-write would compromise the canonical claim and produce drift.
  • Does not hold per-location editorial content. Master record = facts. Editorial content (page copy, GBP posts, review responses) lives wherever those surfaces live; the master record provides the FACTS the editorial content references.
  • Does not solve organizational data ownership. If multiple teams claim different fields, designating a master record exposes the conflict. The conflict must be resolved organizationally before the technical setup matters.

Cost expectations

For a 100-location franchisor: initial migration (audit + reconcile + backfill) is 40-80 hours of operations team time over 2-4 weeks; ongoing maintenance is 2-4 hours/week of operations team time (event-driven updates as facts change); technical wiring (read-through cache + agent integrations) is 8-16 hours of engineering time, one-time. Total first-year: ~150-300 hours of operations + ~16 hours engineering = $15-30k loaded cost.

Compare to the cost of NOT having a master record: AI agents producing drifted content + operations team chasing corrections across surfaces + brand drift compounding + reputational damage. The master record is one of the highest-ROI infrastructure investments in the entire AI marketing stack.

What this gets you

A canonical source of truth that every AI agent reads from. A migration pattern that builds on existing systems instead of replacing them. A write protocol that keeps humans in the loop on changes that affect every downstream surface. A staleness flag that surfaces drift before it compounds into agent output errors. An audit trail that answers "why does our Cherry Creek location's hours show as 9-5 when the storefront is open until 7?" with a clear last-updated-by + last-updated-at.

The master record is layer 1 of the four-layer context architecture; the other three layers (local context, brand standards, event stream) all reference it. Every other spoke in the franchise local SEO orchestration architecture depends on the master record being canonical and accurate. Without it, the brand-voice gate scores outputs against the wrong facts; the compliance overlay loads the wrong jurisdiction; the editorial governance routes to the wrong reviewer; the page generator drifts toward generic-internet-knowledge filler.

Or have us deploy this for you

We'll deploy the Per-Location Page Generator in 3 weeks for $6,500–$9,500 — with a 30-day operating tail and full handoff. You own every artifact: the prompts, the configs, the audit log, the wrapper code.