A practical implementation guide for any organization, any size

Why This Playbook Exists

Most AI strategies fail at the foundation. Organizations rush to select tools before clarifying what they know, where decisions happen, or what they’re trying to improve. The result is expensive pilots that solve the wrong problems or automate broken processes.

This playbook reorders the work. It starts with the questions that determine whether AI can help you at all: what facts your organization depends on, where those facts live, and how you keep them accurate.

If those questions feel basic, that’s the point. AI requires reliable information to work with. Most organizations discover they don’t have the infrastructure to maintain authoritative facts. Building that infrastructure isn’t a delay in your AI strategy. It is the first step.

This guide walks you through the sequence: from getting your information straight to measuring what changed. Each step builds on the last. You can move quickly through steps where you’re already strong, or spend time where you need it.

You don’t need expensive consulting firms. You need common sense and this sequence.

Two Kinds of Facts

Every organization maintains two distinct fact sets:

Internal facts

Information your staff rely on to run operations: schedules, programs, policies, workflows, allocations.

External facts

Information the public and AI systems rely on to find, interpret, and understand you: who you are, where you are, what you offer, when things happen, what’s authoritative.

Internal facts support your operations.

External facts determine whether you exist in the machine-mediated environment.

Most organizations maintain internal facts reasonably well because staff feel the pain when they’re wrong. Almost none maintain external authoritative metadata because no one feels the pain until discoverability collapses.

This playbook exposes the difference as you move through the steps.

How to Use This Playbook

Read the steps in order. Each step states what to do in plain language.

Expand where you need detail. Each step includes why it matters, common failures, and how to know you’re done.

Don’t skip steps. The sequence reflects real dependencies. Jumping ahead creates problems you’ll fix later at higher cost.

Expect Step 1 to take time. Most organizations spend weeks or months getting their facts straight. That’s normal. It’s infrastructure work, and it’s part of your AI strategy.

The Steps

1. Get your facts straight

What to do:

Create a simple table or spreadsheet. List what your organization knows, needs to know, and wants to know: who, what, where, when. Include internal facts your staff rely on and external facts the public and AI systems rely on to find and understand you. Identify what’s missing and how to gather it.
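
If it helps to see the shape of such an inventory, here is a minimal sketch in Python. The field names (kind, fact, source, status) and the example facts are hypothetical, not a required format; any spreadsheet with equivalent columns works.

```python
import csv

# Hypothetical fact inventory: each row records a fact, whether it is
# internal or external, where it currently lives, and its status.
facts = [
    {"kind": "internal", "fact": "Weekly program schedule",
     "source": "Shared calendar", "status": "have"},
    {"kind": "external", "fact": "Public opening hours",
     "source": "Website footer (unstructured)", "status": "have"},
    {"kind": "external", "fact": "Machine-readable location data",
     "source": "none", "status": "missing"},
    {"kind": "internal", "fact": "Per-program attendance",
     "source": "none", "status": "want"},
]

# Export so the inventory can live in an ordinary spreadsheet.
with open("fact_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["kind", "fact", "source", "status"])
    writer.writeheader()
    writer.writerows(facts)

# A quick gap report: anything missing or merely wanted needs a gathering plan.
gaps = [r["fact"] for r in facts if r["status"] != "have"]
print(gaps)  # → ['Machine-readable location data', 'Per-program attendance']
```

The point of the export is social, not technical: the inventory lives where non-technical staff can read and correct it.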

Why this matters:

AI operates on information. If you don’t know what information you have, where it lives, or how accurate it is, AI has nothing reliable to work with. This applies to internal operations and to external discoverability.

Most organizations discover they maintain internal facts but have no reliable system for external authoritative metadata. That gap determines whether AI can perceive them at all.

Common failure:

Assuming internal knowledge (“we know our schedule,” “our team understands our programs”) means external machine-readable truth exists. Often it doesn’t. AI cannot infer what you have not declared.

What this reveals:

Missing information exposes your infrastructure gap. Your first AI task is building the systems that capture and maintain authoritative facts — both internal and external.

How to know you’re done:

You have a written inventory of:

  • Internal operational facts
  • External public-facing facts
  • Facts you need but don’t have
  • Facts you want for future decisions
  • Where each fact actually lives

2. Put facts in one authoritative place

What to do:

Choose one system where the most accurate version of each fact lives. This system has two surfaces: an internal surface your staff relies on, and an external surface the public and AI systems rely on. Everyone uses it. Everyone updates it. No private shadow lists.

In practice, this means your internal operational system (scheduling, program management, CRM) becomes the authoritative source, and it generates the structured external data AI systems can consume.
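
As a sketch of what “structured external data” can look like: the Python example below derives a schema.org JSON-LD description — a format search and AI systems commonly consume — from a hypothetical internal record. The organization name, address, and field names are illustrative; the design point is that the external surface is generated from the internal record, never maintained separately.

```python
import json

# Hypothetical internal authoritative record (the internal surface).
internal_record = {
    "name": "Riverside Community Center",
    "street": "123 Main St",
    "city": "Springfield",
    "opening_hours": "Mo-Fr 09:00-17:00",
    "url": "https://example.org",
}

def to_jsonld(record):
    """Render the external surface as schema.org JSON-LD, derived
    from the internal record so the two surfaces cannot diverge."""
    return {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": record["name"],
        "url": record["url"],
        "address": {
            "@type": "PostalAddress",
            "streetAddress": record["street"],
            "addressLocality": record["city"],
        },
        "openingHours": record["opening_hours"],
    }

print(json.dumps(to_jsonld(internal_record), indent=2))
```

When the internal record changes, the external surface is regenerated, not edited by hand. That one rule is what makes the external surface authoritative.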

Why this matters:

AI needs a single source of truth. If internal and external versions diverge — different hours, locations, offerings — AI will pick the wrong one or none at all.

Most organizations have an authoritative internal source. Almost none have an authoritative external one. This is why they disappear from AI-mediated discovery.

Common failure:

Declaring a system “authoritative” while staff keep their workarounds. Assuming a website is an authoritative external source. If public-facing facts aren’t structured, current, and machine-readable, AI can’t use them.

What this is:

Infrastructure. Governance and workflow, not just software.

How to know you’re done:

  • Staff check the internal authoritative system first
  • External facts update when internal facts change
  • The public — and AI systems — encounter the same truth your staff rely on

3. Map where decisions actually happen

What to do:

Look at your daily work. Identify where people rely on facts to make decisions: approvals, scheduling, allocation, service delivery, communications.

Why this matters:

AI’s value shows up in workflows, not in data stores. If you don’t know where decisions happen and what information they depend on, you can’t target where AI might help.

Common failure:

Mapping official processes instead of actual workflows. AI deployed into fictional workflows produces fictional value.

How to know you’re done:

You have a list of recurring decision points that depend on reliable information and produce outcomes you can measure.

4. Decide what you want to improve

What to do:

Be concrete. Faster responses? Fewer errors? Shorter turnaround? Less time on routine decisions? Name what success looks like.

Why this matters:

Without clear goals, tools shape your work instead of supporting it.

How to know you’re done:

You can complete this sentence:

“We’ll know this worked if [specific outcome] improves by [amount] within [timeframe].”

5. Pick one workflow to improve

What to do:

Choose one workflow tied to your goals. Pick something you understand well that happens regularly and matters.

Why this matters:

Trying to do everything at once guarantees failure everywhere at once.

How to know you’re done:

Everyone involved can describe the workflow, its pain points, and why improving it is worth the effort.

6. Fix the workflow first

What to do:

Clean the workflow before adding AI. Remove unnecessary steps. Clarify roles. Fix broken handoffs.

Why this matters:

AI automates the workflow you give it. If the workflow is broken, AI makes you fail faster.

How to know you’re done:

The workflow makes sense to someone outside your team. It works reliably without AI — just slower or more manual than you want.

7. Mark where human judgment stays

What to do:

Decide which decisions AI can automate, support, or stay out of entirely.

Why this matters:

Not every decision should be automated. Draw boundaries before deployment, not after harm.

How to know you’re done:

You have a clear boundary: AI automates X, flags Y, never touches Z.
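
The boundary works best written down as explicit policy rather than tribal knowledge. A minimal sketch, with hypothetical decision types; the safe default is that anything unlisted stays with a person.

```python
# Hypothetical boundary policy: each decision type is routed to
# "automate", "flag" (AI drafts, a person approves), or "human_only".
BOUNDARIES = {
    "schedule_reminder": "automate",
    "refund_request": "flag",
    "eligibility_denial": "human_only",
}

def route(decision_type):
    """Return how a decision may be handled. Unlisted decision
    types default to human_only — new cases are never automated
    by accident."""
    return BOUNDARIES.get(decision_type, "human_only")

print(route("schedule_reminder"))   # → automate
print(route("new_unlisted_case"))   # → human_only
```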

8. Research and select AI tools

What to do:

Now evaluate tools that fit your workflow, your goals, your boundaries, and your authoritative facts.

Why this matters:

Tools chosen too early distort your process. Tools chosen now support it.

How to know you’re done:

You’ve selected a tool that fits your workflow and respects your constraints. You know how you will measure success.

9. Start small: pilot the workflow

What to do:

Deploy AI in the single workflow you cleaned, using the boundaries you set. Measure the outcome you named.

Why this matters:

Small pilots teach you what works. Large untested deployments teach you what breaks.

How to know you’re done:

You have evidence about improvement, trust, edge cases, and maintenance load.

10. Measure what changed

What to do:

Compare pilot results to your goal. Did it work? By how much?

Why this matters:

Evidence tells you whether to expand, adjust, or stop.

How to know you’re done:

You can answer: Did this improve what we said we’d improve? Should we continue? What did we learn?
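
The comparison itself can be this simple. A sketch with hypothetical numbers, measured against a step-4 goal of “response time improves by 20% within the pilot window”:

```python
# Hypothetical pilot measurement against the goal named in step 4.
baseline_hours = 48.0   # average response time before the pilot
pilot_hours = 36.0      # average response time during the pilot
target_improvement = 0.20

improvement = (baseline_hours - pilot_hours) / baseline_hours
met_goal = improvement >= target_improvement
print(f"Improvement: {improvement:.0%}, goal met: {met_goal}")
# prints "Improvement: 25%, goal met: True"
```

What matters is that baseline and target were recorded before the pilot started; a comparison invented afterward measures nothing.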

11. Keep your facts clean over time

What to do:

Design your system so authoritative facts stay accurate: clear update pathways, change-propagation rules, and validation checks that prevent drift.

Then assign responsibility for maintaining both internal and external facts.
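
One form a validation check can take is a field-by-field comparison between the internal source of truth and the published external copy. A minimal sketch with hypothetical records; in practice the published side would be fetched from wherever your external facts actually live.

```python
# Hypothetical drift check: compare the authoritative internal record
# against what is actually published externally, field by field.
internal = {"hours": "Mo-Fr 09:00-17:00", "phone": "+1-555-0100"}
published = {"hours": "Mo-Fr 09:00-16:00", "phone": "+1-555-0100"}

def find_drift(internal, published):
    """Return the fields whose external copy no longer matches
    the internal source of truth."""
    return sorted(
        key for key in internal
        if published.get(key) != internal[key]
    )

drifted = find_drift(internal, published)
print(drifted)  # → ['hours']
```

Run on a schedule, a check like this turns silent decay into a visible task for whoever owns maintenance.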

Why this matters:

AI is only as good as the facts it uses. If your facts decay, your AI’s reliability decays with them.

How to know you’re done:

Maintenance is owned, scheduled, and trusted. Your authoritative systems stay reliable enough for real decisions — inside and outside your organization.

What You’ve Built

If you’ve worked through these steps, you haven’t just implemented AI.

You’ve built knowledge infrastructure.

You know what your organization knows.

You have internal and external authoritative systems.

You’ve cleaned workflows that matter.

You’ve deployed tools that improve measurable outcomes.

And you have governance to keep it working.

Most organizations skip the foundation and wonder why AI keeps failing.

You didn’t. That’s the difference.

Common Sense AI Playbook — Version 1.1

This version integrates external metadata infrastructure into the core steps.

Future versions will expand examples and implementation patterns.