Stretch Goal · 25 min

Add an MCP Prompt Template

Make a "draft SAR narrative" workflow available as a one-click slash command.

⏱ 25 minutes 💬 Reusable prompts 🔗 Built on the main tutorial
Prerequisite

You've completed the main MCP build tutorial. Doing 04a (resources) first is recommended — this builds on it.

Why prompts deserve to be a protocol primitive

"Prompt" in MCP doesn't mean what it usually means. It's not just a string. It's a parameterized, server-published, slash-command-discoverable template that bundles instructions, examples, and references into one reusable unit the user can invoke.

Why this is a big deal for compliance:

  • Prompts become artifacts. They live in the server, are version-controlled, and can be reviewed like code.
  • Policy lives outside the code. Your "how to draft a SAR" guidance is a prompt definition, not a string buried in an app.
  • Reuse and consistency. Every analyst gets the same prompt, with the same examples, every time.
  • Audit trail. Logs capture "prompt name + version + arguments" rather than "some string the user typed."
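The audit-trail point can be made concrete. Here is a minimal sketch, not part of the tutorial's server.py: the log_invocation decorator and AUDIT_LOG store are hypothetical names, and a real deployment would write to an append-only store rather than a list.

```python
import functools
import json
from datetime import datetime, timezone

# Hypothetical in-memory audit log; in practice, an append-only store.
AUDIT_LOG: list[dict] = []

def log_invocation(version: str):
    """Record prompt name, version, and arguments each time a prompt renders."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            AUDIT_LOG.append({
                "prompt": fn.__name__,
                "version": version,
                "arguments": kwargs,
                "at": datetime.now(timezone.utc).isoformat(),
            })
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@log_invocation(version="1.0.0")
def draft_sar_narrative(*, case_id: str, suspicious_activity_type: str) -> str:
    # Stand-in rendering body; the real prompt is defined later in server.py.
    return f"Case: {case_id}\nActivity: {suspicious_activity_type}"

draft_sar_narrative(case_id="CASE-2026-00012", suspicious_activity_type="structuring")
print(json.dumps(AUDIT_LOG[0]["arguments"]))
```

The log entry captures exactly the "prompt name + version + arguments" triple, which is what you would hand to a regulator.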
Mental model

If a tool is a function call, and a resource is a file, then a prompt is a command-line script with named arguments. The server publishes it; the host shows it as a slash command; the user invokes it; the model receives the rendered prompt as if the user had typed the whole thing.
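Under the hood, this round trip is two JSON-RPC methods defined by the MCP spec: prompts/list for discovery and prompts/get for rendering. Approximate wire shapes, sketched as Python dicts with fields abridged:

```python
# Discovery: the host calls prompts/list; the server advertises each prompt's
# name, description, and argument schema (what the slash-command picker renders).
list_result = {
    "prompts": [
        {
            "name": "draft_sar_narrative",
            "description": "Draft a Suspicious Activity Report (SAR) narrative for a case.",
            "arguments": [
                {"name": "case_id", "required": True},
                {"name": "suspicious_activity_type", "required": True},
                {"name": "time_window", "required": False},
            ],
        }
    ]
}

# Invocation: the host calls prompts/get with concrete arguments; the server
# returns fully rendered messages that become the model's user turn.
get_params = {
    "name": "draft_sar_narrative",
    "arguments": {"case_id": "CASE-2026-00012", "suspicious_activity_type": "structuring"},
}
get_result = {
    "messages": [
        {"role": "user", "content": {"type": "text", "text": "You are a senior compliance analyst..."}}
    ]
}
```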

1 · Add the prompt definition · ~10 min

Open your server.py and add this block after any resources but before the if __name__ == "__main__": block:

server.py (additions)
from mcp.server.fastmcp.prompts import base

# ---------- Prompts: reusable templates ----------

@mcp.prompt()
def draft_sar_narrative(
    case_id: str,
    suspicious_activity_type: str,
    time_window: str = "the last 30 days",
) -> list[base.Message]:
    """Draft a Suspicious Activity Report (SAR) narrative for a case.

    The narrative will follow the firm's standard SAR template (WHO/WHAT/WHERE/WHEN/WHY),
    cite specific transactions from the case file, and reference the policy URIs used.
    """
    return [
        base.UserMessage(
            f"You are a senior compliance analyst drafting a SAR narrative. "
            f"Use ONLY information from the case file and the policy documents you have "
            f"been provided. Cite specific transactions by date and amount. "
            f"Cite policy URIs for any procedural claim. Refuse to speculate.\n\n"
            f"Case: {case_id}\n"
            f"Suspicious activity type: {suspicious_activity_type}\n"
            f"Time window: {time_window}\n\n"
            f"Step 1: Fetch the case file at compliance://cases/{case_id}.\n"
            f"Step 2: Fetch the SAR template at compliance://policies/sar-narrative-template.\n"
            f"Step 3: Draft a 5-15 sentence narrative following the template structure.\n"
            f"Step 4: List the specific URIs you cited.\n\n"
            f"Do not file the SAR. Produce a draft for human review only."
        ),
    ]


@mcp.prompt()
def explain_alert(
    alert_id: str,
    counterparty_name: str,
    counterparty_country: str,
    amount_usd: float,
) -> str:
    """Explain a transaction alert in plain language for a non-technical investigator.

    Returns a single rendered string the model uses as its user-turn input.
    """
    return (
        f"I'm a new investigator. Please explain alert {alert_id} in plain language. "
        f"The alert involves a {amount_usd:,.2f} USD transaction with counterparty "
        f"'{counterparty_name}' in {counterparty_country}.\n\n"
        f"Walk me through:\n"
        f"  1. What screening tools would you run on this?\n"
        f"  2. What the result of each tool tells you in plain English.\n"
        f"  3. What the recommended action is, and why.\n"
        f"  4. What I should ask the customer if I need more information.\n\n"
        f"Use the compliance-toolkit tools to run the screening for me as you explain."
    )
Two return shapes

draft_sar_narrative returns a list of base.UserMessage objects — useful when you want to construct multi-turn conversations or include images. explain_alert returns a plain string, which FastMCP wraps into a single user message. Use whichever fits.
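Because both prompt functions are ordinary Python underneath the decorator, the rendering logic is unit-testable without a running server. A minimal sketch, with the @mcp.prompt() decorator stripped and the body abridged so the snippet runs standalone:

```python
def explain_alert(
    alert_id: str,
    counterparty_name: str,
    counterparty_country: str,
    amount_usd: float,
) -> str:
    # Abridged body from the server.py version, minus @mcp.prompt().
    return (
        f"I'm a new investigator. Please explain alert {alert_id} in plain language. "
        f"The alert involves a {amount_usd:,.2f} USD transaction with counterparty "
        f"'{counterparty_name}' in {counterparty_country}."
    )

rendered = explain_alert("ALERT-0042", "Acme Trading Ltd", "Cyprus", 9500.0)
assert "9,500.00 USD" in rendered          # the :,.2f format spec adds the comma
assert "'Acme Trading Ltd' in Cyprus" in rendered
```

The same trick works for draft_sar_narrative: call it directly and assert the case ID and resource URIs appear in the rendered text.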

The pattern that matters

Notice how the SAR narrative prompt tells the agent which tools and resources to use. It's chaining: prompt → reach for resources (case file, template) → reach for tools (screening) → produce structured output. The prompt is the choreography. You're shipping a workflow as an artifact, not a string.

2 · Test with the inspector · ~3 min

shell
mcp dev server.py

In the inspector, switch to the Prompts tab. You'll see both prompts with their parameter schemas. Invoke draft_sar_narrative with:

draft_sar_narrative args
{
  "case_id": "CASE-2026-00012",
  "suspicious_activity_type": "structuring (sub-CTR-threshold splitting)",
  "time_window": "April 18-21, 2026"
}

You'll see the rendered prompt — the full string with your arguments substituted in. This is what the model receives. Confirming the prompt renders correctly is your sanity check before letting Claude run it.

3 · Use as a slash command in Claude · ~5 min

Restart Claude Desktop. In a new chat, type / in the message box. You should see your prompts appear in the slash-command picker as /draft_sar_narrative and /explain_alert.

Pick /draft_sar_narrative. Claude Desktop will prompt you for the arguments. Fill them in:

  • case_id: CASE-2026-00012
  • suspicious_activity_type: structuring
  • time_window: April 18-21, 2026

Send. Watch Claude execute the workflow: it fetches the case file resource, fetches the SAR template resource, runs sanctions/jurisdiction screening tools as warranted, and produces a structured narrative with explicit citations.

What you've built

You've shipped a one-click compliance workflow. An analyst who has never seen the system can type /draft_sar_narrative, fill in three fields, and get a structured, cited, audit-ready draft. The prompt encodes the policy. The resources provide the data. The tools do the work. The model orchestrates. That is what "AI agent for compliance" looks like in practice.

4 · Add a second, more advanced prompt · ~7 min

Let's add an evaluator prompt — one that reviews an AI-generated draft against compliance criteria. This is the "evaluator-optimizer" pattern from Anthropic's Building Effective Agents essay, shipped as an MCP prompt.

server.py
@mcp.prompt()
def review_sar_draft(draft_text: str) -> str:
    """Critique a draft SAR narrative against the firm's quality rubric.

    Returns a structured review pointing out missing sections, speculative language,
    uncited claims, and tone issues. Does NOT rewrite — the human decides what to fix.
    """
    return (
        f"You are a compliance lead reviewing a junior analyst's draft SAR narrative. "
        f"Apply the firm's rubric strictly. Do NOT rewrite the draft — produce a "
        f"structured critique that the analyst can act on.\n\n"
        f"--- DRAFT ---\n{draft_text}\n--- END DRAFT ---\n\n"
        f"Score each of the five criteria on a 1-5 scale and explain:\n"
        f"  1. WHO/WHAT/WHERE/WHEN/WHY structure — are all five present?\n"
        f"  2. Factuality — every claim backed by case data, no speculation?\n"
        f"  3. Citation — specific transactions cited by date and amount?\n"
        f"  4. Tone — factual, non-conclusory, regulator-appropriate?\n"
        f"  5. Length — 5-15 sentences, specific not comprehensive?\n\n"
        f"Then produce a 'top 3 things to fix' list, ordered by importance.\n"
        f"Conclude with a single line: APPROVED | REVISE | REJECT."
    )

Restart, then in Claude Desktop:

  1. Run /draft_sar_narrative to generate a draft.
  2. Copy the resulting narrative.
  3. Run /review_sar_draft and paste the draft as the argument.
  4. Read the critique. Notice it doesn't rewrite — it points out gaps.

You've now shipped a two-step workflow (draft → review) as two slash commands. Add a third (finalize_sar that takes draft + approved critique → final version) and you have the evaluator-optimizer loop fully wired.
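One possible sketch of that third prompt, shown undecorated so it runs as plain Python (in server.py you would add @mcp.prompt() above it, like the others); the wording is illustrative, not the firm's actual template:

```python
def finalize_sar(draft_text: str, critique_text: str) -> str:
    """Render the final SAR narrative from an approved draft plus its critique.

    Sketch only: decorate with @mcp.prompt() when adding to server.py.
    """
    return (
        "You are a senior compliance analyst producing the FINAL SAR narrative. "
        "Apply every fix from the approved critique to the draft. Do not introduce "
        "new claims; only resolve the issues the critique identifies.\n\n"
        f"--- DRAFT ---\n{draft_text}\n--- END DRAFT ---\n\n"
        f"--- APPROVED CRITIQUE ---\n{critique_text}\n--- END CRITIQUE ---\n\n"
        "Output the final narrative only, followed by a one-line change summary."
    )

rendered = finalize_sar("Draft body here.", "Fix citation on item 2.")
```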

What you can now say in the interview

Sample answer — ~60 seconds

"Beyond tools and resources, I added prompts as a server-side primitive. Prompts in MCP are parameterized, server-published templates that the host surfaces as slash commands — the user invokes them with structured arguments, and the rendered prompt becomes the model's user turn. For compliance that matters more than it sounds. Prompts let you ship policy as a versioned artifact: 'how we draft a SAR' lives in the server, gets code-reviewed, and produces consistent output across analysts. I wired up draft-and-review as two prompts chained together, which is just the evaluator-optimizer agent pattern materialized as a workflow non-engineers can invoke. The audit log captures 'prompt name + version + arguments' rather than 'some string the user typed,' which is a much cleaner story for a regulator."