Compliance Context
Two AI engineers will interview you, but they work inside the Compliance org. You don't need depth — you need not to flinch when these terms come up.
The big acronyms
| Acronym | Meaning | What it means for AI |
|---|---|---|
| AML | Anti-Money Laundering | Detect/prevent illicit money flows. Big AI surface: alert triage, transaction monitoring, narrative drafting. |
| CFT | Countering the Financing of Terrorism | Adjacent to AML. Same tooling, different threat. |
| KYC | Know Your Customer | Onboarding identity verification. Doc analysis, sanctions checks, EDD. |
| CDD / EDD | Customer Due Diligence / Enhanced Due Diligence | Tiered scrutiny. EDD = high-risk customers. AI helps draft EDD reports. |
| SAR | Suspicious Activity Report | Filed with regulators when a transaction looks suspicious. Highest-stakes AI artifact. |
| STR | Suspicious Transaction Report | Some jurisdictions' equivalent of SAR. |
| PEP | Politically Exposed Person | Higher-risk class. EDD required. |
| OFAC | US Office of Foreign Assets Control | Maintains the US sanctions list. |
| BSA | Bank Secrecy Act | Foundational US AML law. |
| FinCEN | Financial Crimes Enforcement Network | US Treasury bureau; receives SAR filings and publishes advisories. |
| FATF | Financial Action Task Force | International standard-setter for AML/CFT. |
| FCA | Financial Conduct Authority | UK financial regulator. |
| MAS | Monetary Authority of Singapore | Singapore regulator. |
| BaFin | Bundesanstalt für Finanzdienstleistungsaufsicht | German federal financial regulator. |
| FINTRAC | Financial Transactions and Reports Analysis Centre of Canada | Canadian AML regulator. |
| MiCA | Markets in Crypto-Assets | EU crypto-specific regulation. Highly relevant to crypto exchanges. |
| MiFID II | Markets in Financial Instruments Directive II | Trading-side reporting; capital markets compliance. |
| GDPR | General Data Protection Regulation | EU data-protection law. Constraint on what data can flow where. |
| TM | Transaction Monitoring | The system that generates alerts. |
How Compliance work actually flows
A simplified mental model:
- Onboarding (KYC): customer signs up. Identity verified, screened against sanctions/PEP lists, risk-rated. Tier 1 / 2 / 3 customer.
- Transaction monitoring: every transaction is scored by rules + ML for anomalies. Suspicious ones generate alerts.
- Alert triage (L1): an analyst reviews each alert. ~80%+ are false positives — dismissed. The rest escalate.
- Investigation (L2): an investigator opens a case, gathers evidence, writes a case narrative, decides outcome.
- SAR filing: if the investigator decides activity is suspicious, a SAR is drafted (to specific regulator format), reviewed, filed with FinCEN/equivalent.
- Periodic review: high-risk customers get scheduled re-review (EDD refresh).
- Regulatory updates: rules change. Compliance leads read, summarize, update controls and policies.
- Audit / examination: regulators show up periodically. Compliance produces evidence of what was done and why.
Each step has AI leverage opportunities (and risks). The JD specifically calls out: alert pre-screening, case narrative generation, regulatory change summaries, EDD drafting, audit-ready documentation. Map those onto the flow above.
Key concepts and where AI fits
Alerts and false positives
Modern transaction monitoring produces a lot of alerts. Most are FPs. AI's wedge: pre-screen alerts so humans only see ones with real signal. Risk: you suppress a true positive. So pre-screening must have very high recall on must-catch cases; precision is secondary. (See 07-evals.)
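The recall-first constraint can be made concrete. A minimal sketch (pure Python; the alert scores and labels are illustrative) that picks the highest auto-close cutoff still meeting a target recall on labeled must-catch alerts:

```python
def recall_first_threshold(scored_alerts, target_recall=0.99):
    """Return the highest score cutoff that still catches >= target_recall
    of labeled true positives. Alerts scoring below the cutoff may be
    auto-closed; everything at or above it goes to a human."""
    positives = sorted(score for score, label in scored_alerts if label)
    if not positives:
        return 0.0  # no labeled positives: suppress nothing
    # We may miss at most (1 - target_recall) of the positives, so the
    # cutoff sits at the k-th lowest positive score.
    k = min(int(len(positives) * (1 - target_recall)), len(positives) - 1)
    return positives[k]

alerts = [(0.05, False), (0.10, False), (0.20, True),
          (0.30, False), (0.60, True), (0.90, True)]
cutoff = recall_first_threshold(alerts, target_recall=1.0)  # -> 0.2
auto_closed = [score for score, _ in alerts if score < cutoff]
```

Note the design choice: precision is whatever falls out; recall is the gate, set from the labeled history and re-validated every time the model or rules change.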
Case narrative
A free-text document an investigator writes describing a case: what was alerted, what evidence gathered, what conclusion reached. AI's wedge: draft the narrative from structured case data. Investigator edits and signs. Saves time, preserves judgment.
EDD report
Detailed risk write-up on a high-risk customer. Pulls together: KYC info, transaction patterns, sanctions/PEP findings, news (adverse media), source of wealth. AI's wedge: assemble + draft. Risk: the model hallucinates a source of wealth from its training data. Defense: retrieval-grounded drafting with citations to authoritative records; no recall-from-training.
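One mechanical enforcement of "citations to authoritative records" is a post-draft check that rejects any uncited claim. A sketch, assuming a `[SOURCE:<id>]` citation marker (the marker format is hypothetical; use whatever your retrieval layer emits):

```python
import re

def uncited_sentences(draft: str) -> list[str]:
    """Return sentences in an EDD draft that lack a [SOURCE:<id>] citation.
    Any hit blocks the draft from reaching the reviewer until fixed."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", draft) if s.strip()]
    return [s for s in sentences if not re.search(r"\[SOURCE:[\w-]+\]", s)]

draft = ("Customer's declared source of wealth is a 2019 business sale "
         "[SOURCE:kyc-114]. No adverse media found in the last 24 months.")
# the second sentence is flagged: it asserts a fact with no citation
```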
Sanctions screening
Match a name/entity against sanctions lists (OFAC SDN, EU consolidated list, UN, others). Hard problem because of: aliases, transliteration (Cyrillic→Latin), partial matches, ambiguous matches. ML and string-matching tools are well established here. AI's wedge: explain matches in natural language, summarize hit packages for review.
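To have a feel for why name screening is fuzzy rather than exact, here is a toy ratio-based matcher using stdlib `difflib`. Real screening engines also handle transliteration tables, aliases, and token reordering, which this sketch does not; the list entries are invented:

```python
from difflib import SequenceMatcher

def screen_name(candidate: str, sanctions_list: list[str], threshold: float = 0.85):
    """Return potential list hits at or above a similarity threshold,
    best match first."""
    norm = candidate.lower().strip()
    hits = []
    for entry in sanctions_list:
        score = SequenceMatcher(None, norm, entry.lower().strip()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return sorted(hits, key=lambda hit: -hit[1])

listed = ["Ivan Petrov", "Acme Trading LLC"]
screen_name("Iwan Petrov", listed)  # -> [("Ivan Petrov", 0.91)]
```

The "Iwan"/"Ivan" pair is the classic transliteration ambiguity: an exact-match screen misses it, which is exactly why thresholds and human review of hit packages exist.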
Adverse media
News searches for negative coverage of customers ("X arrested," "X under investigation"). LLMs are great at this — read articles, classify relevance, summarize. Risk: hallucinated articles or misattributed claims. Defense: cite source URL, retrieve article text fresh.
Regulatory change management
New regulation is published. Compliance has to: read it, identify what changes for the firm, propose updates, train teams. AI's wedge: ingest regulation, draft impact analysis. Compliance lead reviews. High value, low risk (it's a draft, not an action).
Audit response
Regulator asks "show me how you handled X cases over the last 6 months." Today that's painful manual work. AI's wedge: produce summary + supporting documentation from logs. Risk: misrepresent the actual evidence. Defense: cite specific log/case IDs, never paraphrase.
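The "cite, never paraphrase" defense can be enforced structurally: the exam package only ever contains verbatim stored records, keyed and hashed. A minimal sketch; the case-store layout is an assumption for illustration:

```python
import hashlib
import json

def audit_package(case_ids: list[str], case_store: dict[str, dict]) -> dict:
    """Assemble an exam response from stored case records, verbatim.
    Each record carries a content hash so reviewers can confirm nothing
    was paraphrased or altered; a missing case raises rather than
    letting anything substitute a summary."""
    package = {}
    for cid in case_ids:
        if cid not in case_store:
            raise KeyError(f"case {cid} not found; never substitute a summary")
        record = case_store[cid]
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        package[cid] = {"record": record, "sha256": digest}
    return package
```

Any AI-generated summary then sits alongside this package as a separate, clearly labeled layer, with the hashes tying every claim back to a specific case ID.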
Risk-tiering compliance AI workloads
A useful framework for the interview. Walk through tiers when discussing design:
| Tier | Examples | Gates |
|---|---|---|
| Low | Drafting an internal regulatory summary, pre-formatting an alert for a human, RAG over policies | LLM alone, lightweight review, eval-monitored |
| Medium | Drafting a case narrative, drafting an EDD section, classifying alert severity | Human review pre-action, structured citations, full audit trail |
| High | Recommending SAR filing, freezing accounts, communicating with customers about KYC | Multiple human approvals, scoped tools, kill switch, mandatory eval, MRM sign-off |
| Forbidden | Autonomous decisions on filings, autonomous account actions, autonomous customer comms | Don't build it. Period. |
If asked "would you let the agent do X?" — risk-tier the X first, then answer.
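The tiering table translates directly into a policy lookup. A hypothetical helper mirroring the tiers above (task names and gate lists are illustrative, not a product spec):

```python
TIER_GATES = {
    "low":    ["lightweight review", "eval monitoring"],
    "medium": ["human review pre-action", "structured citations", "full audit trail"],
    "high":   ["multiple approvals", "scoped tools", "kill switch", "MRM sign-off"],
}

TASK_TIERS = {
    "regulatory_summary_draft": "low",
    "case_narrative_draft": "medium",
    "sar_filing_recommendation": "high",
    "autonomous_sar_filing": "forbidden",
}

def gates_for(task: str) -> list[str]:
    """Return the control gates for a task; refuse forbidden tasks outright.
    Unknown tasks default to the strictest buildable tier."""
    tier = TASK_TIERS.get(task, "high")
    if tier == "forbidden":
        raise ValueError(f"{task}: do not build")
    return TIER_GATES[tier]
```

The useful interview point is the default: anything not explicitly classified inherits high-tier gates, never low.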
Know-your-jurisdictions (lightweight)
You don't need to know each rule. You need to know: jurisdictions differ, and the architecture must respect that.
- US: BSA/FinCEN, OFAC sanctions, NYDFS for NY-licensed firms, state-level money transmission.
- EU: the AMLD directives (being superseded by the AMLR regulation and the new AMLA authority), MiCA for crypto, GDPR for data.
- UK: FCA, separate from EU post-Brexit, broadly aligned but distinct.
- APAC: MAS (Singapore), JFSA (Japan), each with its own framework.
What this means for AI: data residency, model deployment region, data classification. EU customer data going to a US-hosted model is a flag. Architectures that route by jurisdiction are common.
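Routing by jurisdiction often reduces to a residency policy table consulted before any model call. A sketch; the region names and policy mapping are assumptions for illustration:

```python
RESIDENCY_POLICY = {
    "EU": {"eu-west"},             # GDPR: EU customer data stays in-region
    "UK": {"eu-west", "uk-south"},
    "US": {"us-east", "us-west"},
    "SG": {"ap-southeast"},
}

def pick_deployment(customer_jurisdiction: str, available: list[str]) -> str:
    """Return the first available model deployment allowed for this
    customer's jurisdiction; fail closed if none is compliant."""
    allowed = RESIDENCY_POLICY.get(customer_jurisdiction)
    if allowed is None:
        raise ValueError(f"no residency policy for {customer_jurisdiction}")
    for region in available:
        if region in allowed:
            return region
    raise RuntimeError(f"no compliant deployment for {customer_jurisdiction}")

pick_deployment("EU", ["us-east", "eu-west"])  # -> "eu-west"
```

Failing closed is the point: a missing policy or no compliant region stops the request rather than silently falling back to the nearest model.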
The "thresholds and lists" reality
Compliance leans heavily on lists and thresholds:
- Sanctions lists (updated frequently — your retrieval index must keep up).
- PEP lists (commercial vendors maintain these).
- Adverse media corpora.
- Internal blocked lists.
- Risk-rating thresholds ("transactions above $X to country Y").
- Periodic review schedules (Tier 3 customer = annual EDD).
When AI is involved, the lists and thresholds are inputs the agent must consult — not things it should infer. Agent calls a tool that returns "is X on list Y as of today" — never relies on its training-time knowledge.
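A sketch of the "consult the list, never infer" tool: a point-in-time membership check that returns provenance alongside the answer, so it can be cited in a case file. The list contents and the snapshotting scheme are illustrative:

```python
from datetime import date

# Snapshots keyed by (list name, as-of date); a real system would back
# this with a versioned store fed by the list vendor's update feed.
LIST_SNAPSHOTS = {
    ("ofac_sdn", date(2024, 6, 1)): {"ACME TRADING LLC"},
}

def is_on_list(entity: str, list_name: str, as_of: date) -> dict:
    """Answer 'is X on list Y as of this date' from stored snapshots only;
    raise if no snapshot exists rather than guessing."""
    snapshot = LIST_SNAPSHOTS.get((list_name, as_of))
    if snapshot is None:
        raise LookupError(f"no {list_name} snapshot for {as_of}")
    return {
        "entity": entity,
        "list": list_name,
        "as_of": as_of.isoformat(),
        "hit": entity.upper() in snapshot,
    }
```

The agent only ever sees this tool's output; membership questions never get answered from model memory, and the `as_of` field makes every answer reproducible at audit time.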
Concepts that signal "AI-aware compliance maturity"
Drop these in conversation if relevant:
- Risk-based approach: regulators expect AI controls scaled to risk. Not all controls equal.
- Explainability requirement: in many jurisdictions (notably under the EU AI Act), high-risk uses must be explainable to affected individuals.
- Defense in depth: multiple overlapping controls. AI is one layer, not the only layer.
- Four-eyes principle: significant decisions require two reviewers. AI doesn't replace either pair of eyes — but the second eyes can review the AI's draft.
- Segregation of duties: the analyst who triages can't be the same human who approves the SAR. AI agents need similar role-based access controls.
- Materiality / proportionality: don't deploy heavyweight controls on low-stakes flows.
Crypto-exchange-specific wrinkles
If the role is at a crypto exchange (or other firm with on-chain exposure), additional patterns apply:
- Wallet screening: chain-analytics tools (Chainalysis, TRM, Elliptic) score addresses for risk (mixer exposure, sanctioned address proximity, hack proceeds). Often called via API; perfect for MCP-fronted tools.
- Travel Rule: cross-VASP transfers above certain thresholds must carry originator/beneficiary information, exchanged over rails such as TRP or Notabene.
- Mixers / privacy tools: certain interactions raise risk scores automatically.
- DeFi exposure: harder to attribute. New ground for compliance and AI.
- MiCA compliance: the EU's crypto-specific regime, phased in from mid-2024 with transitional arrangements running into 2026.
If asked about crypto compliance, the AI angle is: ingest chain-analytics data + transaction graph + KYC + adverse media, draft a risk write-up. The hard part isn't the AI — it's the data fusion and ground-truth labeling for evals.
Communicating with non-technical stakeholders
The JD calls this out. Practice this exact translation:
| Technical term | Plain English |
|---|---|
| "Eval" | "How we measure whether the AI is doing a good job before and during production." |
| "Hallucination" | "When the AI states something confidently that isn't actually true — like making up a case citation. We design specifically against that." |
| "Tool use" | "The AI is given a small set of approved actions — like 'look up this customer' or 'fetch the latest regulation' — and can only do those, with a record of every call." |
| "Human-in-the-loop" | "Every consequential output is reviewed and approved by a human before any external action." |
| "Audit trail" | "Every input the AI saw, every step it took, every output it produced, every human who reviewed — all of it logged immutably and retrievable." |
| "Drift" | "Quality of AI output can degrade over time as conditions change. We monitor specifically for that and alert." |
Practice saying these out loud. They'll come up.