Section A · Orient · Read first

Start Here

Interview prep for AI engineering and solutions-architect roles applying agentic AI in compliance and regulated finance.

The role, in plain English

This guide is for engineers preparing for roles that build agentic AI inside Compliance, AML/KYC, or RegTech teams — typically at fintechs, crypto exchanges, or any firm with serious regulatory exposure. Job titles vary ("AI Engineer," "Solutions Architect — Compliance," "AI Agents Architect," "Applied AI Engineer — Risk"), but the work is recognizable:

Despite "architect" in some titles, this is fundamentally an applied AI engineering role embedded in a Compliance org. You build agentic workflows that automate real compliance work (alert pre-screening, case narratives, regulatory change summaries, audit docs) using:

  • n8n (low-code workflow automation)
  • Claude API / Anthropic SDK (the LLM)
  • Python (glue code, analytics, integrations)
  • MCP (Model Context Protocol — how agents access tools/data)

The work has unusually high regulatory stakes. Wrong AI output in compliance isn't "the model said something silly" — it can mean a missed sanctions hit, a botched SAR (Suspicious Activity Report), or a regulator asking why an AI made a decision and you can't show them. That's why interviewers focus on harnesses, evals, error handling, and audit trails — these are the load-bearing concepts that separate "demo agent" from "agent regulators will tolerate."

What the rounds typically test

Loops for these roles usually include two AI engineering rounds (deep technical), often paired with general coding (Array / Hash Table / String / Sorting / DFS, plus problems like Longest Common Prefix, Valid Anagram, Subarray Sum Equals K) and AI-tech topics (data pipelines, model deployment on AWS/GCP, RAG/LLM applied).

So expect a mix of:

  1. Conceptual AI engineering — harness, evals, MCP, error handling, audit trails.
  2. Live coding / DSA — common patterns, walked through in real time. Bring Python.
  3. Applied AI architecture — given a compliance scenario, design a system end to end.
  4. Cloud / MLOps fluency — how you would deploy this on AWS/GCP, and what the data pipeline would look like.
  5. Behavioral / motivation — why this role, how you handle ambiguity, when you've pushed back.

This folder covers all five.

The folder, in reading order

The file numbering follows the order you should read them in. Five sections:

Section A — Orient (read first)

File · Why
01-the-role · Decode what the role actually involves, and what the stack means
02-positioning-from-scratch · Mindset before content: how to interview honestly when you don't have deep production AI experience yet

Section B — AI engineering concepts (the technical core)

File · Why
03-ai-development · Foundation: models, prompts, tool use, structured output, prompt caching, prompt injection. Read this first within the section
04-mcp-deep-dive · MCP protocol: primitives (tools, resources, prompts, sampling), transports, auth, tool design
04-mcp-build-guide (hands-on) · Build a working MCP server in 45 minutes, plus four stretch tutorials
05-harnesses-and-agents · "Harness" disambiguation, the agent loop, Anthropic's "Building Effective Agents" patterns, frameworks
06-rag-applied · Production RAG: chunking, hybrid search, reranking, agentic RAG, Ragas evals
07-evals · Eval frameworks, LLM-as-judge, eval-driven development, agent evals
08-error-handling-ai · AI-specific failure modes, retry/repair patterns, idempotency, fail-safe defaults
09-audit-trails · Event schemas, versioning, GDPR vs. retention, model risk management

Section C — Coding (DSA)

File · Why
10-coding-fundamentals · Pattern menu: arrays, hashing, strings, sorting, DFS/BFS, two pointers, sliding window, prefix sum, Big-O
11-coding-problems · Worked solutions: Longest Common Prefix, Valid Anagram, Subarray Sum Equals K, plus 8 common companions
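As a taste of what 11 covers, here is the standard prefix-sum + hashmap approach to Subarray Sum Equals K, one of the three named problems (a sketch of the well-known O(n) technique, not a substitute for working it on paper yourself):

```python
from collections import defaultdict

def subarray_sum(nums: list[int], k: int) -> int:
    """Count subarrays summing to k in O(n).

    If prefix[j] - prefix[i] == k, the subarray (i, j] sums to k, so at
    each index we ask how many earlier prefix sums equal current_sum - k.
    """
    seen = defaultdict(int)
    seen[0] = 1          # the empty prefix
    total, count = 0, 0
    for x in nums:
        total += x
        count += seen[total - k]  # subarrays ending here that sum to k
        seen[total] += 1
    return count

print(subarray_sum([1, 1, 1], 2))  # → 2  (the two [1, 1] windows)
```

The same "store running prefix sums in a hashmap" move reappears across many of the companion problems, which is why it earns a spot on the pattern menu.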

Section D — Production / cloud

File · Why
12-data-pipelines · ETL/ELT, batch vs. streaming, Airflow/Spark/Kafka, RAG indexing pipelines
13-model-deployment · AWS (Bedrock, SageMaker, Lambda) and GCP (Vertex AI, Cloud Run) deployment patterns

Section E — Reference + execution

File · Why
14-compliance-context · AML / KYC / SAR / EDD vocabulary so you don't sound lost
15-interview-questions · ~28 practice Q&As; drill these out loud
16-day-of · Tactics, traps, questions to ask them; re-read the morning of

Suggested study schedule

If you have 7+ days
  • Day 1: 01, 02 (orient), then 03 (AI dev fundamentals — the foundation everything else builds on).
  • Day 2: 04 (MCP) + 05 (Harnesses).
  • Day 3: 06 (RAG) + 07 (Evals).
  • Day 4: 08 (Error handling) + 09 (Audit trails).
  • Day 5: 10 + 11 (Coding). Solve the named problems on paper without peeking.
  • Day 6: 12 + 13 (Pipelines + deployment). Skim 14 for vocab.
  • Day 7: Drill 15 out loud. Read 16. Sleep.

If you have 2-3 days

01, 02, 03, 04, 05, 07, 08, 11 (do the three named problems), 15, 16. Skim everything else.

If you have < 24 hours

01, 02, 11 (the three named problems), 15, 16. Skim 04, 05, 07 headings only.

Two practical things to do before interview day

Reading is cheaper than building, but building sticks. If you can find an evening or two:

  1. Build a tiny MCP server. ~50 lines of Python or TypeScript. One tool that calls a public API. Run it locally against Claude Desktop. The protocol's lifecycle becomes muscle memory in an hour. Far better than rereading 04. (Use the build guide.)
  2. Build a tiny RAG demo. 10-20 documents, naive chunking, embed, top-K retrieve, generate. Then add hybrid search or reranking and watch retrieval quality change. Internalizes everything in 06-rag-applied.
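If you want the MCP lifecycle in your head before touching the SDK, the core message flow from item 1 can be sketched in plain Python. This is a stdlib-only sketch of the JSON-RPC message shapes (initialize → tools/list → tools/call); the server name, tool name, and stub logic are all hypothetical, and a real exercise should use the official MCP SDK as the build guide describes:

```python
import json

def lookup_sanctions(entity: str) -> str:
    # A real tool would call a public API here; stubbed for illustration.
    return f"No sanctions hits found for '{entity}' (stub response)."

TOOLS = [{
    "name": "lookup_sanctions",
    "description": "Check an entity name against a (stubbed) sanctions list.",
    "inputSchema": {
        "type": "object",
        "properties": {"entity": {"type": "string"}},
        "required": ["entity"],
    },
}]

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request using MCP's core method names."""
    method, params = request["method"], request.get("params", {})
    if method == "initialize":
        result = {
            "protocolVersion": "2024-11-05",
            "capabilities": {"tools": {}},
            "serverInfo": {"name": "toy-compliance-server", "version": "0.1"},
        }
    elif method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        text = lookup_sanctions(params["arguments"]["entity"])
        result = {"content": [{"type": "text", "text": text}], "isError": False}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": f"Unknown method: {method}"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

if __name__ == "__main__":
    # Walk the lifecycle in order: initialize -> tools/list -> tools/call.
    for i, (m, p) in enumerate([
        ("initialize", {}),
        ("tools/list", {}),
        ("tools/call", {"name": "lookup_sanctions",
                        "arguments": {"entity": "Acme Ltd"}}),
    ]):
        print(json.dumps(handle({"jsonrpc": "2.0", "id": i,
                                 "method": m, "params": p})))
```

Once these three shapes feel familiar, the SDK's decorators map onto them almost one-to-one.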
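The retrieve step from item 2 can likewise be sketched without an embedding model at all. This toy version substitutes a bag-of-words count vector for real embeddings, which keeps the top-K cosine-similarity retrieval logic identical; the documents and query are made up for illustration:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. A real demo would call
    # an embedding model; the retrieval mechanics below are the same.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "SAR filings must be submitted within 30 days of detection.",
    "KYC onboarding requires identity verification documents.",
    "Transaction monitoring alerts are triaged by risk score.",
]
print(top_k("when must a SAR be filed", docs, k=1))
```

Swap `embed` for a real model, feed the top-K chunks into a generation prompt, and you have the naive baseline; hybrid search and reranking then slot in as replacements for `top_k`.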

These two evenings would close more of your gap than the same time spent rereading.

The single most important reframe

Many candidates feel underqualified going into senior AI engineering interviews — the field moves fast, the vocabulary is dense, and "real" production experience is unevenly distributed. Two things matter:

  1. You're learning the precise vocabulary AI engineers use; closing that vocabulary gap is exactly what this folder is for.
  2. You're being honest about your gaps, not bluffing. That posture, done right, is more persuasive than fake seniority. Read 02-positioning-from-scratch first; the entire interview goes better when your inner posture is "honest, prepared, fast learner" rather than "trying to sound senior."

When you don't know something

Say so cleanly: "I haven't worked with X. My closest reference point is Y. Want me to reason about X from first principles?"

Interviewers respect that far more than bluffing.

What "winning" looks like in these rounds

You're not going to out-AI-engineer two AI engineers on their home turf. Winning is:

  • Vocabulary fluency — using the right terms in the right places.
  • Sound reasoning — given a novel problem, you arrive at a defensible architecture by thinking, not by recall.
  • Failure-mode instinct — you reach for "what could go wrong" before "what's cool."
  • Compliance-aware judgment — your default is human-in-the-loop, audit logs, and risk tiering before "agent does it autonomously."
  • Honesty at the edge of what you know — and graceful redirection.
  • Live learning — when they teach you a concept mid-interview, you visibly absorb and use it later.

You're closer than you think. Let's go.