Practice Problems — SQL & PM Cases
Ten problems mixing SQL, metrics design, platform spec, experiment design, sizing, and policy. Read the prompt, talk through your version on a timer, then compare against the worked solution where one is given.
How to drill
- Read the prompt. Set a 10-15 minute timer.
- Talk through your approach aloud — population, primitive, edge case, metric.
- Write the answer (SQL on paper or a notebook; structured outline for cases).
- Compare against the worked solution: shape, not exact words. Mark "practiced" when you've internalized it.
P1 · Write the SQL for D7 activation by source
Prompt. Given onboarding.applicants(applicant_id, created_at, acquisition_source, region) and onboarding.events(applicant_id, event_type, occurred_at), where event_type='first_deposit' marks activation, write SQL for the D7 activation rate by acquisition source for cohorts that started between 2026-01-01 and yesterday. Be careful about young cohorts.
Worked solution
Talking points: exclude the last 7 days of cohorts because they haven't had a chance to convert; LEFT JOIN so non-converters stay in the denominator; NULLIF guards against divide-by-zero in tiny segments.
WITH cohort AS (
  SELECT applicant_id, DATE(created_at) AS started_on, acquisition_source
  FROM onboarding.applicants
  WHERE created_at >= '2026-01-01'
    AND created_at < CURRENT_DATE - INTERVAL '7 days'  -- exclude young cohorts
),
first_act AS (
  SELECT applicant_id, MIN(occurred_at) AS first_deposit_at
  FROM onboarding.events
  WHERE event_type = 'first_deposit'
  GROUP BY applicant_id
)
SELECT
  c.acquisition_source,
  COUNT(*) AS cohort_n,
  COUNT(CASE WHEN f.first_deposit_at <= c.started_on + INTERVAL '7 days'
             THEN 1 END) AS activated_7d,
  ROUND(100.0 * COUNT(CASE WHEN f.first_deposit_at <= c.started_on + INTERVAL '7 days'
                           THEN 1 END) / NULLIF(COUNT(*), 0), 2) AS d7_pct
FROM cohort c
LEFT JOIN first_act f USING (applicant_id)
GROUP BY 1
ORDER BY cohort_n DESC;
P2 · Design a North Star for the onboarding platform
Prompt. Propose a north-star metric for the onboarding platform. Defend it, then critique it, and name two paired metrics and two guardrails.
Worked solution
Proposal: "Median time-to-launch a new onboarding configuration (new market, new tier, or new vendor), measured from kickoff to first verified customer."
Defense: directly measures the team-as-customer outcome the platform exists to enable; trends are interpretable; the product-launch dependency makes it visible to leadership.
Critique: not directly tied to revenue; can be gamed by trivial launches; doesn't capture the quality of the resulting flow.
Paired: (a) overall verified-customer activation rate by segment/region; (b) cost per successful verification.
Guardrails: (a) sanctions hit rate must not decline; (b) customer complaint volume must not rise; (c) audit-log completeness stays at 100%.
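The proposed north star is computable from launch records alone. A minimal sketch, assuming a hypothetical list of (config name, kickoff date, first-verified-customer date) records; the names and dates are illustrative, not real data:

```python
from datetime import date
from statistics import median

# Hypothetical launch records: (config_name, kickoff, first_verified_customer).
launches = [
    ("DE tier-2",      date(2026, 1, 5),  date(2026, 2, 9)),
    ("BR consumer",    date(2026, 1, 12), date(2026, 3, 1)),
    ("Vendor B pilot", date(2026, 2, 2),  date(2026, 2, 20)),
]

def median_time_to_launch(rows):
    """Median days from kickoff to first verified customer."""
    days = [(done - kickoff).days for _, kickoff, done in rows]
    return median(days)

print(median_time_to_launch(launches))
```

A median (not a mean) keeps one pathological launch from dominating the trend, which matches the "interpretable trends" defense above.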
P3 · Spec a vendor-abstraction layer
Prompt. In 90 seconds, sketch the spec for an IDV vendor abstraction layer. Cover: interface shape, what's normalized, what's vendor-specific, audit posture, and rollout.
Worked solution
Interface shape: create_session, get_decision, fetch_evidence, cancel. Decisions are normalized to a common status across vendors; the raw vendor payload is preserved in a signals dict for audit. Capability flags (e.g. supports_age_estimation) so downstream can opt into vendor-unique features additively.
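The interface shape can be sketched as a typed protocol. This is a hedged sketch, not the real layer: Decision, VerificationResult, and IdvVendor are illustrative names built from the method list above.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Protocol

class Decision(Enum):
    """Normalized decision vocabulary shared across all vendors."""
    APPROVED = "approved"
    DECLINED = "declined"
    REVIEW = "review"
    PENDING = "pending"

@dataclass
class VerificationResult:
    decision: Decision                # normalized field downstream code relies on
    vendor: str
    raw_signals: dict = field(default_factory=dict)  # vendor payload kept verbatim for audit

class IdvVendor(Protocol):
    """One adapter per vendor; callers only see this surface."""
    capabilities: frozenset           # e.g. {"supports_age_estimation"}

    def create_session(self, applicant_id: str) -> str: ...
    def get_decision(self, session_id: str) -> VerificationResult: ...
    def fetch_evidence(self, session_id: str) -> dict: ...
    def cancel(self, session_id: str) -> None: ...
```

Keeping raw_signals separate from the normalized decision is the audit posture: downstream logic reads only the normalized field, while compliance can always replay exactly what the vendor said.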
P4 · Design an experiment for a new IDV vendor
Prompt. You're evaluating vendor B against incumbent A for German consumer KYC. Design the experiment.
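Whatever design you land on, power math decides whether the pilot can read a vendor difference at all. A hedged sketch of a per-arm sample-size check for comparing pass rates, using the standard two-proportion normal approximation with hardcoded z-values; the 70% baseline and 3pp effect are illustrative numbers, not real vendor data.

```python
import math

def sample_size_per_arm(p_base: float, mde: float) -> int:
    """Per-arm n for a two-sided two-proportion test, alpha=0.05, power=0.80.

    Normal approximation; z_alpha/2 = 1.96 and z_beta = 0.84 are hardcoded.
    """
    p1, p2 = p_base, p_base + mde
    p_bar = (p1 + p2) / 2
    n = ((1.96 * math.sqrt(2 * p_bar * (1 - p_bar))
          + 0.84 * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde ** 2
    return math.ceil(n)

# e.g. incumbent pass rate ~70%, want to detect a 3pp vendor difference
print(sample_size_per_arm(0.70, 0.03))
```

If German KYC volume can't fill both arms in a few weeks, that argues for a larger minimum detectable effect or a longer pilot, which is itself part of the experiment design answer.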
P5 · Size the impact of adding Brazilian KYC
Prompt. Leadership asks: what's the impact of launching Brazil? Walk through a top-down + bottom-up estimate of incremental verified customers and revenue in year 1.
Worked solution
Top-down: Brazilian crypto-curious market ~ X million (publicly cited estimates from Chainalysis, Statista — flag the source). Realistic year-1 share for a foreign exchange entering: 0.5-2%. Net new sign-ups: a range.
Bottom-up: from comparable market launches (a similar-sized LATAM country): N sign-ups in month 1, decaying to a steady-state N/month. Annualize.
Funnel application: apply our consumer funnel rates (start → verified → activated) — new markets usually convert below baseline initially; discount by 20-30% for the first quarter.
Revenue: activated customers × ARPU for that segment × retention curve.
Costs: per-verification cost × verifications; local-language support; compliance setup amortized.
What I'd flag: ranges, not point estimates. The big risk isn't the model — it's the regulatory timeline (license to operate).
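The top-down arithmetic above is a one-liner worth making explicit. A sketch with placeholder assumptions throughout: the market size, funnel rates, and ARPU below are illustrative stand-ins, not estimates.

```python
def year1_estimate(market_size: float, share: float, funnel: float, arpu: float):
    """Top-down sizing: market x share x funnel -> activated customers, revenue."""
    activated = market_size * share * funnel
    return activated, activated * arpu

# Placeholder assumptions; replace each with a cited figure before presenting.
market = 10_000_000            # crypto-curious adults (cite Chainalysis/Statista)
funnel = 0.55 * 0.8 * 0.6      # start->verified (discounted for a new market) x verified->activated
for share in (0.005, 0.02):    # the 0.5-2% year-1 share range from the estimate
    activated, revenue = year1_estimate(market, share, funnel, arpu=90)
    print(f"share {share:.1%}: ~{activated:,.0f} activated, ~${revenue:,.0f}")
```

Presenting the loop's low and high cases together is the point: leadership gets a range tied to one named assumption (share), not a false-precision number.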
P6 · Spec a feature-freeze policy
Prompt. The platform has had three regressions in two months from late-week deploys. Spec a feature-freeze policy that improves stability without making the team feel jailed.
P7 · Drop-off SQL forensic
Prompt. Activation in Germany dropped 8pp w/w. Write the SQL you'd run to find what changed.
Worked solution
First, the funnel by week. Then slice by: vendor (did we route to a different one?), document type, app version, time-of-day (is it traffic mix?), referral source. Most "activation drops" trace to a single explanatory dimension.
SELECT
  DATE_TRUNC('week', a.created_at) AS cohort_week,
  COUNT(DISTINCT a.applicant_id) AS started,
  COUNT(DISTINCT CASE WHEN e.event_type = 'id_submitted'  THEN a.applicant_id END) AS submitted,
  COUNT(DISTINCT CASE WHEN e.event_type = 'verified'      THEN a.applicant_id END) AS verified,
  COUNT(DISTINCT CASE WHEN e.event_type = 'approved'      THEN a.applicant_id END) AS approved,
  COUNT(DISTINCT CASE WHEN e.event_type = 'first_deposit' THEN a.applicant_id END) AS activated
FROM onboarding.applicants a
LEFT JOIN onboarding.events e ON e.applicant_id = a.applicant_id
  -- single events join + COUNT(DISTINCT ...) so duplicate events can't fan out the counts
WHERE a.region = 'DE'
  AND a.created_at >= CURRENT_DATE - INTERVAL '60 days'
GROUP BY 1
ORDER BY 1;
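One way to mechanize the slicing advice: compute each slice's week-over-week activation delta and see which dimension actually moved. A minimal pure-Python sketch over hypothetical (week, vendor, activated) rows; in practice this is the same funnel query grouped by one extra dimension.

```python
from collections import defaultdict

# Hypothetical per-applicant rows: (iso_week, vendor, activated). Illustrative data.
rows = [
    ("2026-W06", "vendor_a", True),  ("2026-W06", "vendor_a", True),
    ("2026-W06", "vendor_b", True),  ("2026-W06", "vendor_b", True),
    ("2026-W07", "vendor_a", True),  ("2026-W07", "vendor_a", True),
    ("2026-W07", "vendor_b", False), ("2026-W07", "vendor_b", False),
]

def rate_by(dim_rows):
    """Activation rate per (week, dimension-value) cell."""
    agg = defaultdict(lambda: [0, 0])           # (week, dim) -> [activated, total]
    for week, dim, activated in dim_rows:
        agg[(week, dim)][0] += activated
        agg[(week, dim)][1] += 1
    return {k: a / n for k, (a, n) in agg.items()}

rates = rate_by(rows)
delta = {d: rates[("2026-W07", d)] - rates[("2026-W06", d)]
         for d in ("vendor_a", "vendor_b")}
```

In this toy data the drop is entirely vendor_b's, which is the shape you're hoping to find: one slice explains the aggregate move.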
P8 · Tier-upgrade migration plan
Prompt. Compliance has decided that the tier-2 definition will now require source-of-funds for all customers. You have 1.2M existing tier-2 customers. Plan the migration.
P9 · Sanctions hit dispositioning queue
Prompt. Spec a queue and console for ops to disposition sanctions hits. What's the data model, what's the UX, what's the SLA?
Worked solution
Data model: screening_hit(hit_id, applicant_id, list, list_entity, match_score, fields_matched, detected_at, disposition, dispositioned_by, dispositioned_at, rationale, attachments).
UX must surface: side-by-side of applicant vs list entry (name, DOB, nationality); match score and which fields matched; prior dispositions for the same entity; one-click outcomes (true match → escalate, no match → close with rationale).
SLA: high-confidence hits (score > 0.9) reviewed within 4 hours; lower within 24. Customer account state while pending: soft-blocked from new transactions, communicated via banner.
Audit: every disposition logged with reviewer identity and rationale. Re-screening hits on previously cleared customers surface the prior disposition.
Quality controls: sample audit of cleared hits (10% review by a second reviewer), feedback loop into the match-score calibration.
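The SLA rule falls straight out of the data model. A sketch, with ScreeningHit trimmed to the fields the rule needs; the 4h/24h thresholds come from the solution, the code itself is illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ScreeningHit:
    """Subset of screening_hit relevant to queue routing."""
    hit_id: str
    applicant_id: str
    match_score: float
    detected_at: datetime
    disposition: Optional[str] = None   # 'true_match' | 'no_match' | None while pending

def review_deadline(hit: ScreeningHit) -> datetime:
    """High-confidence hits (score > 0.9) within 4 hours, the rest within 24."""
    hours = 4 if hit.match_score > 0.9 else 24
    return hit.detected_at + timedelta(hours=hours)
```

Deriving the deadline from match_score (rather than storing it) means recalibrating the score model automatically re-tiers the queue, which is the feedback loop the quality controls describe.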
P10 · The 90-day plan
Prompt. You start Monday. What's your 90-day plan?
Worked solution
Days 1-30 — Learn. 1:1s with: VP-Eng, head of Compliance, head of Ops, three consumer-team PMs (Pro, Futures, Institutional), one operations lead, one customer-support lead. Shadow two manual reviews. Read the AML risk assessment. Read recent postmortems. Map the platform's current capabilities and limits. Result: a written current-state document, shared.
Days 30-60 — Diagnose. Synthesize the listening into 3-5 named gaps. Get Compliance and Eng to agree on which two are highest-leverage. Draft the platform thesis. Pre-wire with VP-Product and VP-Eng. Result: a strategy doc circulated for review.
Days 60-90 — Commit. Strategy doc approved. First 90-day wave scoped: one named primitive, one named consumer-team partner, one named launch unlock. Eng plan in place. Compliance counterpart named. Result: the team is shipping, with a roadmap people can reference.
What I'd avoid: announcing a vision in week 2; promising specific dates before knowing the team's capacity; making enemies with Compliance by skipping their review.