Drill — Answers Hidden by Default

Practice Interview Questions

30 questions across 8 sections, calibrated to senior Platform-PM loops at regulated fintech / crypto exchanges. Read, set a 90-second timer, answer aloud, then reveal to compare.


Drill protocol

For each question: read it, take 5 seconds to structure, deliver the answer aloud in 60-90 seconds, then reveal. The shape of a strong answer matters more than the exact words.

Section A · Platform strategy

Q1. What does "Platform PM" mean to you?

Show strong answer

A Platform PM serves internal teams as customers — other PMs, eng leads, compliance, ops. The deliverable is APIs, events, configuration primitives, and docs. Success is adoption and consumer-team velocity, not end-user engagement. The discipline is backward compatibility, an explicit metric registry, and treating every adoption as a sale — internal customers can fork you if you're slow.

Q2. Pitch the platform thesis for the company's onboarding rearchitecture.

Show strong answer

Today, onboarding is a set of forked flows; every new product (Futures, Margin, Pro) and every new market is built from scratch. Consolidate to three primitives — vendor abstraction, jurisdictional rule engine, configurable flow renderer. Sequence by smallest blast radius first. Drops time-to-launch from quarters to weeks, gives vendor-renewal leverage, makes audit responses a query. Migration in 90-day waves with named consumer-team unlocks.

Q3. How do you decide what belongs in the platform vs in a consumer team?

Show strong answer

Two-customer rule: generalize only when at least two consumer teams want similar capability. First team builds in their own space; second team's ask triggers extraction. Cross-cutting concerns (compliance, audit, identity-resolution) default to platform regardless of demand count, because they affect regulator-readiness.

Q4. How would you write a strategy doc that gets signed off?

Show strong answer

Pre-wire 1:1 with each senior reviewer before the doc-review meeting. Write non-goals before goals. Make migration as concrete as the target state. Include a "what if we do nothing" section. End with explicit asks — what headcount, what calendar time, what decisions you need from whom by when.

Section B · KYC / Onboarding domain

Q5. Walk me through the difference between KYC, KYB, and AML.

Show strong answer

KYC: identity verification for an individual customer. KYB: same for a business + its beneficial owners. AML: the umbrella program that uses KYC/KYB as inputs and adds transaction monitoring, sanctions screening, and suspicious-activity reporting. KYC is identity; AML is risk.

Q6. Design a tiered KYC system that supports Consumer, Pro, and Institutional segments.

Show strong answer

Tiers as ordered levels — basic / verified / enhanced — each unlocking specific capabilities. Each applicant has (tier, policy_version, last_verified_at). Tier upgrade is a step-up flow, not full redo. Grandfathering is first-class — existing customers don't auto-migrate on policy change. Different segments (Consumer, KYB, Institutional) use shared primitives (screen, verify, decide) composed into segment-specific flows.
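The tier ladder above can be sketched in code. A minimal sketch: tier names, the capability map, and the step-up ladder below are illustrative assumptions, not the company's actual policy.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import IntEnum

class Tier(IntEnum):
    BASIC = 1
    VERIFIED = 2
    ENHANCED = 3

@dataclass
class ApplicantKyc:
    tier: Tier
    policy_version: str       # grandfathering: kept until the next step-up
    last_verified_at: datetime

# Capabilities unlocked per tier (illustrative, not real policy).
CAPABILITIES = {
    Tier.BASIC: {"deposit"},
    Tier.VERIFIED: {"deposit", "trade", "withdraw"},
    Tier.ENHANCED: {"deposit", "trade", "withdraw", "otc"},
}

# Hypothetical verification steps each tier adds on top of the one below.
LADDER = {
    Tier.BASIC: ["collect_email"],
    Tier.VERIFIED: ["verify_id", "screen"],
    Tier.ENHANCED: ["proof_of_funds", "enhanced_due_diligence"],
}

def required_steps(current: Tier, target: Tier) -> list[str]:
    """Step-up flow: only the delta between tiers, never a full redo."""
    return [s for t in Tier if current < t <= target for s in LADDER[t]]
```

Note the step-up is computed as a delta: upgrading from basic to enhanced reuses everything already verified and collects only the missing steps.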

Q7. How would you handle a vendor outage during peak onboarding?

Show strong answer

Vendor abstraction layer with a waterfall — auto-failover to a secondary vendor on threshold breach. Health monitoring drives automated routing changes; on-call gets alerted in parallel. Tail of unsupported scenarios queues for retry post-recovery, customer notified. Postmortem includes vendor SLA enforcement.
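A minimal sketch of that waterfall, assuming a hypothetical `Vendor` interface and an in-memory retry queue; the health threshold and routing logic are illustrative, not a real implementation.

```python
class VendorError(Exception):
    pass

class Vendor:
    """Stub vendor with a rolling error rate, standing in for a real adapter."""
    def __init__(self, name: str, error_rate: float, healthy: bool = True):
        self.name, self._error_rate, self._healthy = name, error_rate, healthy

    def rolling_error_rate(self) -> float:
        return self._error_rate

    def create_session(self, applicant_id: str) -> dict:
        if not self._healthy:
            raise VendorError(self.name)
        return {"vendor": self.name, "applicant": applicant_id}

RETRY_QUEUE: list[str] = []

def queue_for_retry(applicant_id: str) -> None:
    # Tail case: persist for retry post-recovery; customer gets notified.
    RETRY_QUEUE.append(applicant_id)

def verify_with_waterfall(applicant_id: str, vendors: list[Vendor],
                          max_error_rate: float = 0.2):
    """Try vendors in priority order; skip any whose rolling error rate
    breached the health threshold; fail over on error; queue the tail."""
    for vendor in vendors:
        if vendor.rolling_error_rate() > max_error_rate:
            continue  # health monitor has marked this vendor degraded
        try:
            return vendor.create_session(applicant_id)
        except VendorError:
            continue  # fail over to the next vendor in the waterfall
    queue_for_retry(applicant_id)
    return None
```

The key design choice is that health monitoring drives the routing automatically; the on-call alert happens in parallel, not in the request path.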

Q8. What's the Travel Rule and what does it imply for the platform?

Show strong answer

FATF Recommendation 16 — when a customer sends crypto above a threshold to another regulated VASP, the originating VASP must transmit originator and beneficiary information. Implementation requires a Travel Rule integration (TRP, OpenVASP, Notabene) at the withdrawal perimeter. For self-hosted destinations, wallet attestation and on-chain risk scoring substitute for the counterparty data exchange.

Section C · Metrics

Q9. What's your north-star metric for the onboarding platform?

Show strong answer

Primary: median time-to-launch a new onboarding configuration. Paired: verified-customer activation rate, cost per successful verification. Guardrails: sanctions hit rate must not decline, customer-complaint volume stable, audit-log completeness 100%.

Q10. Activation dropped 8pp last week in Germany. Walk me through your investigation.

Show strong answer

First, confirm the drop is real, not a young-cohort artifact. Decompose the funnel — which stage dropped? Slice by vendor (did routing change?), document type, app version, time of day, source. Most "activation drops" trace to one explanatory dimension. Cross-reference with the change calendar — did we ship anything? Did a vendor push a model update?

Q11. How would you decide whether a new IDV vendor is "better"?

Show strong answer

Shadow-mode comparison first (call both, compare without changing UX). Then A/B at meaningful split. Primary: approval rate. Guardrails: fraud rate 90d post (proxy for false-accept), decision latency p95, customer complaints, manual-review carry-over. Stratify by document type — vendors differ wildly. Stop on guardrail breach.

Q12. How do you measure platform adoption?

Show strong answer

Count teams onboarded + % of net-new flows landing on-platform vs forked. Pair with platform NPS from consumer-team PMs and tech leads. Time-to-launch trend. Support-ticket volume per team. Open-migration count from legacy. Adoption is breadth × depth × goodwill, not API-call volume.

Section D · Design / PRD

Q13. Spec a vendor-abstraction layer in 90 seconds.

Show strong answer

Interface with create_session, get_decision, fetch_evidence, cancel. Normalized Decision schema — outcome, calibrated confidence, decision_id, evidence URLs. Adapter per vendor. Raw signals preserved in payload for audit. Capability flags for vendor-unique features. Shadow-mode rollout, then canary, then ramp.
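That interface and normalized schema can be sketched as below; `AcmeAdapter`, its response shape, and the outcome mapping are hypothetical examples, not any real vendor's API.

```python
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class Decision:
    outcome: str                 # "approve" | "deny" | "review"
    confidence: float            # calibrated 0-1
    decision_id: str
    evidence_urls: list[str] = field(default_factory=list)
    raw: dict = field(default_factory=dict)  # raw vendor signals preserved for audit

class IdvVendor(Protocol):
    capabilities: frozenset[str]  # capability flags for vendor-unique features
    def create_session(self, applicant_id: str) -> str: ...
    def get_decision(self, session_id: str) -> Decision: ...
    def fetch_evidence(self, session_id: str) -> list[str]: ...
    def cancel(self, session_id: str) -> None: ...

class AcmeAdapter:
    """One adapter per vendor: translates the vendor's schema into Decision."""
    capabilities = frozenset({"liveness"})

    def create_session(self, applicant_id: str) -> str:
        return f"acme-{applicant_id}"

    def get_decision(self, session_id: str) -> Decision:
        raw = {"status": "PASS", "score": 87}  # stubbed vendor response
        return Decision(
            outcome={"PASS": "approve", "FAIL": "deny"}.get(raw["status"], "review"),
            confidence=raw["score"] / 100,
            decision_id=session_id,
            raw=raw,  # keep the untranslated payload for audit
        )

    def fetch_evidence(self, session_id: str) -> list[str]:
        return []

    def cancel(self, session_id: str) -> None:
        pass
```

Consumers only ever see `Decision`; swapping or shadow-testing a vendor means writing one more adapter, not touching callers.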

Q14. Design a self-serve flow builder for downstream teams.

Show strong answer

Define stable primitives (verify_id, screen, collect_consent, set_tier). Flows are compositions of primitives with conditional branches. Config lives in consumer-team repos; CI validates that the composition is legal under jurisdictional policy. Compliance review happens on the flow definition, and is fast because the surface is small. Don't ship the builder until the primitives are stable.
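The CI validation step can be sketched like this; the policy rules per jurisdiction are made-up examples to show the mechanic, not real regulatory requirements.

```python
PRIMITIVES = {"verify_id", "screen", "collect_consent", "set_tier"}

# Hypothetical jurisdictional policy: steps that must run before set_tier.
POLICY = {
    "DE": {"required_before_set_tier": ["verify_id", "screen", "collect_consent"]},
    "US": {"required_before_set_tier": ["verify_id", "screen"]},
}

def validate_flow(jurisdiction: str, steps: list[str]) -> list[str]:
    """CI check: is this composition legal under policy? Returns violations."""
    errors = [f"unknown primitive: {s}" for s in steps if s not in PRIMITIVES]
    if "set_tier" in steps:
        cutoff = steps.index("set_tier")
        for req in POLICY[jurisdiction]["required_before_set_tier"]:
            if req not in steps[:cutoff]:
                errors.append(f"{req} must run before set_tier in {jurisdiction}")
    return errors
```

Because the check runs in the consumer team's CI, an illegal composition never reaches Compliance review — they only see flow definitions that already pass policy.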

Q15. Spec the platform's audit-event firehose.

Show strong answer

Every state transition emits an event with stable envelope (event_id, type, schema_version, occurred_at, actor, correlation_id, causation_id) and typed payload. Append-only, encrypted, retention aligned to strictest applicable AML window. Consumers: compliance reporting, fraud, support, growth, ops. Causation chains let you replay an applicant journey.

Q16. Design the operations console for sanctions-hit dispositioning.

Show strong answer

Side-by-side applicant vs list-entry. Match-score with which fields matched. Prior dispositions for same entity. One-click outcomes with rationale capture. SLA per match-confidence band. Sample audit by second reviewer. Feedback loop into match-score calibration. Every action logged.

Section E · Stakeholder management

Q17. How do you partner with Compliance?

Show strong answer

Treat Compliance as a customer of the platform, not a constraint. Named counterpart per initiative. Standing weekly meeting kept even when nothing's burning. Compliance gets self-serve tooling (policy editor, audit search) so they don't need to file tickets. Joint OKRs where reasonable. Result: faster sign-off because they trust the rails.

Q18. A consumer team wants a bespoke flow. How do you say no without burning the relationship?

Show strong answer

Diagnose first — is it a real new primitive (build), a configuration they could do (point them at the config), or a one-off (decline with rationale)? If decline: cite the platform principle, propose an alternative path on-platform, offer to embed an eng for the launch if the alternative is harder. Document the decision publicly so the next requester sees the criteria.

Q19. How would you handle a Compliance veto three weeks before launch?

Show strong answer

First, listen. The veto is information. Diagnose: is it scope-shift, policy-misunderstanding, or a real gap? Carve-out launch if possible — ship the safe geos, defer the problematic one behind a flag. Schedule the fix with Compliance signed off. Postmortem captures why the veto came late — earlier engagement is usually the answer.

Q20. The Growth team wants more activation; Compliance wants stricter screening. How do you mediate?

Show strong answer

Bring data, not vibes. Quantify the false-reject and false-accept impact of the proposed threshold change. Identify the segment / vendor / jurisdiction where the tension is sharpest — the answer is usually local, not global. Make the trade-off explicit and instrumented. Often the right answer is a more nuanced threshold per segment, not a single global value.

Section F · Analytical / SQL

Q21. Write the SQL for D7 activation by acquisition source.

Show strong answer

CTE for cohort (filter created_at < today - 7 days to exclude young cohorts), CTE for first activation (MIN occurred_at on first_deposit), outer query LEFT JOIN, group by source, compute activated_within_7d / cohort_n. Guard the division with NULLIF.
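A runnable version of that shape, using SQLite with hypothetical `users` / `events` tables and toy rows; the schema, column names, and the fixed `as_of` date are assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (user_id INT, source TEXT, created_at TEXT);
CREATE TABLE events (user_id INT, event_type TEXT, occurred_at TEXT);
INSERT INTO users VALUES (1,'paid','2024-01-01'), (2,'paid','2024-01-02'),
                         (3,'organic','2024-01-03'), (4,'organic','2024-06-01');
INSERT INTO events VALUES (1,'first_deposit','2024-01-05'),
                          (3,'first_deposit','2024-01-20');
""")

SQL = """
WITH cohort AS (
  SELECT user_id, source, created_at
  FROM users
  WHERE created_at < DATE(:as_of, '-7 days')   -- exclude young cohorts
),
first_activation AS (
  SELECT user_id, MIN(occurred_at) AS activated_at
  FROM events
  WHERE event_type = 'first_deposit'
  GROUP BY user_id
)
SELECT c.source,
       COUNT(*) AS cohort_n,
       1.0 * SUM(CASE WHEN f.activated_at <= DATE(c.created_at, '+7 days')
                      THEN 1 ELSE 0 END) / NULLIF(COUNT(*), 0) AS d7_activation
FROM cohort c
LEFT JOIN first_activation f USING (user_id)
GROUP BY c.source
ORDER BY c.source;
"""
for row in conn.execute(SQL, {"as_of": "2024-06-05"}):
    print(row)
```

User 4 (created 2024-06-01) is dropped as too young; user 3 deposited on day 17, so it counts in the cohort denominator but not as a D7 activation.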

Q22. Why must you exclude the most recent cohorts in a D7 funnel?

Show strong answer

Because they haven't had 7 days to convert. Including them makes recent cohorts look worse than they are — purely a measurement artifact. Always cohort by entry-week and require equal observation time across cohorts.

Q23. How would you investigate a sudden spike in manual-review queue depth?

Show strong answer

Decompose: by referral source (queue type), by vendor (did one start referring more?), by document type, by jurisdiction. Cross-reference with the change calendar — vendor model update? Policy change? Look at score distributions — has the threshold quietly shifted? Often a single dimension explains the spike.

Q24. Estimate the impact of launching Brazilian KYC.

Show strong answer

Top-down: market size from public sources × realistic share for a foreign exchange entering. Bottom-up: comparable LATAM market launch curves applied. Apply our funnel rates discounted 20-30% the first quarter for new-market conversion drag. Revenue: activated × ARPU × retention. Costs: per-verification cost + local-language support + amortized compliance setup. Show ranges, not point estimates; flag regulatory timeline as the dominant risk.
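The arithmetic behind that answer can be sketched as a range estimate. Every input below is a placeholder assumption to show the mechanics — none are real Brazil-market figures.

```python
def launch_estimate(market_size: int, share_range: tuple[float, float],
                    funnel_rate: float, new_market_drag: float,
                    arpu: float, retention: float,
                    cost_per_verification: float) -> dict:
    """Top-down range estimate: activated customers and net contribution."""
    results = {}
    for label, share in (("low", share_range[0]), ("high", share_range[1])):
        applicants = market_size * share
        # Funnel rate discounted for new-market conversion drag.
        activated = applicants * funnel_rate * (1 - new_market_drag)
        revenue = activated * arpu * retention
        cost = applicants * cost_per_verification
        results[label] = {"activated": round(activated),
                          "net": round(revenue - cost)}
    return results

# All inputs hypothetical, chosen only to exercise the model.
est = launch_estimate(
    market_size=10_000_000, share_range=(0.005, 0.02),
    funnel_rate=0.6, new_market_drag=0.25,
    arpu=120, retention=0.7, cost_per_verification=2.0,
)
```

Presenting the low/high pair instead of a point estimate is the substance of the answer; the regulatory-timeline risk sits outside the model entirely.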

Section G · Behavioral

Q25. Walk me through your background.

Show strong answer

Shape: edge (the unfair advantage you bring), one project (most recent, most relevant, what changed measurably), bridge (point at a specific JD line), honest gap (what you'd ramp on, with a concrete 90-day plan).

Q26. Tell me about a time you shipped a platform that didn't get adopted.

Show strong answer

STAR. Situation: the platform, the launch. Task: drive adoption. Action: what you tried; the diagnostic that revealed adoption is a sale not a mandate. Result: the change in approach (anchor partners, migration subsidy, declining to maintain forks). Lesson named — adoption planning is part of the platform, not after.

Q27. Tell me about a disagreement with engineering.

Show strong answer

Pick a substantive disagreement, not a personality clash. Describe the trade-off cleanly, the data that informed each side, how you escalated or didn't, what was decided, and whether it was right in retrospect. Showing you can change your mind is a stronger signal than winning.

Q28. Why crypto, why this company?

Show strong answer

Honest version: name something specific you find compelling (self-custody, settlement finality, the permissionless market) — not a manifesto. Name something specific about the company — the platform problem in the JD, the company's long track record, the regulatory posture. Avoid fake conviction; show curiosity and engagement with the product.

Section H · Day-in-life

Q29. What would your first 90 days look like?

Show strong answer

Days 1-30: Learn. 1:1s with Eng, Compliance, Ops, consumer PMs. Shadow ops reviews. Read AML risk assessment and postmortems. Map current capabilities. Output: current-state doc. Days 30-60: Diagnose. 3-5 gaps named. Strategy thesis drafted. Pre-wired with VP-Product and VP-Eng. Days 60-90: Commit. Strategy approved. First 90-day wave scoped with named partner and named unlock. Roadmap published.

Q30. What does a typical week look like for you in this seat?

Show strong answer

Weekly: 1:1 with eng lead, rotating consumer-PM check-in, platform team review. Biweekly: Compliance partnership review, stakeholder roadmap update. Monthly: IDX scorecard walkthrough, internal-customer roadmap review. Quarterly: strategy review with VP-Product and VP-Eng, OKR setting. Embedded in: writing strategy docs and platform briefs, defining metrics, refereeing source-of-truth questions, running postmortems.