Core Fundamentals — Platform PM Craft
The set of beliefs and habits that separate Platform PMs from feature PMs. Internal customers, self-serve as a mindset, API-as-product, versioning discipline, adoption metrics. The fundamentals interviewers will probe to test whether you've actually done this work.
Feature PM vs Platform PM — the daily differences
Same title, different operating model. The differences are subtle in the abstract and brutal in practice.
| Dimension | Feature PM | Platform PM |
|---|---|---|
| Customer | End user | Internal team / next PM |
| Deliverable | UI flow, feature toggle | API, event schema, SDK, config primitive |
| Success metric | Engagement, conversion, retention | Adoption, time-to-launch, platform NPS, vendor cost |
| Roadmap horizon | Quarter | 2-4 quarters, sometimes year+ |
| Deprecation cost | Sunset banner | Migration project across N teams |
| Failure mode | Low engagement | Adoption stalls; teams fork |
| Doc deliverable | Launch announcement | Reference docs, migration guide, sample code |
| Stakeholders | Design, eng, marketing | Eng, multiple PM peers, compliance, ops, finance |
The internal-customer mindset
Your customer is another PM. Internalize the implications:
- They have their own roadmap with their own pressures. Your platform is one line of their plan, not the whole plan.
- They will fork or build their own if you're too slow, too rigid, or too opinionated in the wrong direction.
- They benchmark you against the bespoke alternative — which is fast at first, painful at year two.
- They read your roadmap. If you announce something they're waiting on and slip, they remember.
- They will route around you if you make them feel disrespected. Especially in compliance/regulated contexts where their job is on the line.
"My platform succeeds when the team next to me ships something I didn't write and didn't approve. If they need my permission, it's not a platform — it's a service."
Internal-customer interviews — yes, you do these
One of the unintuitive moves: you run user research on PMs. The cadence and shape:
- Discovery interviews when scoping a new primitive. 30 minutes with the PM, 30 with their tech lead, 30 with an operations lead if relevant. Open-ended; you're listening for the work-around they've built.
- Pain-point ladders — "what's the single hardest thing about launching a new market today?" Drill three levels.
- Beta partners — when you ship something new, recruit 2-3 friendly internal customers explicitly. They get white-glove support; you get tight feedback.
- Quarterly health pulse — 5-question survey to every platform consumer PM and tech lead. Time-to-launch, platform NPS, "what would you build that you can't today?"
If you can't tell an interviewer the last three internal-customer conversations you've had and what you changed, you don't sound credible as a Platform PM.
The self-serve mindset
Self-serve is not a feature you ship; it's an operating philosophy. The test: can a downstream team launch a new onboarding flow in your platform without filing a ticket?
Levels of self-serve maturity, useful to grade your platform on:
| Level | What it means | Example |
|---|---|---|
| L0 — Hand-built | Every new flow requires platform-team eng work | "File a ticket; we'll get to it in Q3" |
| L1 — Configurable | Common shapes change via config; uncommon ones still require code | YAML config in platform repo, PR-reviewed |
| L2 — Self-serve config | Downstream teams own their config without platform review (except policy gates) | Each team has a config directory; platform CI validates shape |
| L3 — UI / no-code | Non-engineers can configure via UI | Compliance lead launches a new market without an engineer in the loop |
Don't aim for L3 on day one. Aim to know which capabilities live at which level, and have a credible path to move the highest-leverage ones up.
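To make L2 concrete, here is a minimal sketch of what "platform CI validates shape" could look like. The config fields (`market`, `kyc_tier`, `vendors`) and tier names are illustrative assumptions, not a real schema:

```python
# Minimal sketch of an L2 self-serve gate: platform CI validates a
# team-owned config's shape without a platform-team review.
# Field names and allowed values are hypothetical.

ALLOWED_TIERS = {"basic", "enhanced"}

def validate_flow_config(config: dict) -> list:
    """Return a list of validation errors; an empty list means the config passes CI."""
    errors = []
    if not config.get("market"):
        errors.append("market is required")
    if config.get("kyc_tier") not in ALLOWED_TIERS:
        errors.append("kyc_tier must be one of %s" % sorted(ALLOWED_TIERS))
    if not isinstance(config.get("vendors"), list) or not config["vendors"]:
        errors.append("vendors must be a non-empty list")
    return errors

# A downstream team's config, checked automatically on every PR
team_config = {"market": "DE", "kyc_tier": "enhanced", "vendors": ["vendor_a"]}
assert validate_flow_config(team_config) == []
```

The point is the ownership boundary, not the validation logic: the downstream team edits the file, and the only platform-side involvement is an automated shape check plus any explicit policy gates.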
API-as-product — the platform's "UX"
For a platform, the API is the UI. The same things that matter in consumer UX — discoverability, predictability, error legibility, recovery — matter in API design. The set of rules a Platform PM should care about:
- Naming — verbs and nouns should match the team's mental model. `verifications.create`, not `v2/idv/create_new_verification_record`.
- Idempotency — every mutating call takes an idempotency key. Retries don't double-charge or double-create.
- Error taxonomy — every error has a stable code, a human-readable message, and a documented next-step. Errors are part of the product, not a leak of the implementation.
- Pagination — cursor-based, stable across mutations.
- Webhooks / events — every state change emits an event with a stable schema and a retry contract.
- Rate limits — published, predictable, per-key not per-IP.
- Sandbox — a test mode that exercises the same shapes without making real vendor calls.
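The pagination rule above deserves a sketch, because "stable across mutations" is the part teams get wrong. A cursor that encodes the last-seen key (rather than an offset) means inserts and deletes can't shift the page boundary. The data model here is an invented illustration:

```python
# Sketch: cursor-based pagination that stays stable across mutations.
# The cursor encodes the last-seen key, not an offset, so concurrent
# inserts/deletes don't shift page boundaries. Records are illustrative.
import base64
import json

RECORDS = {"ver_%03d" % i: {"status": "approved"} for i in range(1, 8)}

def list_verifications(cursor=None, limit=3):
    keys = sorted(RECORDS)
    if cursor:
        last_seen = json.loads(base64.b64decode(cursor))["after"]
        keys = [k for k in keys if k > last_seen]
    page = keys[:limit]
    next_cursor = None
    if len(keys) > limit:  # more results remain past this page
        next_cursor = base64.b64encode(
            json.dumps({"after": page[-1]}).encode()).decode()
    return page, next_cursor

page1, cursor = list_verifications()
page2, _ = list_verifications(cursor=cursor)
assert page1 == ["ver_001", "ver_002", "ver_003"]
assert page2 == ["ver_004", "ver_005", "ver_006"]
```

Encoding the cursor (base64 here) also keeps it opaque, so consumers can't build offset arithmetic on top of it and break when you change storage.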
Example: a clean onboarding-platform error shape:

```json
{
  "error": {
    "code": "verification.tier_not_eligible",
    "message": "Customer is not eligible to upgrade to KYC tier 2 from current tier.",
    "details": {
      "current_tier": "basic",
      "requested_tier": "enhanced",
      "missing_requirements": ["address_proof", "source_of_funds"]
    },
    "documentation_url": "https://docs.internal/onboarding/errors#tier_not_eligible",
    "retry_strategy": "do_not_retry",
    "support_contact": "onboarding-platform-oncall"
  }
}
```
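A well-shaped error pays off on the consumer side: downstream code can dispatch on `retry_strategy` and `code` instead of string-matching messages. This is a sketch of such a consumer, using the field names from the example payload above; the handling policy itself is illustrative:

```python
# Sketch: a consumer dispatching on the error shape above.
# Field names match the example payload; the policy is illustrative.

def handle_error(error: dict) -> str:
    code = error["code"]
    strategy = error.get("retry_strategy", "do_not_retry")
    if strategy == "do_not_retry":
        # Terminal error: surface the documented next step to the caller
        # instead of retrying into the same wall.
        missing = error.get("details", {}).get("missing_requirements", [])
        return "blocked:%s:missing=%s" % (code, ",".join(missing))
    # Retryable: re-issue with the SAME idempotency key so the retry
    # can't double-create the verification.
    return "retry:%s" % code

err = {
    "code": "verification.tier_not_eligible",
    "retry_strategy": "do_not_retry",
    "details": {"missing_requirements": ["address_proof", "source_of_funds"]},
}
assert handle_error(err).startswith("blocked:verification.tier_not_eligible")
```

Notice that nothing here parses the human-readable `message` — that string can be reworded freely because the stable contract lives in `code` and `retry_strategy`.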
Documentation as a deliverable
Platform PMs ship docs the way feature PMs ship UI copy. The bar:
- Getting Started — a runnable 5-minute path that produces a successful verification in sandbox.
- Concepts — what is a verification, an applicant, an event, a tier. Mental model first.
- How-to guides — task-shaped: "launch a new market," "add a new IDV vendor," "configure a tier upgrade."
- API reference — autogenerated from schemas; every endpoint has at least one example request + response.
- Operational playbooks — what happens when a vendor goes down, how to fail over, who's on-call.
- Compliance crosswalks — for each platform capability, the regulatory citation it implements.
If your platform's docs read like a wiki, you have not yet shipped a platform. You've shipped a service with a long thread.
Backward compatibility & versioning
The single discipline most often missing from feature-PMs-going-platform. The rules:
- Additive changes are free — add a new optional field, new endpoint, new event type. Always safe.
- Removing or renaming is a deprecation, not a change. Announce, mark deprecated, instrument usage, migrate consumers, then remove. Six months minimum, usually a year.
- Semantic changes (the field exists but now means something different) are the worst. Don't. Add a new field.
- Versioning — pick one strategy (URL versioning like `/v2`, or date-based headers) and hold it. Don't proliferate v3-v4-v5 by accident.
- Deprecation policy is published and the same for everyone. No private deprecations.
In regulated onboarding, even an "additive" field can break compliance — if your downstream team starts populating source_of_funds and you stop validating it, you've just changed the implied control. Treat schema changes as policy changes when they touch regulated fields.
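The "instrument usage, migrate consumers, then remove" step can be sketched as a per-team usage counter, so removal is a data-driven decision rather than a hopeful one. Field and team names here are invented for illustration:

```python
# Sketch: instrument deprecated-field usage per consumer team, so
# "migrate consumers, then remove" is backed by data.
# Field and team names are hypothetical.
from collections import Counter

DEPRECATED_FIELDS = {"applicant_name"}  # e.g. replaced by first_name/last_name

usage = Counter()

def record_request(team: str, payload: dict) -> None:
    # Tally each deprecated field a team still sends.
    for field in DEPRECATED_FIELDS & payload.keys():
        usage[(team, field)] += 1

record_request("payments-pm", {"applicant_name": "A. N. Example"})
record_request("growth-pm", {"first_name": "A.", "last_name": "Example"})

# Removal is safe only when every (team, field) counter stays at zero
# for the full deprecation window.
assert usage[("payments-pm", "applicant_name")] == 1
assert usage[("growth-pm", "applicant_name")] == 0
```

The same counters give you the migration-nag list for free: any team with a nonzero count gets the reminder, everyone else is left alone.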
Platform vs ecosystem
Useful distinction interviewers occasionally probe:
- Platform — internal-only consumers. Compliance and contract are easier because everyone reports up to the same CEO.
- Ecosystem / public platform — external developers. The same primitives need to survive contractual scrutiny, abuse-prevention, and you no longer control your customer.
This role is platform-not-ecosystem, but the rigor of an ecosystem mindset bleeds upward — if your APIs are clean enough for external developers, they're definitely clean enough for the team next door.
Adoption metrics that matter
Platform success is adoption, not engagement. Specific metrics to know cold:
| Metric | What it tells you |
|---|---|
| Number of teams onboarded | Breadth of adoption |
| % of new flows on the platform (vs forked) | Are net-new flows defaulting to your platform? |
| Time-to-launch a new market or flow | The platform's "speed of light" for downstream teams |
| Platform NPS (internal) | Goodwill — leading indicator of churn |
| API call volume / event volume | Health of integration, not always success |
| Cost per verification | Unit economics; vendor mix health |
| Number of open migration projects from legacy | How much technical debt you're paying down |
| Number of support tickets per consumer team | Friction; falling = good |
Pair every metric with a target and a denominator. "Time-to-launch fell by 50%" is meaningless without "from 12 weeks to 6 weeks across 8 launches."
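The denominator rule is worth making mechanical. A tiny sketch, with invented launch data matching the example above:

```python
# Sketch: report a metric with its baseline, current value, and
# denominator, per the rule above. Numbers are invented.

launch_weeks = [12, 11, 10, 9, 8, 7, 6, 6]  # time-to-launch per launch, oldest first

baseline = launch_weeks[0]
current = launch_weeks[-1]
improvement = (baseline - current) / baseline

report = ("Time-to-launch fell %.0f%%: from %d to %d weeks across %d launches"
          % (improvement * 100, baseline, current, len(launch_weeks)))
assert report == "Time-to-launch fell 50%: from 12 to 6 weeks across 8 launches"
```

If a metric can't be stated in this shape (before, after, over how many instances), it isn't ready for a review deck.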
Common anti-patterns to call out
- The "build it and they will come" platform — clean APIs, zero consumers. Adoption is sales. Schedule the customer interviews before the architecture.
- The over-fitted platform — built for the first customer, can't accept the second. Generalize on the second use case, not the first.
- The bespoke-per-customer platform — every consumer team got special-cased. Now it's an unmaintainable kit of forks.
- The roadmap-by-loudest-voice — the team that complains gets prioritized. Use a public, criteria-based intake process.
- The hidden tier system — some teams get white-glove, others get docs, but you've never published the rules. Be explicit.
- The Apollo-program release — a 12-month rewrite with no incremental value. Carve into 90-day deliverables that each unlock a real customer.
Naming an anti-pattern from your own past, with what you'd do differently, is a strong interview move.