Section A · Orient

The Role, Decoded

What "Smart Contract Security Engineer" actually means in practice, what each deliverable is for, and the constraints that shape every design answer in the loop.

The archetype, decoded

This role is the in-house owner of a DeFi protocol's security posture. Titles drift — Protocol Security Engineer, Smart Contract Security Engineer, Security Researcher, Formal Verification Engineer — but the work concentrates on the same set of skills:

  • Writing formal verification specs (Certora CVL, Halmos symbolic tests, Kontrol proofs) that prove protocol invariants hold on every input.
  • Doing internal security reviews on contracts before they go to external auditors — finding the cheap bugs in-house so external time buys you depth, not surface area.
  • Running the bug-bounty program end-to-end — triage, dedup, severity, comms with researchers, war rooms.
  • Building periphery contracts — wrappers, routers, hardened entry points — that extend the audited core safely.
  • Researching new attack vectors and tooling, writing them up, and presenting at conferences.
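
To give the first bullet some flavor: a Halmos symbolic test is just a Foundry-style test whose arguments the solver treats as symbolic. The sketch below is illustrative only — `IVault` and `check_depositNeverMintsZeroShares` are hypothetical names, not from any real codebase, and a real suite would wire up a deployment in `setUp()`.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Test} from "forge-std/Test.sol";

// Hypothetical stand-in for the protocol core under test.
interface IVault {
    function deposit(uint256 assets) external returns (uint256 shares);
}

contract VaultSymbolicTest is Test {
    IVault vault; // wired to a real deployment in setUp() in practice

    // Halmos runs `check_`-prefixed functions with symbolic arguments,
    // so a passing run covers every value of `assets`, not a fuzz sample.
    function check_depositNeverMintsZeroShares(uint256 assets) public {
        vm.assume(assets > 0);
        uint256 shares = vault.deposit(assets);
        assertGt(shares, 0, "deposit minted zero shares");
    }
}
```

Anything the solver cannot prove comes back as a concrete counterexample trace, which is exactly the artifact you hand to the protocol engineers.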

It is one of the highest-leverage technical roles in a DeFi org. A single missed bug can cost nine figures; a single proved invariant can shorten an audit by a week.

The full security lifecycle

You own every phase, not a slice. The phases (with rough deliverables) look like:

  • Design — deliverable: threat model, invariant list, scope freeze. The bar: did you enumerate adversary classes before code existed?
  • Implementation — deliverable: internal review, CVL specs, Foundry invariant tests. The bar: did the spec catch a bug before external eyes?
  • External audit — deliverable: audit prep pack, response triage, fix verification. The bar: did you shorten the audit because the auditors found less low-hanging fruit?
  • Pre-launch — deliverable: mainnet rehearsal, deployment review, multi-sig drill. The bar: was the deployed bytecode bit-for-bit what was audited?
  • Live — deliverable: monitoring, bounty triage, incident drills, comms. The bar: how fast do you go from alert to war room?
  • Incident — deliverable: war-room SOP, patch, comms, postmortem. The bar: is your postmortem a public document the community trusts?
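
To ground the Implementation phase, here is what a Foundry invariant test can look like: the fuzzer drives random call sequences against the contracts under test and re-checks the property after each one. The names (`IVault`, `totalAssets`, `totalDebt`) are hypothetical, a sketch rather than any specific protocol.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Test} from "forge-std/Test.sol";

// Hypothetical accounting surface of the core contract.
interface IVault {
    function totalAssets() external view returns (uint256);
    function totalDebt() external view returns (uint256);
}

contract NoBadDebtInvariant is Test {
    IVault vault; // deployed and registered as a fuzz target in setUp()

    // Foundry calls `invariant_`-prefixed functions after every
    // randomized call sequence; a failure arrives with the full trace.
    function invariant_noBadDebt() public view {
        assertGe(vault.totalAssets(), vault.totalDebt(), "bad debt");
    }
}
```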

If you've only worked one phase (e.g., "I'm an auditor, I write reports"), you should be honest about that — and you should be ready to reason about the others. See 02-positioning-from-scratch.

The shift from external auditor to in-house engineer

A lot of people land in this role from one of two starting shapes. The interview hits both shapes differently.

From auditor

You're great at finding bugs and writing reports. The new muscle is designing attack surface out rather than describing it on the way out. You'll have to learn the codebase the way a maintainer does — not the way a visitor does. You'll have to commit code (periphery contracts, CVL specs) that other engineers depend on. You'll have to make tradeoffs, not just flag them.

From protocol engineer

You know the codebase cold. The new muscle is adversarial framing at a level where you assume every external boundary is malicious by default, and where you can write CVL, Halmos, and Kontrol specs fluently — not just read them. You'll also have to learn the audit-firm vocabulary and the bounty-platform mechanics from the customer side, not the procurement side.

Interview panels are mixed. Expect at least one panelist who came from each shape; both will probe the side you didn't.

Greenfield setup vs inherited program

A meaningful split exists between roles framed as "build our security program from scratch" and roles framed as "step into a mature program."

  • Greenfield. No existing CVL specs, no bounty program, no SOPs. The team has been relying on external audits and a quiet inbox. Your first 90 days: pick the first 10 invariants, get one of them proven, launch a bounty program on Immunefi or Hats, write the war-room SOP, and run one tabletop drill.
  • Inherited. There's already a CVL repo, an external auditor on retainer, a published bounty. Your first 90 days: shadow the existing triage queue, get an invariant landed in the existing spec suite, review one PR adversarially per day, learn the team's incident playbook by stepping into someone else's pager.
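
What "get one invariant proven" might look like in practice: a small Certora CVL spec with a ghost variable that mirrors per-user shares. Everything below is a sketch against a hypothetical contract with a `shares` mapping and a `totalShares()` getter — not any real spec suite.

```cvl
methods {
    function totalShares() external returns (uint256) envfree;
}

// Ghost accumulator mirroring the sum of the `shares` mapping.
ghost mathint sumOfShares {
    init_state axiom sumOfShares == 0;
}

// Keep the ghost in sync on every storage write to `shares`.
hook Sstore shares[KEY address user] uint256 newVal (uint256 oldVal) {
    sumOfShares = sumOfShares + newVal - oldVal;
}

// The total-shares bookkeeping invariant, checked after every method.
invariant totalSharesIsSumOfUserShares()
    to_mathint(totalShares()) == sumOfShares;
```

Landing one invariant like this end-to-end — ghost, hook, prover run, CI wiring — is worth more in a first 90 days than ten invariants sketched on paper.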

The greenfield variant is more common at smaller, fast-moving protocol teams; the inherited variant is more common at larger DAOs and audit firms moving in-house staff to a specific protocol team. Listen to the JD framing — words like "set the standard, not inherit someone else's approach" telegraph greenfield.

The "shorten audit cycles" mandate

You will see this phrase, or something close to it, in nearly every senior in-house security JD. It means:

  • External audit time is expensive. A Spearbit / Cantina / Trail of Bits engagement of useful depth runs 4-12 weeks and tens to hundreds of thousands of dollars.
  • Auditors finding cheap bugs is wasted time. If they spend the first week on visibility modifiers, ABI decoding mistakes, or stale OpenZeppelin imports, you paid retail prices for senior bug-hunting time and got wholesale-grade findings back.
  • Your job is to push the floor up. When the auditors start, the bar should already be "no low/medium issues are findable by Slither, by mutation testing, by a careful read." That leaves their time for the deep stuff — economic attacks, novel vectors, invariant boundary cases.

Concretely: it means you ship a clean codebase, a clean spec, a clean threat model, and a clean test pack before the audit kickoff call. The faster you can get to "auditors find only critical/high novel issues," the more value the protocol gets per audit dollar.

The concrete deliverables

In the loop, you may be asked "what would you ship in your first 90 days?" — be specific. The deliverable set looks like:

  • CVL spec file(s) — proven invariants on the protocol's core (e.g., "no bad debt," "total shares = sum of user shares," "rate is bounded").
  • Internal security review notes — markdown files in the protocol monorepo for each PR/feature, with reasoning chain.
  • Audit prep pack — scope doc, NatSpec coverage, invariant list, threat model, prior-audit findings, known issues, deployment plan.
  • Bug-bounty SOPs — triage runbook, severity rubric, war-room playbook, communication templates.
  • Periphery contracts — small, well-tested wrappers (oracle wrappers, circuit breakers, multicall hardening). Sometimes deployed as the only thing users actually call.
  • Public research — blog posts, Devcon talks, public PoC repos. The protocol's reputation in the security community partly belongs to you.
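
As an example of the periphery shape, a hardened oracle wrapper is often the first such contract to ship. This is a sketch, not production code: `IAggregatorV3` mirrors the Chainlink feed interface, and the staleness threshold is a placeholder chosen per deployment.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Mirrors the Chainlink AggregatorV3 feed shape.
interface IAggregatorV3 {
    function latestRoundData()
        external view
        returns (uint80, int256 answer, uint256, uint256 updatedAt, uint80);
}

/// Periphery sketch: refuse stale or non-positive prices instead of
/// forwarding them into the audited core.
contract HardenedOracle {
    IAggregatorV3 public immutable feed;
    uint256 public immutable maxStaleness; // e.g. 1 hour, set per feed

    error StalePrice(uint256 updatedAt);
    error InvalidPrice(int256 answer);

    constructor(IAggregatorV3 _feed, uint256 _maxStaleness) {
        feed = _feed;
        maxStaleness = _maxStaleness;
    }

    function price() external view returns (uint256) {
        (, int256 answer,, uint256 updatedAt,) = feed.latestRoundData();
        if (answer <= 0) revert InvalidPrice(answer);
        if (block.timestamp - updatedAt > maxStaleness) revert StalePrice(updatedAt);
        return uint256(answer);
    }
}
```

Keeping checks like these in a small wrapper means the audited core never sees a bad price, and the wrapper itself is cheap to review and to re-audit on its own.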

Who you work with

This is rarely a solo role even when there's only one of you. Expect to coordinate with:

  • Protocol R&D engineers — the people writing the code you secure. Tight feedback loop, often daily.
  • External audit firms — Trail of Bits, OpenZeppelin, Spearbit, Cantina, ChainSecurity, Zellic, Sigma Prime, Halborn. You manage the engagement, not just attend it.
  • Formal verification partners — Certora is the dominant commercial vendor and most teams have weekly calls with them; Halmos and Kontrol are open-source and you maintain those yourself.
  • Bounty platforms — Immunefi, Hats Finance, Cantina; you set scope and triage tier-1.
  • Independent security researchers — bounty hunters, sometimes anonymous, who deserve fast and respectful triage.
  • Governance / DAO ops — for upgrades, timelock changes, emergency pauses.

Soft signals in the JD

  • "Greenfield security role" → you set norms; expect questions about how you'd structure a security program from zero.
  • "Deliberately minimal codebase" or "~N lines of code" → formal verification is taken seriously here; CVL depth questions are likely.
  • "Short feedback loop with research engineers" → review velocity matters more than perfect reports.
  • "Represent us at conferences" → public research is part of the job, not a stretch goal.
  • "End-to-end ownership of bug bounty" → expect triage scenarios in the loop.
  • "Strong written communication" → vulnerability reports and research articles are part of the bar. They will read your past writing.

What to ask them

Strong, role-fit questions to have ready:

  1. "What's the current state of the CVL spec suite — number of invariants proven, where the gaps are, what's blocked on summarization or solver state?"
  2. "How does the team currently triage bounty reports — initial response SLA, how is severity adjudicated, do you have a war-room SOP that's been exercised?"
  3. "When was the last external audit, who did it, what were the unfixed findings, and what's your bar for retiring a finding from 'open' to 'wontfix'?"
  4. "What's the relationship with Certora (or Halmos / Kontrol) like today — weekly call, on-call SMT solver wrangling, who owns the spec?"
  5. "Where does the periphery end and the core begin in your codebase, and who decides when new functionality lives in the audited core vs in a periphery contract?"
  6. "What's the team's posture on retroactive bounty payouts and disclosure timing? Do you publish public postmortems by default?"

These show you're thinking about the role as a program owner, not as a bug-finder for hire.