Practice Interview Questions
28 questions across 8 sections. Drill these out loud. Toggle drill mode to hide answers; track your progress.
How to drill
Click Drill mode to hide all answers. Speak the answer out loud (yes, awkward, do it anyway). Then expand to compare. Mark each one as you go.
The goal isn't memorization — it's fluency. If your spoken answer hits the major beats in the reveal, you're ready.
Two full passes spread over a week. First pass: read, expand, learn. Second pass: drill mode on; only expand if stuck.
FV / Certora
Q1. Explain CVL's rule vs invariant in one minute.
A rule is a property that holds after exactly one method call from some initial state. CVL checks: does the post-state satisfy the assertion? An invariant is a property that holds in every reachable state. CVL proves invariants inductively — true at construction, preserved by every public method. Practically: rules are for stuff like "after deposit, shares went up by X." Invariants are for "the protocol is never in a bad state, period."
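The distinction can be sketched as a toy Python analogy (not CVL; the `Vault` class and its methods are invented for illustration). A rule-style check asserts something about the post-state of one call; an invariant-style check must hold at construction and survive every method, which is what the prover establishes inductively.

```python
# Toy analogy of CVL rule vs invariant (hypothetical Vault, not real CVL).
class Vault:
    def __init__(self):
        self.total = 0
        self.shares = 0

    def deposit(self, amt):
        self.total += amt
        self.shares += amt

    def withdraw(self, amt):
        assert self.shares >= amt
        self.total -= amt
        self.shares -= amt

# Rule-style check: after one deposit() call, shares went up by the amount.
v = Vault()
v.deposit(5)
assert v.shares == 5

# Invariant-style check: total == shares at construction (base case) and
# after every method (inductive step). A prover checks this symbolically
# for all states and all methods; here it is just a sample trace.
v2 = Vault()
assert v2.total == v2.shares
for call in [lambda: v2.deposit(3), lambda: v2.withdraw(1)]:
    call()
    assert v2.total == v2.shares
```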
Q2. What's a parametric rule and why is it powerful?
A rule that takes method f as a parameter; CVL expands it across every public method of the contract. One rule covers the whole external surface. Classic use: "no method other than admin can change parameter X" — a single 10-line rule checks every entry point.
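A rough Python analogy of the expansion (the `Protocol` class and method names are hypothetical): one property function, mechanically applied to every public method, with a revert counting as "parameter trivially unchanged".

```python
# Toy analogy of a parametric rule: one property checked against every
# public method. Class and method names are invented for illustration.
class Protocol:
    def __init__(self):
        self.admin = "alice"
        self.fee = 100

    def set_fee(self, caller, new_fee):
        assert caller == self.admin   # admin-gated
        self.fee = new_fee

    def deposit(self, caller, amt):
        pass                          # must not touch fee

    def poke(self, caller):
        pass


def only_admin_changes_fee(method_name, caller, *args):
    p = Protocol()
    before = p.fee
    try:
        getattr(p, method_name)(caller, *args)
    except AssertionError:
        return True                   # call reverted: fee unchanged
    return caller == p.admin or p.fee == before

# One 10-line property, expanded across the whole external surface:
for name, args in [("set_fee", (50,)), ("deposit", (10,)), ("poke", ())]:
    assert only_admin_changes_fee(name, "mallory", *args)
```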
Q3. Your CVL spec is timing out on a key invariant. What's your sequence of debugging moves?
- Check whether it's a solver timeout vs an unknown result — different fixes.
- Try different solvers (z3 vs cvc5) — sometimes one wins where the other loses.
- Link external contracts so calls aren't fully symbolic.
- Summarize complex non-linear math (e.g., the rate function) with a sound but simpler model.
- Carry already-proven invariants via requireInvariant.
- Split the rule's parametric expansion: filter out methods that prove fine separately.
- Reduce loop_iter and check if it's a loop-unroll problem.
- As a last resort, decompose the invariant into smaller invariants.
Q4. Where does formal verification clearly NOT win?
Game-theoretic / economic properties (oracle manipulation profitability, MEV outcomes, bridge validator collusion), cross-protocol composition where you can't model the other protocol, gas-related properties (the solver doesn't reason about gas), and properties that depend on off-chain or human behavior. FV catches "the code does what it says"; it doesn't catch "what the code says is the wrong thing."
Q5. Halmos vs Certora — when would you pick which?
Certora for serious commercial work: better tooling, hosted infra, hooks/ghosts, support team, mature optimizer. Halmos for OSS-only stacks, zero-setup checks in CI, and when your properties express well as Foundry tests. Many serious teams use both — Certora as the main spec store, Halmos as a fast CI gate on a subset.
EVM / Solidity gotchas
Q6. Why does tx.origin kill an authorization check?
tx.origin is the EOA at the start of the call chain. If a user is tricked into calling a malicious contract, that contract can then call your contract, and your require(tx.origin == owner) check passes — because tx.origin is still the legitimate owner. Use msg.sender.
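A minimal sketch of the call chain (function and account names are invented): `tx.origin` stays the victim EOA through the whole chain, so a phishing contract inherits the owner's authorization.

```python
# Sketch of why tx.origin auth fails. Names are hypothetical.
OWNER = "owner_eoa"

def vault_withdraw(tx_origin, msg_sender):
    # vulnerable check: passes even when called via an attacker contract
    assert tx_origin == OWNER
    return "funds sent to caller"

def attacker_contract(tx_origin):
    # the attacker contract calls the vault; msg.sender is now the
    # attacker, but tx.origin is still the owner who was phished
    return vault_withdraw(tx_origin, msg_sender="attacker_contract")

# Owner is tricked into calling the attacker contract:
assert attacker_contract(tx_origin=OWNER) == "funds sent to caller"

# With msg.sender-based auth the same call fails:
def safe_withdraw(msg_sender):
    assert msg_sender == OWNER

try:
    safe_withdraw(msg_sender="attacker_contract")
    leaked = True
except AssertionError:
    leaked = False
assert not leaked
```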
Q7. A contract uses delegatecall to a library. What can go wrong?
Storage collision (the library's writes assume a slot layout that conflicts with the caller's); msg.sender and msg.value are preserved from the original call rather than pointing at the calling contract — surprising for some patterns; if the library can be replaced or contains selfdestruct, the caller's logic disappears (Parity multisig, 2017). Mitigation: lock the library / use an immutable address; check storage compatibility; use EIP-1967 slots for proxy state.
Q8. How does storage collision happen in an upgrade and how do you prevent it?
If V2 inserts a state variable above an existing one (or reorders them), every slot after the change shifts. Reads return garbage; writes corrupt unrelated state. Prevention: only ever append new state variables in upgrades; use storage-layout diff in CI (forge inspect storage-layout); use OZ upgrades plugin or Diamond Storage to isolate.
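The shift can be simulated in a few lines (variable names are invented): slots are positional, so inserting one variable in V2 makes every later read land one slot early.

```python
# Sketch of an upgrade storage collision. Layouts are hypothetical.
v1_layout = ["owner", "totalSupply", "paused"]       # slots 0, 1, 2
storage = {0: "0xOwner", 1: 1_000_000, 2: False}     # values written by V1

# V2 inserts `feeBps` after `owner`; every later variable shifts by one:
v2_layout = ["owner", "feeBps", "totalSupply", "paused"]

def read(layout, name):
    return storage[layout.index(name)]

assert read(v1_layout, "totalSupply") == 1_000_000   # V1 reads correctly

# V2 reads V1's totalSupply slot as feeBps, and paused as totalSupply:
assert read(v2_layout, "feeBps") == 1_000_000        # garbage
assert read(v2_layout, "totalSupply") == False       # garbage
```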
Q9. What's the "return bomb" pattern?
A malicious callee returns massive returndata. The caller's returndatacopy (implicit in Solidity's external call) consumes huge gas, potentially OOG-ing the caller. Mitigation: assembly that caps the returndata size before copying.
Attack vectors
Q10. Walk through read-only reentrancy.
Function A makes an external call mid-flow, before all state updates. The callee, instead of re-entering A, calls a view function on the original contract. The view returns stale or inconsistent state because A's writes haven't completed. Other protocols that rely on the view get poisoned data. The 2023 Curve incident hit several lending protocols this way. Mitigation: view functions revert under the reentrancy lock, or external consumers don't trust views read from within state-modifying calls.
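A minimal sketch of the window (the pool model and field names are invented, loosely echoing a Curve-style `virtual_price`): the callback runs after the balance moved but before shares were burned, so the view is momentarily inconsistent.

```python
# Sketch of read-only reentrancy. Pool model is hypothetical.
class Pool:
    def __init__(self):
        self.balance = 100
        self.supply = 100

    def virtual_price(self):        # view consumed by other protocols
        return self.balance / self.supply

    def remove_liquidity(self, amt, callback):
        self.balance -= amt         # funds sent out first...
        callback(self)              # ...attacker's receive() runs here...
        self.supply -= amt          # ...shares burned only afterwards

seen = []
pool = Pool()
pool.remove_liquidity(50, lambda p: seen.append(p.virtual_price()))

assert seen[0] == 0.5               # stale: balance updated, supply not
assert pool.virtual_price() == 1.0  # consistent again after the call
```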
Q11. Explain a donation attack on an ERC-4626 vault.
Empty vault. Attacker deposits 1 wei → 1 share. Attacker donates 10000 underlying tokens by direct transfer to the vault contract. Vault now has 10001 assets, 1 share. Next victim's 9999-token deposit rounds to 0 shares (because 9999 * 1 / 10001 == 0). Victim deposit captured. Mitigations: virtual shares / decimal offset (OZ default), dead shares to address(0), minimum first deposit, internal accounting that doesn't read balanceOf.
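The arithmetic from the answer, step by step with Solidity-style floor division:

```python
# Donation attack share math, using floor division as the EVM does.
assets, shares = 0, 0

# attacker deposits 1 wei: first deposit mints 1:1
assets += 1
shares += 1

# attacker "donates" 10000 tokens via direct transfer (no shares minted)
assets += 10_000

# victim deposits 9999: shares_out = amount * totalShares // totalAssets
victim_shares = 9_999 * shares // assets
assert victim_shares == 0           # 9999 * 1 // 10001 == 0
assets += 9_999                     # victim's tokens enter the vault anyway

# attacker redeems the sole share and takes everything, donation included
attacker_payout = 1 * assets // shares
assert attacker_payout == 20_000    # 1 + 10000 + 9999
```

Net attacker profit is 20000 − (1 + 10000) = 9999, the whole victim deposit; the virtual-shares / decimal-offset mitigation makes the rounding loss negligible instead.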
Q12. Why is using spot AMM price as an oracle dangerous?
A flash loan gives an attacker unbounded capital within one block. They swap into the AMM to move the mid-price, read the manipulated price as oracle, exploit a downstream protocol that trusted that price, then swap back. Net cost: AMM fees only. Cream/bZx-style. Mitigation: don't use spot. Use Chainlink with staleness/deviation checks, or use TWAPs with windows longer than a block.
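The price move is cheap to simulate on a constant-product pool (numbers are illustrative and the swap fee is ignored for clarity):

```python
# Sketch of spot-price manipulation on a constant-product pool (x*y = k).
x, y = 1_000.0, 1_000.0             # reserves; spot price y/x = 1.0
k = x * y

spot_before = y / x
assert spot_before == 1.0

# flash-loaned capital: dump 9000 of token X into the pool
dx = 9_000.0
x += dx
y = k / x                           # 1_000_000 / 10_000 = 100

spot_after = y / x                  # 100 / 10_000 = 0.01
assert spot_after == 0.01           # X's spot price crashed 100x in one tx

# any contract reading y/x as an "oracle" in this block sees 0.01;
# the attacker swaps back afterwards, paying only fees and slippage
```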
Q13. What's "frontrunnable initialize" and how do you prevent it?
Proxy deployed; implementation has an initialize() not yet called by the deployer; an attacker calls initialize() first, becomes admin. Prevention: deploy and initialize in the same transaction (factory pattern); call _disableInitializers() in the implementation's constructor to lock the implementation; for proxies, lock initialization to the deployer.
Audit methodology
Q14. Walk me through how you review a new 2000-LOC lending protocol in 10 days.
Day 1-2: README, threat model, spec docs. List external functions per contract. Draw trust boundaries. Identify the 10-20 invariants the protocol must hold. Day 3-6: top-down review — money flows, state machines, access control matrix. Day 7-8: bottom-up — every line, especially the math and assembly. Day 9: cross-protocol composition risks, oracle paths. Day 10: write up findings with severity rationale; deliver report. Throughout: Slither and Echidna runs in background.
Q15. How do you write a high-severity finding that gets fixed in 24 hours?
Title leads with the fix ("Reentrancy in withdraw — add nonReentrant"). One-sentence summary. Concrete impact in dollar terms. Likelihood + preconditions. Affected file and line numbers. Reproducible PoC (Foundry test). Recommendation with two or three options. Severity rationale matched to a standard rubric.
Q16. What's the "shorten audit cycles" mandate in practice?
Pushing the floor up before external auditors arrive: clean Slither runs, a mutation-tested test pack, CVL invariants proven on core properties, no known low/medium issues. The auditor's time is then spent on novel and economic / game-theoretic bugs rather than rediscovering visibility modifiers. Measurable: audit duration shrinks; finding density (high+) per kLOC drops.
Bounty & war-room ops
Q17. A bounty researcher submits a critical with a working PoC. Walk through the next 24 hours.
T+1h: acknowledge in the platform; set 4-hour update expectation. T+1-4h: reproduce against local fork. If reproduces: declare war room, page secondary on-call. T+4h: severity decision committed. Multi-sig signers alerted. Consider pause. T+12h: patch in PR. T+24h: independent re-review. Researcher kept in the loop the whole time. Payout and disclosure schedule communicated.
Q18. A researcher disputes your severity downgrade and threatens to publish. What do you do?
Don't escalate emotionally. Re-examine the reasoning honestly. If they're right, upgrade and apologize. If they're not, write a clean explanation of the rubric and the preconditions that drove the downgrade — name the bounty platform's published rubric if applicable. Offer a goodwill bonus if reasonable. As a last resort, accept that disclosure may happen and prepare the public comms in advance.
Q19. Design a war-room SOP from scratch for a 5-person protocol team.
Trigger: any credible threat-to-funds report from monitoring, bounty, or community. Roles: Lead (technical decisions, comms approval), Comms (public statements, separate from Lead), Engineering (patch and review), Multi-sig (ready to execute pause). Phases: triage 0-30min, containment 30min-2h, investigation hours, patch days, recovery days, disclosure 1-4 weeks. Comms discipline: one voice, fast acknowledgments, slow commitments. Drill quarterly.
Q20. What's your initial response SLA on a critical bounty report?
Industry-standard senior posture: acknowledgment within 1 hour, initial severity assessment within 24 hours, payout within 2-4 weeks (KYC permitting), public disclosure within 30-90 days of fix. The 1-hour acknowledgment matters: researchers talk, and a 12-hour silence kills your bounty program's reputation.
Design & code review
Q21. Design a hardened oracle wrapper around Chainlink.
Three checks at minimum: staleness (revert if block.timestamp - updatedAt > maxStaleness), invalid (revert if price <= 0 or answeredInRound < roundId), deviation (revert if move exceeds threshold from last accepted price). Storage: immutable for feed address and thresholds; one lastPrice slot. No admin (or two-step admin with timelock). Pause / circuit-breaker on extreme deviation.
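The three checks as plain logic, in a Python sketch (field names mirror the answer's Chainlink-style round data; thresholds are illustrative):

```python
# Sketch of the three oracle checks. Thresholds are illustrative.
MAX_STALENESS = 3600          # seconds
MAX_DEVIATION_BPS = 500       # 5% vs last accepted price

def validate(price, updated_at, round_id, answered_in_round,
             now, last_price):
    if now - updated_at > MAX_STALENESS:
        raise ValueError("stale")
    if price <= 0 or answered_in_round < round_id:
        raise ValueError("invalid round")
    if last_price is not None:
        deviation_bps = abs(price - last_price) * 10_000 // last_price
        if deviation_bps > MAX_DEVIATION_BPS:
            raise ValueError("deviation")
    return price

# passes: fresh, valid, within 5% of last accepted price
assert validate(102, 1_000, 7, 7, now=1_100, last_price=100) == 102

# fails: a 50% jump trips the deviation circuit breaker
try:
    validate(150, 1_000, 7, 7, now=1_100, last_price=100)
    tripped = False
except ValueError:
    tripped = True
assert tripped
```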
Q22. When would you put a feature in the periphery vs the core?
Default to periphery. Move to core only if (a) the core can't enforce its invariants without it, or (b) the feature must hold user funds (and the periphery can't safely). The bar for adding to the audited core is much higher than adding to the periphery because every line of core requires re-audit; periphery can iterate.
Q23. Critique this pattern: balances[msg.sender] += msg.value; (bool ok,) = msg.sender.call{value: msg.value}(""); require(ok);
Wait — that's deposit, not withdraw, and the external call is pointless. Bigger issues: (1) why are you calling msg.sender after a deposit? smells like a callback pattern that opens reentrancy on whatever runs after; (2) if this is part of a larger function, the external call before other state changes is the bug. The pattern is suspicious regardless. In a real review I'd ask for context, then flag it as "unusual pattern — explain or remove" by default.
Q24. What's wrong with require(block.timestamp >= deadline, "EXPIRED") in a swap router?
The comparison is backwards — it should be require(block.timestamp <= deadline, "EXPIRED"). deadline is a future timestamp by which the call must complete; the require should be "now must be at or before the deadline." As written, the swap reverts until the deadline passes and then succeeds forever after — the opposite of the intent, leaving users with no protection against stale, long-pending executions.
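The inversion is easy to demonstrate (toy functions, modeling the require as an assert):

```python
# Buggy vs intended deadline check, evaluated before and after the deadline.
deadline = 100

def swap_buggy(now):
    assert now >= deadline, "EXPIRED"   # as written: reverts BEFORE deadline

def swap_fixed(now):
    assert now <= deadline, "EXPIRED"   # intended: reverts only AFTER

def reverts(fn, now):
    try:
        fn(now)
        return False
    except AssertionError:
        return True

assert reverts(swap_buggy, 50)        # buggy: a timely swap at t=50 fails
assert not reverts(swap_fixed, 50)    # fixed: timely swap succeeds
assert reverts(swap_fixed, 150)       # fixed: late swap is rejected
```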
Behavioral
Q25. Tell me about a time you disagreed with an auditor's finding.
Structure your answer: (1) the finding, (2) why you initially disagreed, (3) the resolution. Aim for a story where you took the disagreement seriously, investigated honestly, and either changed your mind (best — shows live learning) or convinced the auditor with concrete evidence (also strong — shows technical depth). Avoid stories where the auditor was "just wrong"; those play badly.
Q26. You've found what might be a critical bug in production. The protocol team disagrees about whether to pause. How do you navigate?
Name the asymmetry: pausing has reversible costs (user friction, frontend confusion); not-pausing has irreversible costs (funds gone). The bar for pausing should be lower than the bar for full conviction. Make the pause decision quickly, then reason about the bug under the calm of paused state. Document the pause rationale publicly. If you're wrong, unpausing in a few hours is a survivable mistake; if you're right and didn't pause, the protocol is on Rekt News.
Ecosystem
Q27. Name three audit firms and what they're known for.
Trail of Bits — broad coverage across domains, maintainers of Slither/Echidna/Medusa, strong static-analysis depth. OpenZeppelin — audits plus the standard library; pairs well with their upgrade tooling. Spearbit / Cantina — marketplace model with curated senior auditors. (Acceptable substitutes: ChainSecurity for a formal-methods lean; Zellic for cryptography / novel VMs; Sigma Prime for consensus / staking; Halborn for breadth.)
Q28. Summarize the Beanstalk incident in 3 sentences.
An attacker flash-loaned BEAN governance tokens to acquire majority voting power within a single transaction. They proposed and immediately executed (because Beanstalk had no time delay on emergency proposals) a malicious proposal that drained the protocol's reserves. Lesson: voting power must snapshot at proposal time, not execution time, and even "emergency" governance paths need adversarial review of the speed/safety tradeoff.