Deep Dive — Audit Methodology
How a senior security engineer actually conducts a review, scores findings, and ships reports that change behavior. The methodology you'd use as the in-house auditor and the one you'd want a firm to apply to your code.
Why methodology matters
Anyone with enough time and Solidity background will find some bugs. A senior security engineer finds the bugs that matter, in a way that's reproducible across reviews. That's methodology — a system you apply, not a vibe you bring.
In the interview, you'll be asked to walk through how you review a codebase. The answer should be specific, ordered, and explain the why behind each step.
Pre-audit preparation
The work that determines whether the audit produces value happens before any auditor opens the repo.
1. Scope freeze
Lock the commit hash that's being reviewed. Any new commits land in a separate branch. Auditors are reviewing a snapshot, not a moving target. If you must change scope mid-audit, amend the engagement contract, document the change, and pay the auditor for the re-review.
2. NatSpec coverage
Every external function should carry a @notice (what users see) and @dev (what auditors need). Missing NatSpec on a function with surprising semantics is a known cause of "wasted audit hours."
/// @notice Borrow `assets` against existing collateral.
/// @dev Reverts if resulting position is unhealthy. Caller must pre-approve.
/// @param assets Amount in underlying token decimals.
/// @param onBehalf Recipient of the borrow; reverts if not authorized.
/// @return shares Borrow shares minted to the position.
function borrow(uint256 assets, address onBehalf) external returns (uint256 shares);

3. Invariant list
A short doc — even 10-20 bullets — stating what must always be true. Hand it to the auditor on day 1. Their job is to find the inputs that break the invariants; yours is to enumerate the invariants.
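One way to make each bullet machine-checkable is to mirror the doc as Foundry invariant tests. A minimal sketch for a hypothetical lending vault — the `IVault` interface and both invariants are illustrative, not from any specific protocol:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {Test} from "forge-std/Test.sol";

// Hypothetical vault surface; substitute the audited protocol's interface.
interface IVault {
    function totalAssets() external view returns (uint256);
    function totalSupply() external view returns (uint256);
    function totalBorrows() external view returns (uint256);
}

contract VaultInvariants is Test {
    IVault internal vault; // wired up in setUp() against the audited commit

    // Invariant bullet: "assets always cover outstanding borrows."
    function invariant_solvency() public {
        assertGe(vault.totalAssets(), vault.totalBorrows());
    }

    // Invariant bullet: "shares exist only when assets back them."
    function invariant_sharesBacked() public {
        if (vault.totalSupply() > 0) {
            assertGt(vault.totalAssets(), 0);
        }
    }
}
```

Handing the auditor the prose list plus the executable version tells them which invariants are already fuzz-tested and which are only claimed.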
4. Threat model
Who are the adversaries? What can they control? What's their budget? A reasonable starter:
- Random user — can call any external function with any input.
- Whale — same plus a large balance; can move the oracle if exposed.
- MEV searcher — same plus block-position control.
- Compromised admin key — assume worst case.
- Composed protocol — assume any contract called externally is adversarial.
- Bridge / cross-chain — assume message ordering is adversarial.
5. Test pack and CVL spec
Tests must pass on the audited commit, the CVL spec must verify, and a coverage report should be attached. The auditor shouldn't have to redo your unit testing.
6. Known issues doc
Bugs you've already found, deferred, or accepted. Auditors hate finding things you already knew about; they love being told upfront so they can focus on novel issues.
The actual review process
Two complementary approaches you should be able to describe:
Top-down
- Read the README and threat model first. Understand intent before code.
- Map the contract surface. List all externally-callable functions across the system. Group by role: user, admin, keeper.
- Trust boundaries. Draw a diagram: which addresses are trusted, which are not, where the trust transition happens.
- Money flow. Trace tokens in: deposit, mint, repay. Trace tokens out: withdraw, redeem, borrow, liquidate. Every flow should have a checkable invariant.
- State machine. What states can a position be in? What transitions are valid? Are any transitions missing checks?
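The state-machine question is easiest to audit when the transitions live in one place. A hedged sketch of that pattern — the position states and the transition table here are hypothetical, not from any real codebase:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Hypothetical position state machine. During review, check that every
// externally-reachable state change routes through a guard like this,
// so the valid-transition table exists in exactly one place.
contract PositionStateMachine {
    enum State { Empty, Collateralized, Borrowing, Liquidatable, Closed }

    mapping(uint256 => State) public positionState;

    error InvalidTransition(State from, State to);

    function _transition(uint256 id, State to) internal {
        State from = positionState[id];
        bool ok =
            (from == State.Empty && to == State.Collateralized) ||
            (from == State.Collateralized && to == State.Borrowing) ||
            (from == State.Borrowing && to == State.Collateralized) ||
            (from == State.Borrowing && to == State.Liquidatable) ||
            (from == State.Collateralized && to == State.Closed);
        if (!ok) revert InvalidTransition(from, to);
        positionState[id] = to;
    }
}
```

When transitions are scattered across functions instead, each one is a place where a missing check can hide — exactly the "missing checks" the bullet asks about.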
Bottom-up
- Read every line. Don't skim. Every line is a chance for a bug.
- Annotate as you go. Use a tool like solidity-shell or just a markdown file. Mark suspicious patterns with severity guesses.
- Cross-reference with the storage layout. Every state variable should be touched in only the functions you'd expect.
- Inline assembly second pass. Walk assembly blocks separately. Cross-check against the equivalent high-level Solidity.
Adversarial paths
For each externally-callable function, ask the standard set:
- Can the caller be anyone? If so, what's the worst input?
- Does it make external calls? If so, what reentrancy risk?
- Does it read prices/oracles? Can they be manipulated within the call?
- Does it modify shared state? Is the modification atomic w.r.t. other functions?
- Does it transfer tokens? Does it use the actual received amount or the requested amount?
- Does it round? In whose favor?
- What happens on first call (zero-state)? Last call (depleted state)?
- Is there a frontrunnable initialize / first-action?
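The "actual received amount" question above has a standard defensive answer: credit the balance delta, not the requested amount, because fee-on-transfer and rebasing tokens deliver less than `amount`. A minimal sketch using OpenZeppelin's SafeERC20 (contract and variable names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

contract DepositExample {
    using SafeERC20 for IERC20;

    IERC20 public immutable asset;
    mapping(address => uint256) public deposited;

    constructor(IERC20 _asset) {
        asset = _asset;
    }

    function deposit(uint256 amount) external {
        // Measure what actually arrived: fee-on-transfer tokens make
        // the received amount smaller than the requested amount.
        uint256 balanceBefore = asset.balanceOf(address(this));
        asset.safeTransferFrom(msg.sender, address(this), amount);
        uint256 received = asset.balanceOf(address(this)) - balanceBefore;
        deposited[msg.sender] += received;
    }
}
```

In review, any code path that books `amount` instead of `received` for an arbitrary ERC-20 is worth a finding.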
Severity assessment
The standard model: severity = impact × likelihood. Different shops weight differently, but the rubric is shared in spirit.
| Level | Impact | Likelihood | Examples |
|---|---|---|---|
| Critical | Direct loss of user/protocol funds | Anyone, no preconditions | Unbounded mint; reentrancy drain; broken access control on withdraw |
| High | Loss with prerequisites OR catastrophic DoS | Anyone with capital / position | Donation attack on empty vault; oracle manipulation in a single block; admin lockout |
| Medium | Loss in narrow conditions OR meaningful protocol degradation | Specific actor / state | Rounding error accumulating to dust; griefing via gas; suboptimal interest curve |
| Low | Limited impact; best-practice violation | Limited | Missing NatSpec on dangerous function; wrong event emitted; unused import |
| Informational | No security impact | — | Style; typo; gas optimization |
Different platforms use slightly different rubrics. You should be conversant with at least these:
- Immunefi rubric — explicit USD-loss tiers; depends on protocol-published bounty cap.
- Code4rena — three-tier (H/M/L) with auditor-judge adjudication.
- Sherlock — strict criteria for "high" requiring specific impact statements.
- Spearbit / Cantina — five-tier with explicit "impact × likelihood" matrix per finding.
Vulnerability classification
Knowing the canonical taxonomies signals professionalism. The ones you should know cold:
- SWC Registry (Smart Contract Weakness Classification). Entries numbered SWC-100 through SWC-136: SWC-101 integer overflow, SWC-107 reentrancy, SWC-115 tx.origin auth, etc. Older and no longer actively maintained, but still cited.
- DASP Top 10 (Decentralized Application Security Project). Reentrancy, access control, arithmetic, unchecked low-level calls, DoS, bad randomness, front-running, time manipulation, short address, unknown unknowns.
- OWASP Smart Contract Top 10 — newer, less established but referenced.
- In-house taxonomies — every mature firm has its own. Trail of Bits' "Building Secure Smart Contracts" categories, OpenZeppelin's "common pitfalls," etc.
For your own protocol, maintain a small internal taxonomy of protocol-specific bug classes — the ones that recur in your code. This is one of the highest-value internal docs to maintain.
Writing a vulnerability report that gets fixed
A good finding has a fixed shape. Lazy reports get bounced. Sharp reports get fixed in 24 hours.
# [HIGH] Donation attack inflates share price on empty vault
## Summary
A first depositor can be front-run by an attacker who donates underlying
tokens directly to the vault, inflating the share price and capturing the
first depositor's deposit as rounding loss.
## Impact
Loss of user funds. Worst case: complete loss of the first deposit. The
attack costs the attacker only their donation; they earn the depositor's
shortfall.
## Likelihood
High in practice. Mempool monitoring identifies pending first deposits;
the attack is profitable on any non-trivial deposit.
## Affected code
contracts/Vault.sol#L142-L168 (function `deposit`)
## Proof of concept
[runnable Foundry test demonstrating the exploit; see attached repo]
## Recommendation
1. Mint initial dead shares to address(0) on first deposit (1000 wei),
making donation attacks unprofitable.
2. OR enforce a minimum first-deposit amount.
3. OR use virtual shares / virtual assets (OpenZeppelin ERC4626 pattern).
## Severity rationale
Per Immunefi rubric: direct loss of user funds without preconditions
beyond mempool observation. High, not Critical, because the loss is
capped at the first deposit, not the entire vault.

The structure: title with severity, summary, impact, likelihood, affected code with line numbers, PoC, recommendation(s) (more than one if possible), severity rationale. A senior security engineer's reports are scannable in 30 seconds.
Lead with the recommendation in the title when possible. "Reentrancy in withdraw — add nonReentrant" gets fixed faster than "Potential reentrancy concern in withdraw flow".
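The dead-shares recommendation from the sample report can be sketched as follows. This is illustrative only — constants and names are assumptions, and production code should prefer an audited pattern such as OpenZeppelin's virtual-shares ERC-4626:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Sketch of the dead-shares mitigation: on the first deposit, burn a
// fixed amount of shares to address(0) so an attacker can never own
// ~100% of supply and profit from donation-driven price inflation.
abstract contract VaultWithDeadShares {
    uint256 internal constant DEAD_SHARES = 1000;

    uint256 public totalShares;
    mapping(address => uint256) public shareOf;

    function _mintShares(address to, uint256 shares) internal {
        if (totalShares == 0) {
            require(shares > DEAD_SHARES, "first deposit too small");
            shareOf[address(0)] = DEAD_SHARES;
            totalShares = DEAD_SHARES;
            shares -= DEAD_SHARES;
        }
        shareOf[to] += shares;
        totalShares += shares;
    }
}
```

The economics: with 1000 wei of permanently-burned shares, inflating the share price enough to round away a victim's deposit costs the attacker more in donated assets than they can recover.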
The post-audit cycle
Reports land. Now the real work:
- Triage. Within 24-48 hours, the protocol team reviews every finding. Acknowledge, dispute, or accept.
- Fix in a branch. Each finding gets a commit. The commit message should reference the finding ID.
- Re-review. The auditor confirms the fix. Sometimes a fix introduces a new bug; re-review catches it.
- Update the test pack and CVL spec. Every fixed bug should have a regression test. Many should add an invariant if one didn't already cover the property.
- Publish. Public audit reports are a credibility signal. Hold any open findings until they're fixed or accepted with rationale.
- Update the internal taxonomy. Add the new bug class so the next review checks it as a default.
Doing it on your own code
Internal review is your day job. Tactics that work:
- PR adversarial review. Every protocol PR gets a "what could go wrong" pass from security. Not approval — security adds questions and concerns to be answered before merge.
- Diff review against last audit. When you've added 200 lines since audit, you need to know which lines are new.
`git diff audit-commit..HEAD --stat` is the start.
- Mini-audit per feature. A 4-hour focused review on every new feature merged. Cheaper than waiting for the next external engagement.
- Specs land with features. Every PR adds a CVL invariant or a Foundry invariant test. The bar grows monotonically.
- Adversary-of-the-week. Pick one threat model adversary, hunt only for their attacks for a sprint. Forces depth over breadth.
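The "specs land with features" bullet, in practice: the same PR that adds a feature adds the invariant guarding it. A hedged Foundry sketch for a hypothetical fee-switch feature (interface and names are assumptions):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {Test} from "forge-std/Test.sol";

// Hypothetical: a fee-switch feature lands together with the invariant
// that the fee can never exceed its configured cap, no matter what
// call sequence the fuzzer drives against the handler contracts.
interface IFeeModule {
    function feeBps() external view returns (uint256);
    function MAX_FEE_BPS() external view returns (uint256);
}

contract FeeInvariantTest is Test {
    IFeeModule internal fees; // deployed and wired in setUp()

    function invariant_feeNeverExceedsCap() public {
        assertLe(fees.feeBps(), fees.MAX_FEE_BPS());
    }
}
```

Because the bar "grows monotonically," a later refactor that loosens the fee check fails CI immediately instead of waiting for the next external review.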