Section D · Production

Deployment & Ops

Security at deploy time and after. The window between "code is audited" and "code is live" is where many incidents are born; the window between "we know we're being attacked" and "we've responded" is where reputations are made.

Deployment-time security gates

The default protocol-team checklist for "is this ready to deploy" should include security gates that block deploys, not advise on them.

  • Commit hash matches the audited commit. Every deployable artifact is tied to a specific commit; the deploy script verifies the hash.
  • All audit findings closed or explicitly accepted. No "we'll fix that later." A signed-off acceptance with rationale is fine.
  • CVL spec passes. No proven invariants regressed.
  • Foundry invariant tests pass. Long-running fuzz campaign completed.
  • Storage layout snapshot diff. For upgrades: zero unexpected changes; only additions at the end of the layout (a gate sketch follows this list).
  • Slither clean (or all detectors triaged with rationale).
  • Bytecode diff. Compiled bytecode matches the bytecode previously deployed to a staging chain — bit-for-bit if possible.
  • Multi-sig signers verified. Required signers' machines updated; Ledger firmware current.
  • Pause / breaker functional. Tested on testnet before mainnet deploy.
  • Monitoring configured. Forta bots / Tenderly alerts on the new addresses before they go live.
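
A minimal sketch of the storage-layout gate, assuming a Foundry project with a layout snapshot committed at audit time (the snapshot path and contract name are illustrative):

# Regenerate the layout and fail the deploy on any drift from the audited snapshot
# (the snapshot was taken at audit time with the same forge inspect command).
forge inspect src/Market.sol:Market storage-layout > /tmp/Market.storage.json
if ! diff -u snapshots/Market.storage.json /tmp/Market.storage.json; then
  echo "FATAL: storage layout changed since audit; review before upgrading"
  exit 1
fi

A strict diff also flags the allowed additions at the bottom; that is deliberate: any change should force a human look before the snapshot is regenerated.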

Diff between audited code and deployed code

The most insidious post-audit incident: code is audited at commit X, but commit Y is deployed. Auditors don't know about the changes in X..Y. The bug exists in those changes.

Standard countermeasures:

  • Pin the audited commit in the audit report. Every report has a commit hash on the cover page.
  • Tag the audited commit in git. git tag audit-2026-q2.
  • If you change post-audit: diff against the tag, list every change, and send the diff to the auditor for re-review, even if the change is small. Many incidents have shipped because "we just added a small thing post-audit."
  • Pre-deploy ceremony. Whoever runs the deploy script reads aloud "deploying commit X, audited by Y on date Z, fix-commits A,B,C re-reviewed on date Q" to a second person.
# In your deploy script: refuse to proceed unless HEAD is the audited commit
AUDITED=$(git rev-parse audit-2026-q2)
CURRENT=$(git rev-parse HEAD)
if [ "$AUDITED" != "$CURRENT" ]; then
  echo "WARNING: HEAD != audited commit"
  git log "$AUDITED..$CURRENT" --stat
  read -r -p "Type 'deploy-anyway' to continue: " answer
  [ "$answer" = "deploy-anyway" ] || exit 1
fi

Verification on deploy

Contract verification (on Etherscan / Sourcify / Blockscout) is non-negotiable for any user-facing contract. Without it, users can't read the source of the contract they're interacting with. Standard workflow:

forge script script/Deploy.s.sol \
    --rpc-url $RPC \
    --broadcast \
    --verify \
    --etherscan-api-key $ETHERSCAN_KEY

# Or after the fact:
forge verify-contract $ADDR src/Market.sol:Market \
    --chain-id 1 \
    --etherscan-api-key $ETHERSCAN_KEY

Things to verify beyond the source (a quick bytecode check follows the list):

  • Compiler version matches audit.
  • Optimizer settings match (runs count, via_ir flag).
  • Constructor arguments correct.
  • Source verified on multiple explorers if multi-chain.
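
A hedged sketch of that check against a Foundry build artifact; $ADDR, $RPC, and the artifact path are illustrative:

# Compare on-chain runtime bytecode against the local build artifact.
# Caveat: immutables and the trailing metadata hash can legitimately differ;
# strip or account for them if the raw comparison fails.
ONCHAIN=$(cast code $ADDR --rpc-url $RPC)
LOCAL=$(jq -r .deployedBytecode.object out/Market.sol/Market.json)
if [ "$ONCHAIN" != "$LOCAL" ]; then
  echo "Bytecode mismatch: check compiler version, optimizer runs, via_ir"
fi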

CREATE2 prediction and front-running of deploys

CREATE2 lets you predict a contract's address before deploy: address = keccak256(0xff ++ deployer ++ salt ++ keccak256(initcode))[12:]. Useful for consistent addresses across chains (the factory pattern).
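
The prediction can be reproduced from the command line; a sketch using Foundry's cast (the deployer address, salt, and $INIT_CODE are illustrative):

# Predict a CREATE2 address: keccak256(0xff ++ deployer ++ salt ++ keccak256(initcode))[12:]
DEPLOYER=0x4e59b44847b379578588920ca78fbf26c0b4956c  # example factory address
SALT=$(cast keccak "my-protocol-v1")                 # any 32-byte value
INIT_CODE_HASH=$(cast keccak "$INIT_CODE")           # hash of the full creation code
PACKED=0xff${DEPLOYER#0x}${SALT#0x}${INIT_CODE_HASH#0x}
PREDICTED=0x$(cast keccak "$PACKED" | cut -c 27-66)  # last 20 bytes of the hash
echo "will deploy at: $PREDICTED"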

Security implications:

  • Address front-running. The address commits to the initcode hash, so raw CREATE2 can't be hijacked with different code; the danger is permissionless factories that deploy a fixed proxy and configure it in a separate initialize call. An attacker can replay the deploy on a chain you haven't reached yet and initialize it maliciously, so users interacting with your well-known address on that chain hit attacker-controlled logic. Mitigation: bind msg.sender into the salt, make initialization atomic with deployment, and deploy on all chains before announcing.
  • Initcode mismatch. If the deployed initcode differs from what was simulated/predicted (e.g., different compiler), the address diverges silently.
  • Self-destruct rebirth. Pre-Cancun, you could SELFDESTRUCT and redeploy different code at the same address via CREATE2. Post-Cancun (EIP-6780), SELFDESTRUCT only deletes a contract when called in the same transaction that created it, eliminating this vector for existing contracts.

Multi-sig + timelock + guardian configurations

The standard configuration for a privileged action (a sketch of the queue/execute calls follows the list):

  1. Multi-sig proposes the action (N-of-M Safe; typically 3-of-5 to 5-of-9).
  2. Timelock queues the action with a delay (24 hours to 7 days, depending on action class).
  3. Guardian can cancel the queued action during the delay if it looks malicious.
  4. Multi-sig executes after the delay passes.
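
A minimal sketch of steps 2-4 against an OpenZeppelin TimelockController; $TIMELOCK, $TARGET, $CALLDATA, $PREDECESSOR, $SALT, and the 48-hour delay are illustrative, and in practice the Safe transaction builder wraps these calls:

# Step 2: queue the action behind the timelock (172800 = 48h delay in seconds)
cast send $TIMELOCK \
  "schedule(address,uint256,bytes,bytes32,bytes32,uint256)" \
  $TARGET 0 $CALLDATA $PREDECESSOR $SALT 172800 \
  --rpc-url $RPC --ledger

# Step 3 (guardian veto, if needed): cast send $TIMELOCK "cancel(bytes32)" $OPERATION_ID

# Step 4: execute once the delay has elapsed
cast send $TIMELOCK \
  "execute(address,uint256,bytes,bytes32,bytes32)" \
  $TARGET 0 $CALLDATA $PREDECESSOR $SALT \
  --rpc-url $RPC --ledger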

The security properties:

  • No single key compromise = exploit (multi-sig).
  • Any malicious proposal is publicly visible for the timelock window (community can scream).
  • Guardian acts as fast-path veto without requiring full re-vote.
  • Guardian compromise = grief only (can cancel but not execute).

What to ask in the loop: "what's the timelock window for upgrades vs parameter changes; who's on the multi-sig and how are they geo-distributed; what's the guardian's mandate and how is it monitored?"

Incident response runbook

The 6-phase incident model (also covered in 09):

  1. Detection — alert fires (monitor / bounty / community report / direct exploit).
  2. Triage — validate, scope, estimate funds at risk, decide whether to open a war room.
  3. War room — coordinate technical response, comms, multi-sig.
  4. Patch — fix in branch, re-review, deploy via timelock or emergency pathway.
  5. Comms — public statements at appropriate cadence, transparent about what's known and unknown.
  6. Postmortem — public writeup within weeks. Internal lessons-learned doc, taxonomy update.

Each phase has its own runbook with named owners. The named owner for "comms" must not be the named owner for "patch" — different cognitive loads.

Disaster drills (chaos engineering for protocols)

A drill is a rehearsal of the incident response process. You should be running at least one per quarter. Format:

  1. Scenario. A specific incident (e.g., "Chainlink ETH/USD reads $0 for 10 blocks"; "a multi-sig signer's machine is compromised"; "a critical bounty report lands on a public weekend").
  2. No-notice or low-notice. Some team members know it's a drill; others learn at trigger time.
  3. Run the SOP. Page, war-room, comms, patch — all the motions, on testnet or against a fork.
  4. Stopwatch. Time to acknowledge, time to first comms, time to mitigation. Track over drills.
  5. Retrospective. What didn't go smoothly? Update the SOP.

Why drills earn their cost

The most expensive moment in DeFi security is the third hour of a real incident where everyone is realizing they've never done this before. Drilling removes that surprise.

Sample drill scenarios

  • Oracle outage: feed stops updating for 2 hours during high volatility.
  • Compromised admin key: a signer reports phishing.
  • Governance attack: malicious proposal slips through during a holiday.
  • Bridge halt: cross-chain messages stop confirming; user funds locked.
  • Public exploit disclosure on Twitter at 2am UTC.
  • Bounty researcher disagreement: researcher disputes severity, threatens to publish.
  • RPC provider outage: all your monitoring goes dark.
  • L2 sequencer halt: your protocol can't process txs.