Deployment & Ops
From "merged to main" to "live on mainnet at the same address on 12 chains" — and the discipline that keeps it boring.
Foundry deployment scripts
Foundry's forge script is the de-facto deployment tool. A script is a Solidity contract that uses cheatcodes to broadcast transactions.
// script/Deploy.s.sol
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Script} from "forge-std/Script.sol";
import {console2} from "forge-std/console2.sol";
import {LendingCore} from "../src/LendingCore.sol";

contract Deploy is Script {
    function run() external {
        uint256 pk = vm.envUint("DEPLOYER_PK");
        address owner = vm.envAddress("OWNER");

        vm.startBroadcast(pk);
        LendingCore core = new LendingCore(owner);
        vm.stopBroadcast();

        console2.log("LendingCore deployed at:", address(core));
    }
}
# Dry-run (simulation)
forge script script/Deploy.s.sol --rpc-url $RPC
# Live broadcast
forge script script/Deploy.s.sol --rpc-url $RPC --broadcast --verify --etherscan-api-key $KEY
# Multi-chain dry-run
forge script script/Deploy.s.sol --rpc-url $BASE_RPC --sender $DEPLOYER
forge script script/Deploy.s.sol --rpc-url $ARB_RPC --sender $DEPLOYER
Conventions worth following:
- Env-driven config. Never hard-code addresses, keys, or chain IDs. Use vm.envUint / vm.envAddress.
- One script per deployment unit. Don't deploy 10 contracts in one script unless they truly belong together.
- Log every deployed address. Foundry also writes a broadcast/ directory with full receipts; check it into a separate ops repo.
- Two-step ownership. Deploy with a temporary deployer EOA; transfer ownership to the timelocked multisig in a separate script (see the sketch below).
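A minimal sketch of the ownership hand-off script, assuming LendingCore exposes an Ownable-style transferOwnership; the script name and the CORE / MULTISIG env vars are illustrative:
// script/TransferOwnership.s.sol -- illustrative sketch, not the canonical script
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Script} from "forge-std/Script.sol";
import {LendingCore} from "../src/LendingCore.sol";

contract TransferOwnership is Script {
    function run() external {
        uint256 pk = vm.envUint("DEPLOYER_PK");
        address core = vm.envAddress("CORE");
        address multisig = vm.envAddress("MULTISIG");

        vm.startBroadcast(pk);
        // Assumes an Ownable-style owner; prefer Ownable2Step so the multisig
        // must explicitly accept before the deployer EOA loses control.
        LendingCore(core).transferOwnership(multisig);
        vm.stopBroadcast();
    }
}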
Deterministic deploys — CREATE2 / CREATE3
By default CREATE produces an address that depends on (deployer, nonce). CREATE2 (EIP-1014) computes the address from (deployer, salt, bytecode_hash). CREATE3 is a community pattern that drops the bytecode dependency, giving (deployer, salt) only.
| Opcode | Address formula | Same address across chains? |
|---|---|---|
| CREATE | keccak256(rlp([deployer, nonce])) | No — nonce drifts |
| CREATE2 | keccak256(0xff, deployer, salt, code_hash) | Yes, if deployer + salt + bytecode are identical |
| CREATE3 | CREATE2 a fixed proxy from (deployer, salt), then the proxy CREATEs the target: keccak256(rlp([proxy, 1])) | Yes, even with different bytecode |
// Using the deterministic CREATE2 deployer (Arachnid's deployment proxy, Foundry's default) at 0x4e59...
address constant CREATE2_DEPLOYER = 0x4e59b44847b379578588920cA78FbF26c0B4956C;
// Deploying:
bytes32 salt = keccak256("LendingCore.v1");
bytes memory init = abi.encodePacked(type(LendingCore).creationCode, abi.encode(owner));
(bool ok, bytes memory ret) = CREATE2_DEPLOYER.call(abi.encodePacked(salt, init));
require(ok, "CREATE2_FAIL");
address deployed = address(bytes20(ret)); // the proxy returns the raw 20-byte address
// Predicting address (off-chain or in a verify step):
bytes32 hash = keccak256(abi.encodePacked(bytes1(0xff), CREATE2_DEPLOYER, salt, keccak256(init)));
address predicted = address(uint160(uint256(hash)));
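The analogous prediction for CREATE3 is two hops, and the target bytecode appears nowhere in it, which is exactly why the address survives per-chain bytecode differences. A sketch assuming the solmate/solady CREATE3 pattern and its fixed 16-byte proxy init code (here deployer is the contract that invokes the CREATE3 library, e.g. your factory):
// CREATE3 address prediction (solmate/solady pattern) -- a sketch, not library code
bytes32 proxyInitHash = keccak256(hex"67363d3d37363d34f03d5260086018f3"); // fixed proxy init code
address proxy = address(uint160(uint256(
    keccak256(abi.encodePacked(bytes1(0xff), deployer, salt, proxyInitHash))
)));
// The proxy then CREATEs the target at nonce 1: keccak256(rlp([proxy, 1]))
address predicted = address(uint160(uint256(
    keccak256(abi.encodePacked(bytes1(0xd6), bytes1(0x94), proxy, bytes1(0x01)))
)));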
Use CREATE2 when you want the same address across chains and your bytecode is identical. Use CREATE3 when bytecode might differ per chain (e.g., chain-specific immutables) but you want the same address everywhere. Use plain CREATE when neither matters.
Edge cases:
- Constructor arguments must be identical across chains or CREATE2 addresses diverge.
- Compiler version, optimizer settings, and imports all bake into the bytecode hash.
- Chain-specific values passed as constructor args (a chain ID, per-chain oracle addresses) change the init code per chain and break CREATE2 parity. Use CREATE3, or keep the constructor args identical everywhere and read the chain-specific value from the chain itself in code (sketch below).
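A minimal sketch of that second option; the LendingCore fields here are illustrative:
contract LendingCore {
    // Read from the chain itself, not from a constructor arg, so the
    // creation code stays byte-for-byte identical on every chain.
    uint256 public immutable DEPLOY_CHAIN_ID = block.chainid;
    address public immutable owner;

    constructor(address owner_) {
        // owner_ must still be the same address on every chain (e.g. a
        // deterministically-deployed multisig) for CREATE2 parity.
        owner = owner_;
    }
}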
Multi-chain deploys
Deploying to N chains is a project-management exercise as much as a technical one. The senior playbook:
- Bake everything into immutables. Chain-specific data (oracle addresses, sequencer feeds, native wrapper) goes into constructor args, captured at deploy time.
- Use a deterministic deployer. The same address everywhere is a major UX and security win.
- Deploy in a strict order. Dependencies first (oracle adapters), core next, periphery last.
- Verify on each chain. Etherscan + Sourcify + Routescan + whatever the L2's explorer wants.
- Document the canonical addresses. Maintain a single source of truth (an addresses.json in a public repo).
- Run an end-to-end smoke test on each chain. Deposit-borrow-repay-withdraw with a tiny amount and validate that the events match (sketch after the example below).
{
"ethereum": {
"chainId": 1,
"core": "0x...",
"factory": "0x...",
"deployedAt": 19000000
},
"base": {
"chainId": 8453,
"core": "0x...",
"factory": "0x...",
"deployedAt": 11000000
},
"arbitrum": {
"chainId": 42161,
"core": "0x...",
"factory": "0x...",
"deployedAt": 200000000
}
}
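A smoke-test sketch using cast against each chain's RPC; the function signatures, token address, and SMOKE_PK env var are assumptions, so adapt them to the real ABI:
# Smoke test (sketch): tiny deposit-borrow-repay-withdraw, then inspect receipts/events
cast send $USDC "approve(address,uint256)" $CORE 1000000 --rpc-url $RPC --private-key $SMOKE_PK
cast send $CORE "deposit(uint256)" 1000000 --rpc-url $RPC --private-key $SMOKE_PK
cast send $CORE "borrow(uint256)" 500000 --rpc-url $RPC --private-key $SMOKE_PK
cast send $CORE "repay(uint256)" 500000 --rpc-url $RPC --private-key $SMOKE_PK
cast send $CORE "withdraw(uint256)" 1000000 --rpc-url $RPC --private-key $SMOKE_PK
# Confirm each receipt succeeded and the emitted events match expectations
cast receipt <txhash> --rpc-url $RPC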
Verification — Etherscan, Sourcify, and friends
# Etherscan (in-flight, via foundry)
forge script script/Deploy.s.sol --rpc-url $RPC --broadcast \
--verify --etherscan-api-key $ETHERSCAN_KEY
# Etherscan after the fact
forge verify-contract $ADDR src/LendingCore.sol:LendingCore \
--chain-id 1 --etherscan-api-key $ETHERSCAN_KEY \
--constructor-args $(cast abi-encode "constructor(address)" $OWNER)
# Sourcify
forge verify-contract $ADDR src/LendingCore.sol:LendingCore \
--chain-id 1 --verifier sourcify
# Blockscout (many L2s use this)
forge verify-contract $ADDR src/LendingCore.sol:LendingCore \
--chain-id 8453 --verifier blockscout \
--verifier-url https://base.blockscout.com/api/
Verification matters for trust and discoverability — and for users, since dapps and wallets read verified ABIs. A protocol with unverified contracts on mainnet signals "we shipped sloppy." Verify everywhere.
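One way to make "verify everywhere" mechanical is to drive it from the same addresses.json. A sketch using jq; it assumes an Etherscan-compatible verifier on each chain, so swap in --verifier sourcify or --verifier blockscout per chain as needed:
# Verify the core contract on every chain recorded in addresses.json (requires jq)
for chain in ethereum base arbitrum; do
  addr=$(jq -r ".${chain}.core" addresses.json)
  id=$(jq -r ".${chain}.chainId" addresses.json)
  forge verify-contract "$addr" src/LendingCore.sol:LendingCore \
    --chain-id "$id" --etherscan-api-key "$ETHERSCAN_KEY" \
    --constructor-args "$(cast abi-encode "constructor(address)" "$OWNER")"
done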
Mainnet ops hygiene
- Never sign a tx you haven't simulated. Use Tenderly simulation or cast estimate / cast call first (example after this list).
- Hardware-backed keys for any privileged role. Ledger / Trezor / Fireblocks. Hot keys are for dev only.
- Multi-sig for everything privileged. Single-sig is malpractice for owner / admin / guardian.
- Timelock for owner-level actions. Even with multisig, delay window for community to react.
- Separate guardian role for emergencies. Lower bar (smaller multisig, faster) but only "pause" power.
- Document every privileged tx. Pre-tx Slack post, post-tx confirmation, tx hash in ops channel.
- Two-person review before any privileged tx is signed.
- Drill the guardian playbook. Quarterly tabletop exercise where someone "calls" the pause.
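A minimal pre-sign check with cast; the setInterestRateModel signature is illustrative:
# Would the privileged call revert, and what would it cost? Run both before anyone signs.
cast call $CORE "setInterestRateModel(address)" $NEW_IRM --from $MULTISIG --rpc-url $RPC
cast estimate $CORE "setInterestRateModel(address)" $NEW_IRM --from $MULTISIG --rpc-url $RPC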
Dry-run on fork
Every significant on-chain action — deploys, upgrades, migrations, large governance proposals — gets a fork dry-run first.
# Start an anvil fork pinned to current block
anvil --fork-url $MAINNET_RPC --fork-block-number $BLOCK
# Replay your script against the fork
forge script script/Upgrade.s.sol --rpc-url http://localhost:8545 --broadcast \
--private-key $ANVIL_KEY
# Verify the state changes are as expected
cast call $CORE "totalSupplyAssets()(uint256)" --rpc-url http://localhost:8545
Variations:
- Replay against historical state. Pin to a past block and simulate "what would have happened with this fix." Useful for post-mortems.
- Impersonate accounts. cast rpc anvil_impersonateAccount <multisig> lets you act as the multisig without its keys (example below).
- Time travel. cast rpc evm_setNextBlockTimestamp <t> to test time-dependent behavior (timelocks, interest accrual).
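For example, simulating the guardian's pause on the fork without its keys; the pause()/paused() interface and the GUARDIAN variable are illustrative:
# Impersonate the guardian multisig on the anvil fork and fire the pause
cast rpc anvil_impersonateAccount $GUARDIAN --rpc-url http://localhost:8545
cast rpc anvil_setBalance $GUARDIAN 0xde0b6b3a7640000 --rpc-url http://localhost:8545   # 1 ETH for gas
cast send $CORE "pause()" --from $GUARDIAN --unlocked --rpc-url http://localhost:8545
cast call $CORE "paused()(bool)" --rpc-url http://localhost:8545
cast rpc anvil_stopImpersonatingAccount $GUARDIAN --rpc-url http://localhost:8545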
Incident response cadence
Roles in a typical lending-protocol incident:
| Role | Responsibility |
|---|---|
| On-call protocol engineer | Triage; coordinate fix |
| Guardian signer(s) | Sign pause tx if needed |
| Owner / DAO multisig signers | Sign upgrade / parameter change post-pause |
| Comms lead | Public communication, post-mortem coordination |
| Auditor liaison | Loop in audit partners for fix review |
Response SLA (typical):
- Alert fires → on-call ack in < 5 min.
- Triage call spun up in < 15 min.
- Decision to pause / not in < 30 min.
- Initial public statement in < 4 hours (if user-facing).
- Detailed post-mortem within 7 days.
A pre-deploy checklist
Walk this list before every mainnet broadcast:
- All tests green on the exact commit being deployed?
- Gas snapshot diffed; no surprising regressions?
- Storage layout diffed (for upgrades); no shifts?
- Audit complete on this commit (not a different one)?
- Fork dry-run succeeded with expected state changes?
- Deployer EOA funded with sufficient ETH + buffer?
- Deterministic address pre-computed and recorded?
- Verification args (constructor encoding) prepared?
- Initial parameters reviewed by risk team?
- Owner / guardian addresses correct and tested?
- Subgraph manifest updated with new addresses?
- Monitoring alerts configured for new addresses?
- Comms post drafted?
- Two engineers on the call during broadcast?