# Base Chain Specification > Base Chain protocol specification, upgrades, and reference documentation. ## V1: Execution Engine ### EVM Changes #### Transaction Gas Limit Cap [EIP-7825](https://eips.ethereum.org/EIPS/eip-7825) introduces a protocol-level maximum gas limit of 16,777,216 (2^24) per transaction. Transactions exceeding this cap are rejected during validation. Base adopts the same cap as L1 to maximize Ethereum equivalence. #### Upper-Bound MODEXP [EIP-7823](https://eips.ethereum.org/EIPS/eip-7823) caps MODEXP precompile inputs to a maximum of 1024 bytes per field. Calls with larger inputs are rejected. #### MODEXP Gas Cost Increase [EIP-7883](https://eips.ethereum.org/EIPS/eip-7883) raises the MODEXP precompile minimum gas cost from 200 to 500 and triples the general cost calculation. #### CLZ Opcode [EIP-7939](https://eips.ethereum.org/EIPS/eip-7939) adds a new `CLZ` opcode that counts the number of leading zero bits in a 256-bit word, returning 256 if the input is zero. #### secp256r1 Precompile Gas Cost [EIP-7951](https://eips.ethereum.org/EIPS/eip-7951) specifies the secp256r1 precompile at address `0x100` with a gas cost of 6,900. Base already has the `p256Verify` precompile at the same address (added in Fjord via [RIP-7212](https://github.com/ethereum/RIPs/blob/master/RIPS/rip-7212.md)) with a gas cost of 3,450. From V1, the gas cost increases to 6,900 to match the L1 gas cost specified in EIP-7951, maintaining strict equivalence with L1 precompile pricing. ### Networking Changes #### eth/69 [EIP-7642](https://eips.ethereum.org/EIPS/eip-7642) updates the Ethereum wire protocol to version 69, removing legacy fields from the `Status` message and simplifying the handshake. #### Remove Account Balances & Receipts The `FlashblocksMetadata` payload transmitted over the Flashblocks WebSocket is simplified in V1. The `new_account_balances` and `receipts` fields are removed. The `access_list` field remains but will not be populated in V1.
**Before:** ```json { "block_number": 43403718, "new_account_balances": { "0x4200000000000000000000000000000000000006": "0x35277a9715c6df1c99de" }, "receipts": { "0x1ef9be45b3f7d44de9d98767ddb7c0e330b21777b67a3c79d469be9ffab091dd": { "cumulativeGasUsed": "0x177d7bd", "logs": [], "status": "0x1", "type": "0x2" } }, "access_list": null } ``` **After:** ```json { "block_number": 43403718, "access_list": null } ``` ### RPC Changes #### eth\_config RPC Method [EIP-7910](https://eips.ethereum.org/EIPS/eip-7910) introduces the `eth_config` JSON-RPC method, which returns chain configuration parameters such as fork activation timestamps. ## V1 ### Summary * Add Fusaka Support * Simplify Flashblocks Websocket Format * Enable TEE & ZK Proofs ### Activation Timestamps | Network | Activation timestamp | | --------- | -------------------- | | `mainnet` | TBD | | `sepolia` | TBD | ### Execution Layer * [EIP-7823: Upper-Bound MODEXP](/upgrades/v1/exec-engine#upper-bound-modexp) * [EIP-7825: Transaction Gas Limit Cap](/upgrades/v1/exec-engine#transaction-gas-limit-cap) * [EIP-7883: MODEXP Gas Cost Increase](/upgrades/v1/exec-engine#modexp-gas-cost-increase) * [EIP-7939: CLZ Opcode](/upgrades/v1/exec-engine#clz-opcode) * [EIP-7951: secp256r1 Precompile](/upgrades/v1/exec-engine#secp256r1-precompile-gas-cost) * [EIP-7642: eth/69](/upgrades/v1/exec-engine#eth69) * [EIP-7910: eth\_config RPC Method](/upgrades/v1/exec-engine#eth_config-rpc-method) * [Remove Account Balances & Receipts](/upgrades/v1/exec-engine#remove-account-balances--receipts) ### Proofs * TEE * ZK ## Pectra Blob Schedule Derivation ### If enabled If this hardfork is enabled (i.e. 
if there is a non-nil hardfork activation timestamp set), the following rules apply: When setting the [L1 Attributes Deposited Transaction](../../reference/glossary.md#l1-attributes-deposited-transaction), the adoption of the Pectra blob base fee update fraction (see [EIP-7691](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7691.md)) occurs for L2 blocks with an L1 origin equal to or greater than the hard fork timestamp. For L2 blocks with an L1 origin less than the hard fork timestamp, the Cancun blob base fee update fraction is used (see [EIP-4844](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4844.md)). ### If disabled (default) If the hardfork activation timestamp is nil, the blob base fee update rules which are active at any given L1 block will apply to the L1 Attributes Deposited Transaction. ### Motivation and Rationale Due to a consensus layer bug, OPStack chains on Holesky and Sepolia running officially released op-node software did not update their blob base fee update fraction (for the L1 Attributes Deposited Transaction) in tandem with the Prague upgrade on L1. These chains, or any OPStack chain whose sequencer was running the buggy consensus code[^1] when Holesky/Sepolia activated Pectra, will have an inaccurate blob base fee in the [L1Block](../../protocol/execution/evm/predeploys.md#l1block) contract. This optional fork is a mechanism to bring those chains back in line. It is unnecessary for chains using Ethereum mainnet for L1 and running op-node [v1.12.0](https://github.com/ethereum-optimism/optimism/releases/tag/op-node%2Fv1.12.0) or later before Pectra activates on L1. Activating by L1 origin preserves the invariant that the L1BlockInfo is constant for blocks with the same epoch.
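The L1-origin selection rule above can be sketched as follows. The update-fraction constants come from EIP-4844 and EIP-7691; the function name and the use of `None` for a nil timestamp are illustrative, not part of the spec:

```python
# Blob base fee update fractions per upstream EIPs.
CANCUN_BLOB_BASE_FEE_UPDATE_FRACTION = 3338477   # EIP-4844
PRAGUE_BLOB_BASE_FEE_UPDATE_FRACTION = 5007716   # EIP-7691

def blob_base_fee_update_fraction(l1_origin_timestamp: int,
                                  hardfork_timestamp) -> int:
    """Pick the update fraction used when setting the L1 Attributes
    Deposited Transaction, based on the L2 block's L1 origin."""
    if hardfork_timestamp is None:
        # Fork disabled: follow whatever rules are active at the L1 block.
        raise NotImplementedError("use the L1 block's active rules")
    if l1_origin_timestamp >= hardfork_timestamp:
        return PRAGUE_BLOB_BASE_FEE_UPDATE_FRACTION
    return CANCUN_BLOB_BASE_FEE_UPDATE_FRACTION
```

Keying the decision on the L1 origin timestamp (rather than the L2 block timestamp) is what preserves the constant-per-epoch invariant noted above.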
[^1]: This is any commit *before* the code was fixed in [aabf3fe054c5979d6a0008f26fe1a73fdf3aad9f](https://github.com/ethereum-optimism/optimism/commit/aabf3fe054c5979d6a0008f26fe1a73fdf3aad9f) ## Pectra Blob Schedule (Sepolia) ### Activation Timestamps | Network | Activation timestamp | | --------- | -------------------------------------- | | `mainnet` | Not activated | | `sepolia` | `1742486400` (2025-03-20 16:00:00 UTC) | The Pectra Blob Schedule hardfork is an optional hardfork which delays the adoption of the Prague blob base fee update fraction until the specified time. Until that time, the Cancun update fraction from the previous fork is retained. Note that the activation logic for this upgrade is different from most other upgrades. Usually, specific behavior is activated at the *hard fork timestamp*, if it is not nil, and continues until overridden by another hardfork. Here, specific behavior is activated for all times up to the hard fork timestamp, if it is not nil, and then *deactivated* at the hard fork timestamp. ### Consensus Layer * [Derivation](/upgrades/pectra-blob-schedule/derivation) ## Derivation ### Activation Block Rules The first block with a timestamp at or after the Jovian activation time is considered the *Jovian activation block*. To not modify or interrupt the system behavior regarding gas computations, the activation block must not include any non-deposit transactions. The sequencer must enforce this by setting `noTxPool` to `true` in the payload attributes. This rule must be checked during derivation at the batch verification stage, and if the batch for the activation block contains any transactions, it must be `DROP`ped.
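The batch check above can be sketched minimally; the function name and the two-state result are illustrative (the real derivation pipeline distinguishes more batch validity states than shown here):

```python
def jovian_activation_batch_validity(batch_transactions: list) -> str:
    """Batch validity for the Jovian activation block only.

    The activation block must carry no sequenced transactions (the
    sequencer sets noTxPool = true), so any batch content at all
    makes the batch invalid and it must be dropped.
    """
    return "DROP" if batch_transactions else "ACCEPT"
```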
On the Jovian activation block, in addition to the L1 attributes deposit and potentially any user deposits from L1, a set of deposit transaction-based upgrade transactions are deterministically generated by the derivation pipeline in the following order: * L1 Attributes Transaction (still calling the old `L1Block.setL1BlockValuesIsthmus()`) * User deposits from L1 (if any) * Network Upgrade Transactions * L1Block deployment * Update L1Block Proxy ERC-1967 Implementation * GasPriceOracle deployment * Update GasPriceOracle Proxy ERC-1967 Implementation * GasPriceOracle Enable Jovian call The network upgrade transactions are specified in the next section. ### Network Upgrade Transactions The upgrade transaction details below are based on the monorepo at commit hash `b3299e0ddb55442e6496512084d16c439ea2da77`, and will be updated once a contracts release is made. #### L1Block Deployment The `L1Block` contract is deployed. A deposit transaction is derived with the following attributes: * `from`: `0x4210000000000000000000000000000000000006` * `to`: `null` * `mint`: `0` * `value`: `0` * `nonce`: `0` * `gasLimit`: `447315` * `data`: `0x608060405234801561001057600080...` (full bytecode) * `sourceHash`: `0x98faf23b9795967bc0b1c543144739d50dba3ea40420e77ad6ca9848dbfb62e8`, computed with the "Upgrade-deposited" type, with `intent = "Jovian: L1Block Deployment"` This results in the Jovian L1Block contract being deployed to `0x3Ba4007f5C922FBb33C454B41ea7a1f11E83df2C`, to verify: ```bash cast compute-address --nonce=0 0x4210000000000000000000000000000000000006 Computed Address: 0x3Ba4007f5C922FBb33C454B41ea7a1f11E83df2C ``` Verify `sourceHash`: ```bash cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Jovian: L1Block Deployment")) # 0x98faf23b9795967bc0b1c543144739d50dba3ea40420e77ad6ca9848dbfb62e8 ``` Verify `data`: ```bash git checkout 773798a67678ab28c3ef7ee3405f25c04616af19 make build-contracts jq -r
".bytecode.object" packages/contracts-bedrock/forge-artifacts/L1Block.sol/L1Block.json ``` This transaction MUST deploy a contract with the following code hash `0x5f885ca815d2cf27a203123e50b8ae204fdca910b6995d90b2d7700cbb9240d1`. To verify the code hash: ```bash git checkout 773798a67678ab28c3ef7ee3405f25c04616af19 make build-contracts cast k $(jq -r ".deployedBytecode.object" packages/contracts-bedrock/forge-artifacts/L1Block.sol/L1Block.json) ``` #### L1Block Proxy Update This transaction updates the L1Block Proxy ERC-1967 implementation slot to point to the new L1Block deployment. A deposit transaction is derived with the following attributes: * `from`: `0x0000000000000000000000000000000000000000` * `to`: `0x4200000000000000000000000000000000000015` (L1Block Proxy) * `mint`: `0` * `value`: `0` * `gasLimit`: `50,000` * `data`: `0x3659cfe60000000000000000000000003ba4007f5c922fbb33c454b41ea7a1f11e83df2c` * `sourceHash`: `0x08447273a4fbce97bc8c515f97ac74efc461f6a4001553712f31ebc11288bad2` computed with the "Upgrade-deposited" type, with `intent = "Jovian: L1Block Proxy Update"` Verify data: ```bash cast concat-hex $(cast sig "upgradeTo(address)") $(cast abi-encode "upgradeTo(address)" 0x3Ba4007f5C922FBb33C454B41ea7a1f11E83df2C) # 0x3659cfe60000000000000000000000003ba4007f5c922fbb33c454b41ea7a1f11e83df2c ``` Verify `sourceHash`: ```bash cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Jovian: L1Block Proxy Update")) # 0x08447273a4fbce97bc8c515f97ac74efc461f6a4001553712f31ebc11288bad2 ``` #### GasPriceOracle Deployment The `GasPriceOracle` contract is deployed. 
A deposit transaction is derived with the following attributes: * `from`: `0x4210000000000000000000000000000000000007` * `to`: `null` * `mint`: `0` * `value`: `0` * `nonce`: `0` * `gasLimit`: `1750714` * `data`: `0x608060405234801561001057600080...` (full bytecode) * `sourceHash`: `0xd939cca6eca7bd0ee0c7e89f7e5b5cf7bf6f7afe7b6966bb45dfb95344b31545`, computed with the "Upgrade-deposited" type, with `intent = "Jovian: GasPriceOracle Deployment"` This results in the Jovian GasPriceOracle contract being deployed to `0x4f1db3c6AbD250ba86E0928471A8F7DB3AFd88F1`, to verify: ```bash cast compute-address --nonce=0 0x4210000000000000000000000000000000000007 Computed Address: 0x4f1db3c6AbD250ba86E0928471A8F7DB3AFd88F1 ``` Verify `sourceHash`: ```bash cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Jovian: GasPriceOracle Deployment")) # 0xd939cca6eca7bd0ee0c7e89f7e5b5cf7bf6f7afe7b6966bb45dfb95344b31545 ``` Verify `data`: ```bash git checkout 773798a67678ab28c3ef7ee3405f25c04616af19 make build-contracts jq -r ".bytecode.object" packages/contracts-bedrock/forge-artifacts/GasPriceOracle.sol/GasPriceOracle.json ``` This transaction MUST deploy a contract with the following code hash `0xe9fc7c96c4db0d6078e3d359d7e8c982c350a513cb2c31121adf5e1e8a446614`. To verify the code hash: ```bash git checkout 773798a67678ab28c3ef7ee3405f25c04616af19 make build-contracts cast k $(jq -r ".deployedBytecode.object" packages/contracts-bedrock/forge-artifacts/GasPriceOracle.sol/GasPriceOracle.json) ``` #### GasPriceOracle Proxy Update This transaction updates the GasPriceOracle Proxy ERC-1967 implementation slot to point to the new GasPriceOracle deployment.
A deposit transaction is derived with the following attributes: * `from`: `0x0000000000000000000000000000000000000000` * `to`: `0x420000000000000000000000000000000000000F` (GasPriceOracle Proxy) * `mint`: `0` * `value`: `0` * `gasLimit`: `50,000` * `data`: `0x3659cfe60000000000000000000000004f1db3c6abd250ba86e0928471a8f7db3afd88f1` * `sourceHash`: `0x46b597e2d8346ed7749b46734074361e0b41a0ab9af7afda5bb4e367e072bcb8` computed with the "Upgrade-deposited" type, with `intent = "Jovian: GasPriceOracle Proxy Update"` Verify data: ```bash cast concat-hex $(cast sig "upgradeTo(address)") $(cast abi-encode "upgradeTo(address)" 0x4f1db3c6AbD250ba86E0928471A8F7DB3AFd88F1) # 0x3659cfe60000000000000000000000004f1db3c6abd250ba86e0928471a8f7db3afd88f1 ``` Verify `sourceHash`: ```bash cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Jovian: GasPriceOracle Proxy Update")) # 0x46b597e2d8346ed7749b46734074361e0b41a0ab9af7afda5bb4e367e072bcb8 ``` #### GasPriceOracle Enable Jovian This transaction informs the GasPriceOracle to start using the Jovian operator fee formula. 
A deposit transaction is derived with the following attributes: * `from`: `0xDeaDDEaDDeAdDeAdDEAdDEaddeAddEAdDEAd0001` (Depositor Account) * `to`: `0x420000000000000000000000000000000000000F` (Gas Price Oracle Proxy) * `mint`: `0` * `value`: `0` * `gasLimit`: `90,000` * `data`: `0xb3d72079` * `sourceHash`: `0xe836db6a959371756f8941be3e962d000f7e12a32e49e2c9ca42ba177a92716c`, computed with the "Upgrade-deposited" type, with `intent = "Jovian: Gas Price Oracle Set Jovian"` Verify data: ```bash cast sig "setJovian()" # 0xb3d72079 ``` Verify `sourceHash`: ```bash cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Jovian: Gas Price Oracle Set Jovian")) # 0xe836db6a959371756f8941be3e962d000f7e12a32e49e2c9ca42ba177a92716c ``` ## Jovian: Execution Engine ### Minimum Base Fee Jovian introduces a [configurable minimum base fee](https://github.com/ethereum-optimism/design-docs/blob/main/protocol/minimum-base-fee.md) to reduce the duration of priority-fee auctions on Base. The minimum base fee is configured via `SystemConfig` (see [System Configuration](../../protocol/consensus/derivation.md#system-configuration)) and enforced by the execution engine via the block header `extraData` encoding and the Engine API `PayloadAttributesV3` parameters. #### Minimum Base Fee in Block Header Like [Holocene's dynamic EIP-1559 parameters](../holocene/exec-engine.md#dynamic-eip-1559-parameters), Jovian encodes fee parameters in the `extraData` field of each L2 block header. The format is extended to include an additional `u64` field for the minimum base fee in wei. | Name | Type | Byte Offset | | ------------ | ------------------ | ----------- | | `minBaseFee` | `u64 (big-endian)` | `[9, 17)` | Constraints: * `version` MUST be `1` (incremented from Holocene's `0`). * There MUST NOT be any data beyond these 17 bytes. The `minBaseFee` field is an absolute minimum expressed in wei.
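The 17-byte `extraData` layout above can be sketched as follows. The ordering of the denominator and elasticity `u32` fields in bytes `[1, 9)` is assumed from the Holocene format; the function names are illustrative:

```python
import struct

def encode_jovian_extra_data(denominator: int, elasticity: int,
                             min_base_fee_wei: int) -> bytes:
    """Pack Jovian header extraData: version byte 1, then the Holocene
    EIP-1559 denominator and elasticity (u32 each), then minBaseFee
    (u64) at bytes [9, 17), all big-endian. 17 bytes total."""
    return struct.pack(">BIIQ", 1, denominator, elasticity, min_base_fee_wei)

def decode_min_base_fee(extra_data: bytes) -> int:
    """Read minBaseFee back out of a version-1 extraData blob."""
    assert len(extra_data) == 17 and extra_data[0] == 1
    return int.from_bytes(extra_data[9:17], "big")
```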
During base fee computation, if the computed `baseFee` is less than `minBaseFee`, it MUST be clamped to `minBaseFee`. ```javascript if (baseFee < minBaseFee) { baseFee = minBaseFee } ``` Note: `extraData` has a maximum capacity of 32 bytes (to fit the L1 beacon-chain `extraData` type) and may be extended by future upgrades. #### Minimum Base Fee in `PayloadAttributesV3` The Engine API [`PayloadAttributesV3`](../../protocol/execution/index.md#extended-payloadattributesv3) is extended with a new field `minBaseFee`. The existing `eip1559Params` remains 8 bytes (Holocene format). ```text PayloadAttributesV3: { timestamp: QUANTITY prevRandao: DATA (32 bytes) suggestedFeeRecipient: DATA (20 bytes) withdrawals: array of WithdrawalV1 parentBeaconBlockRoot: DATA (32 bytes) transactions: array of DATA noTxPool: bool gasLimit: QUANTITY or null eip1559Params: DATA (8 bytes) or null minBaseFee: QUANTITY or null } ``` The `minBaseFee` MUST be `null` prior to the Jovian fork, and MUST be non-`null` after the Jovian fork. #### Rationale As with [Holocene's dynamic EIP-1559 parameters](../holocene/exec-engine.md#rationale), placing the minimum base fee in the block header allows us to avoid reaching into the state during block sealing. This retains the purity of the function that computes the next block's base fee from its parent block header, while still allowing them to be dynamically configured. Dynamic configuration is handled similarly to `gasLimit`, with the derivation pipeline providing the appropriate `SystemConfig` contract values to the block builder via `PayloadAttributesV3` parameters. ### DA Footprint Block Limit A *DA footprint block limit* is introduced to limit the total amount of estimated compressed transaction data that can fit into a block. For each transaction, a new resource called DA footprint is tracked, next to its gas usage. It is scaled to the gas dimension so that its block total can also be limited by the block gas limit, like a block's total gas usage. 
Let a block's `daFootprint` be defined as follows:

```python
def daFootprint(block: Block) -> int:
    daFootprint = 0
    for tx in block.transactions:
        if tx.type == DEPOSIT_TX_TYPE:
            continue
        daUsageEstimate = max(
            minTransactionSize,
            (intercept + fastlzCoef * tx.fastlzSize) // 1e6
        )
        daFootprint += daUsageEstimate * daFootprintGasScalar
    return daFootprint
```

where `intercept`, `minTransactionSize`, `fastlzCoef` and `fastlzSize` are defined in the [Fjord specs](../fjord/exec-engine.md), `DEPOSIT_TX_TYPE` is `0x7E`, and `//` represents integer floor division. From Jovian, the `blobGasUsed` property of each block header is set to that block's `daFootprint`. Note that pre-Jovian, since Ecotone, it was set to 0, as Base does not support blobs. It is now repurposed to store the DA footprint. During block building and header validation, it must be guaranteed and checked, respectively, that the block's `daFootprint` stays below the `gasLimit`, just like the `gasUsed` property. Note that this implies that blocks may have no more than `gasLimit/daFootprintGasScalar` total estimated DA usage bytes. Furthermore, from Jovian, the base fee update calculation now uses `gasMetered := max(gasUsed, blobGasUsed)` in place of the `gasUsed` value used before. As a result, blocks with high DA usage may cause the base fee to increase in subsequent blocks. #### Scalar loading The `daFootprintGasScalar` is loaded in a similar way to the `operatorFeeScalar` and `operatorFeeConstant` [included](../isthmus/exec-engine.md#operator-fee) in the Isthmus fork. It can be read in two interchangeable ways: * read from the deposited L1 attributes (`daFootprintGasScalar`) of the current L2 block (decoded according to the [Jovian schema](l1-attributes.md)) * read from the L1 Block Info contract (`0x4200000000000000000000000000000000000015`) * using the solidity getter function `daFootprintGasScalar` * using a direct storage-read: big-endian `uint16` in slot `8` at offset `12`.
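The direct storage read can be sketched as follows, assuming the Solidity packing convention in which a slot offset counts bytes from the low-order (right) end of the 32-byte slot; the function name is illustrative:

```python
def read_da_footprint_gas_scalar(slot8_value: bytes) -> int:
    """Decode daFootprintGasScalar from the raw 32-byte value of
    storage slot 8 of the L1 Block Info contract: a big-endian
    uint16 at offset 12 from the low-order end of the slot."""
    assert len(slot8_value) == 32
    offset = 12
    start = 32 - offset - 2   # the uint16 occupies 2 bytes
    return int.from_bytes(slot8_value[start:start + 2], "big")
```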
It takes on a default value as described in the section on [L1 Attributes](l1-attributes.md). #### Receipts After Jovian activation, a new field `daFootprintGasScalar` is added to transaction receipts that is populated with the DA footprint gas scalar of the transaction's block. Furthermore, the `blobGasUsed` receipt field is set to the DA footprint of the transaction. #### Rationale While the current L1 fee mechanism charges for DA usage based on an estimate of the DA footprint of a transaction, no protocol mechanism currently reflects the limited available *DA throughput on L1*. E.g. on Ethereum L1 with Pectra enabled, the available blob throughput is `~96 kB/s` (with a target of `~64 kB/s`), but the calldata floor gas price of `40` for calldata-heavy L2 transactions allows for more incompressible transaction data to be included on most Base chains than the Ethereum blob space could handle. This is currently mitigated at the policy level by batcher-sequencer throttling: a mechanism which artificially constricts block building. This can cause base fees to fall, which implies unnecessary losses for chain operators and a negative user experience (transaction inclusion delays, priority fee auctions). So hard-limiting a block's DA footprint in a way that also influences the base fee mitigates the aforementioned problems of policy-based solutions. ### Operator Fee #### Fee Formula Update Jovian updates the operator fee calculation so that higher fees may be charged. Starting at the Jovian activation, the operator fee MUST be computed as: $$ \text{operatorFee} = (\text{gas} \times \text{operatorFeeScalar} \times 100) + \text{operatorFeeConstant} $$ The effective per-gas scalar applied is therefore `100 * operatorFeeScalar`. Otherwise, the data types and operator fee semantics described in the [Isthmus spec](../isthmus/exec-engine.md#operator-fee) continue to apply. 
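The updated formula can be sketched directly; the function name is illustrative:

```python
def jovian_operator_fee(gas: int, operator_fee_scalar: int,
                        operator_fee_constant: int) -> int:
    """Jovian operator fee: the effective per-gas scalar is
    100 * operatorFeeScalar, plus the flat operatorFeeConstant."""
    return gas * operator_fee_scalar * 100 + operator_fee_constant
```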
#### Maximum value With the new formula, the operator fee's maximum value fits in 103 bits: ```text operatorFee_max = (uint64_max * uint32_max * 100) + uint64_max ≈ 7.92 * 10^30 ``` Implementations that use `uint256` for intermediate arithmetic do not need additional overflow checks. ### EVM Changes #### Precompile Input Size Restrictions Input size restrictions change for some precompiles. The new limits are: * `bn256Pairing`: 81,984 bytes (427 pairs) * `BLS12-381 G1 MSM`: 288,960 bytes (1,806 pairs) * `BLS12-381 G2 MSM`: 278,784 bytes (968 pairs) * `BLS12-381 Pairing`: 156,672 bytes (408 pairs) ## L1 Block Attributes ### Overview The L1 block attributes transaction is updated to include the DA footprint gas scalar. | Input arg | Type | Calldata bytes | Segment | | -------------------- | ------- | -------------- | ------- | | {0x3db6be2b} | | 0-3 | n/a | | baseFeeScalar | uint32 | 4-7 | 1 | | blobBaseFeeScalar | uint32 | 8-11 | | | sequenceNumber | uint64 | 12-19 | | | l1BlockTimestamp | uint64 | 20-27 | | | l1BlockNumber | uint64 | 28-35 | | | basefee | uint256 | 36-67 | 2 | | blobBaseFee | uint256 | 68-99 | 3 | | l1BlockHash | bytes32 | 100-131 | 4 | | batcherHash | bytes32 | 132-163 | 5 | | operatorFeeScalar | uint32 | 164-167 | 6 | | operatorFeeConstant | uint64 | 168-175 | | | daFootprintGasScalar | uint16 | 176-177 | | Note that the first input argument, in the same pattern as previous versions of the L1 attributes transaction, is the function selector: the first four bytes of `keccak256("setL1BlockValuesJovian()")`. In the activation block, there are two possibilities: * If Jovian is active at genesis, there are no transactions in the activation block and therefore no L1 Block Attributes transaction to consider. * If Jovian activates after genesis, the [`setL1BlockValuesIsthmus()`](../isthmus/l1-attributes.md) method must be used. This is because the L1 Block contract will not yet have been upgraded.
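The calldata layout in the table above is a tight (non-ABI) packing totalling 178 bytes. It can be sketched as follows, using the selector from the table; the function name is illustrative:

```python
import struct

# First four bytes of keccak256("setL1BlockValuesJovian()"), per the table.
SELECTOR = bytes.fromhex("3db6be2b")

def pack_jovian_l1_attributes(base_fee_scalar: int, blob_base_fee_scalar: int,
                              sequence_number: int, l1_block_timestamp: int,
                              l1_block_number: int, basefee: int,
                              blob_base_fee: int, l1_block_hash: bytes,
                              batcher_hash: bytes, operator_fee_scalar: int,
                              operator_fee_constant: int,
                              da_footprint_gas_scalar: int) -> bytes:
    """Tightly pack the Jovian L1-attributes calldata (178 bytes)."""
    return (
        SELECTOR
        # Segment 1: two uint32s and three uint64s, big-endian.
        + struct.pack(">IIQQQ", base_fee_scalar, blob_base_fee_scalar,
                      sequence_number, l1_block_timestamp, l1_block_number)
        # Segments 2-5: two uint256s and two bytes32 values.
        + basefee.to_bytes(32, "big")
        + blob_base_fee.to_bytes(32, "big")
        + l1_block_hash
        + batcher_hash
        # Segment 6: operator fee params plus the new uint16 scalar.
        + struct.pack(">IQH", operator_fee_scalar, operator_fee_constant,
                      da_footprint_gas_scalar)
    )
```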
In each subsequent L2 block, the `setL1BlockValuesJovian()` method must be used. When using this method, the pre-Jovian values are migrated over 1:1 and the transaction also sets `daFootprintGasScalar` to the value from the [`SystemConfig`](../../protocol/consensus/derivation.md#system-configuration). If that value is `0`, then a default of `400` is set. ## Jovian ### Activation Timestamps | Network | Activation timestamp | | --------- | -------------------------------------- | | `mainnet` | `1764691201` (2025-12-02 16:00:01 UTC) | | `sepolia` | `1763568001` (2025-11-19 16:00:01 UTC) | ### Execution Layer * [Minimum Base Fee](/upgrades/jovian/exec-engine#minimum-base-fee) * [DA Footprint Limit](/upgrades/jovian/exec-engine#da-footprint-limit) * [Operator Fee](/upgrades/jovian/exec-engine#operator-fee) ### Consensus Layer * [Network upgrade transactions](/upgrades/jovian/derivation#network-upgrade-transactions) applied during derivation * Auto-upgrading and extension of the [L1 Attributes Predeployed Contract](/upgrades/jovian/l1-attributes) (also known as `L1Block` predeploy) ### Smart Contracts * [System Config](/upgrades/jovian/system-config) ## Jovian: System Config ### Minimum Base Fee Configuration Jovian adds a configuration value to `SystemConfig` to control the minimum base fee used by the EIP-1559 fee market on Base. The value is a minimum base fee in wei. 
| Name | Type | Default | Meaning | | ------------ | -------- | ------- | ----------------------- | | `minBaseFee` | `uint64` | `0` | Minimum base fee in wei | The configuration is updated via a new method on `SystemConfig`: ```solidity function setMinBaseFee(uint64 minBaseFee) external onlyOwner; ``` #### `ConfigUpdate` When the configuration is updated, a [`ConfigUpdate`](../../protocol/consensus/derivation.md#system-config-updates) event MUST be emitted with the following parameters: | `version` | `updateType` | `data` | Usage | | ------------ | ------------ | --------------------------------- | ----------------------------------- | | `uint256(0)` | `uint8(6)` | `abi.encode(uint64(_minBaseFee))` | Modifies the minimum base fee (wei) | #### Initialization The following actions should happen during the initialization of the `SystemConfig`: * `emit ConfigUpdate.BATCHER` * `emit ConfigUpdate.FEE_SCALARS` * `emit ConfigUpdate.GAS_LIMIT` * `emit ConfigUpdate.UNSAFE_BLOCK_SIGNER` Intentionally absent from this list are `emit ConfigUpdate.EIP_1559_PARAMS` and `emit ConfigUpdate.MIN_BASE_FEE`. As long as these values are unset, the default values will be used. Requiring these parameters to be set during initialization would add a strict requirement that the L2 hardforks before the L1 contracts are upgraded, and this is complicated to manage in a world of many chains. #### Modifying Minimum Base Fee Upon update, the contract emits the `ConfigUpdate` event above, enabling nodes to derive the configuration from L1 logs. Implementations MUST incorporate the configured value into the block header `extraData` as specified in `./exec-engine.md`. Until the first such event is emitted, a default value of `0` should be used. #### Interface ##### Minimum Base Fee Parameters ##### `minBaseFee` This function returns the currently configured minimum base fee in wei.
```solidity function minBaseFee() external view returns (uint64); ``` ### DA Footprint Configuration Jovian adds a `uint16` configuration value to `SystemConfig` to control the [`daFootprintGasScalar`](derivation.md). The configuration is updated via a new method on `SystemConfig`: ```solidity function setDAFootprintGasScalar(uint16 daFootprintGasScalar) external onlyOwner; ``` #### `ConfigUpdate` When the configuration is updated, a [`ConfigUpdate`](../../protocol/consensus/derivation.md#system-config-updates) event MUST be emitted with the following parameters: | `version` | `updateType` | `data` | Usage | | ------------ | ------------ | ------------------------------------------- | ------------------------------------ | | `uint256(0)` | `uint8(7)` | `abi.encode(uint16(_daFootprintGasScalar))` | Modifies the DA footprint gas scalar | #### Modifying DA Footprint Gas Scalar Upon update, the contract emits the `ConfigUpdate` event above, enabling nodes to derive the configuration from L1 logs. #### Interface ##### DA Footprint Gas Scalar Parameters ##### `daFootprintGasScalar` This function returns the currently configured DA footprint gas scalar. ```solidity function daFootprintGasScalar() external view returns (uint16); ``` ## Isthmus L2 Chain Derivation Changes ## Network upgrade automation transactions The Isthmus hardfork activation block contains the following transactions, in this order: * L1 Attributes Transaction * User deposits from L1 * Network Upgrade Transactions * L1Block deployment * GasPriceOracle deployment * Operator Fee vault deployment * Update L1Block Proxy ERC-1967 Implementation * Update GasPriceOracle Proxy ERC-1967 Implementation * Update Operator Fee vault Proxy ERC-1967 Implementation * GasPriceOracle Enable Isthmus * EIP-2935 Contract Deployment To not modify or interrupt the system behavior around gas computation, this block will not include any sequenced transactions by setting `noTxPool: true`. 
### L1Block deployment The `L1Block` contract is upgraded to support the Isthmus operator fee feature. A deposit transaction is derived with the following attributes: * `from`: `0x4210000000000000000000000000000000000003` * `to`: `null` * `mint`: `0` * `value`: `0` * `gasLimit`: `425,000` * `data`: `0x60806040523480156100105...` (full bytecode) * `sourceHash`: `0x3b2d0821ca2411ad5cd3595804d1213d15737188ae4cbd58aa19c821a6c211bf`, computed with the "Upgrade-deposited" type, with `intent = "Isthmus: L1 Block Deployment"` This results in the Isthmus L1Block contract being deployed to `0xFf256497D61dcd71a9e9Ff43967C13fdE1F72D12`, to verify: ```bash cast compute-address --nonce=0 0x4210000000000000000000000000000000000003 Computed Address: 0xFf256497D61dcd71a9e9Ff43967C13fdE1F72D12 ``` Verify `sourceHash`: ```bash cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Isthmus: L1 Block Deployment")) # 0x3b2d0821ca2411ad5cd3595804d1213d15737188ae4cbd58aa19c821a6c211bf ``` Verify `data`: ```bash git checkout 9436dba8c4c906e36675f5922e57d1b55582889e make build-contracts jq -r ".bytecode.object" packages/contracts-bedrock/forge-artifacts/L1Block.sol/L1Block.json ``` This transaction MUST deploy a contract with the following code hash `0x8e3fe7a416d3e5f3b7be74ddd4e7e58e516fa3f80b67c6d930e3cd7297da4a4b`. To verify the code hash: ```bash git checkout 9436dba8c4c906e36675f5922e57d1b55582889e make build-contracts cast k $(jq -r ".deployedBytecode.object" packages/contracts-bedrock/forge-artifacts/L1Block.sol/L1Block.json) ``` ### GasPriceOracle deployment The `GasPriceOracle` contract is also upgraded to support the Isthmus operator fee feature.
A deposit transaction is derived with the following attributes: * `from`: `0x4210000000000000000000000000000000000004` * `to`: `null` * `mint`: `0` * `value`: `0` * `gasLimit`: `1,625,000` * `data`: `0x60806040523480156100105...` (full bytecode) * `sourceHash`: `0xfc70b48424763fa3fab9844253b4f8d508f91eb1f7cb11a247c9baec0afb8035`, computed with the "Upgrade-deposited" type, with `intent = "Isthmus: Gas Price Oracle Deployment"` This results in the Isthmus GasPriceOracle contract being deployed to `0x93e57A196454CB919193fa9946f14943cf733845`, to verify: ```bash cast compute-address --nonce=0 0x4210000000000000000000000000000000000004 Computed Address: 0x93e57A196454CB919193fa9946f14943cf733845 ``` Verify `sourceHash`: ```bash cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Isthmus: Gas Price Oracle Deployment")) # 0xfc70b48424763fa3fab9844253b4f8d508f91eb1f7cb11a247c9baec0afb8035 ``` Verify `data`: ```bash git checkout 9436dba8c4c906e36675f5922e57d1b55582889e make build-contracts jq -r ".bytecode.object" packages/contracts-bedrock/forge-artifacts/GasPriceOracle.sol/GasPriceOracle.json ``` This transaction MUST deploy a contract with the following code hash `0x4d195a9d7caf9fb6d4beaf80de252c626c853afd5868c4f4f8d19c9d301c2679`. To verify the code hash: ```bash git checkout 9436dba8c4c906e36675f5922e57d1b55582889e make build-contracts cast k $(jq -r ".deployedBytecode.object" packages/contracts-bedrock/forge-artifacts/GasPriceOracle.sol/GasPriceOracle.json) ``` ### Operator fee vault deployment A new `OperatorFeeVault` contract has been created to receive the operator fees.
The contract is created with the following arguments: * Recipient address: The base fee vault * Min withdrawal amount: 0 * Withdrawal network: L2 A deposit transaction is derived with the following attributes: * `from`: `0x4210000000000000000000000000000000000005` * `to`: `null` * `mint`: `0` * `value`: `0` * `gasLimit`: `500,000` * `data`: `0x60806040523480156100105...` (full bytecode) * `sourceHash`: `0x107a570d3db75e6110817eb024f09f3172657e920634111ce9875d08a16daa96`, computed with the "Upgrade-deposited" type, with `intent = "Isthmus: Operator Fee Vault Deployment"` This results in the Isthmus OperatorFeeVault contract being deployed to `0x4fa2Be8cd41504037F1838BcE3bCC93bC68Ff537`, to verify: ```bash cast compute-address --nonce=0 0x4210000000000000000000000000000000000005 Computed Address: 0x4fa2Be8cd41504037F1838BcE3bCC93bC68Ff537 ``` Verify `sourceHash`: ```bash cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Isthmus: Operator Fee Vault Deployment")) # 0x107a570d3db75e6110817eb024f09f3172657e920634111ce9875d08a16daa96 ``` Verify `data`: ```bash git checkout 9436dba8c4c906e36675f5922e57d1b55582889e make build-contracts jq -r ".bytecode.object" packages/contracts-bedrock/forge-artifacts/OperatorFeeVault.sol/OperatorFeeVault.json ``` This transaction MUST deploy a contract with the following code hash `0x57dc55c9c09ca456fa728f253fe7b895d3e6aae0706104935fe87c7721001971`. To verify the code hash: ```bash git checkout 9436dba8c4c906e36675f5922e57d1b55582889e make build-contracts export ETH_RPC_URL=https://mainnet.optimism.io # Any RPC running Cancun or Prague cast k $(cast call --create $(jq -r ".bytecode.object" packages/contracts-bedrock/forge-artifacts/OperatorFeeVault.sol/OperatorFeeVault.json)) ``` Note that this verification differs from the other deployments because the `OperatorFeeVault` inherits the `FeeVault` contract which contains immutables.
The deployment bytecode therefore has to be executed on an EVM to obtain the actual deployed bytecode. Since the contract sets all immutables to fixed constants, the resulting code hash is constant.

### L1Block Proxy Update

This transaction updates the L1Block Proxy ERC-1967 implementation slot to point to the new L1Block deployment.

A deposit transaction is derived with the following attributes:

* `from`: `0x0000000000000000000000000000000000000000`
* `to`: `0x4200000000000000000000000000000000000015` (L1Block Proxy)
* `mint`: `0`
* `value`: `0`
* `gasLimit`: `50,000`
* `data`: `0x3659cfe6000000000000000000000000ff256497d61dcd71a9e9ff43967c13fde1f72d12`
* `sourceHash`: `0xebe8b5cb10ca47e0d8bda8f5355f2d66711a54ddeb0ef1d30e29418c9bf17a0e`, computed with the "Upgrade-deposited" type, with `intent = "Isthmus: L1 Block Proxy Update"`

Verify `data`:

```bash
cast concat-hex $(cast sig "upgradeTo(address)") $(cast abi-encode "upgradeTo(address)" 0xff256497d61dcd71a9e9ff43967c13fde1f72d12)
0x3659cfe6000000000000000000000000ff256497d61dcd71a9e9ff43967c13fde1f72d12
```

Verify `sourceHash`:

```bash
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Isthmus: L1 Block Proxy Update"))
# 0xebe8b5cb10ca47e0d8bda8f5355f2d66711a54ddeb0ef1d30e29418c9bf17a0e
```

### GasPriceOracle Proxy Update

This transaction updates the GasPriceOracle Proxy ERC-1967 implementation slot to point to the new GasPriceOracle deployment.
A deposit transaction is derived with the following attributes:

* `from`: `0x0000000000000000000000000000000000000000`
* `to`: `0x420000000000000000000000000000000000000F` (Gas Price Oracle Proxy)
* `mint`: `0`
* `value`: `0`
* `gasLimit`: `50,000`
* `data`: `0x3659cfe600000000000000000000000093e57a196454cb919193fa9946f14943cf733845`
* `sourceHash`: `0xecf2d9161d26c54eda6b7bfdd9142719b1e1199a6e5641468d1bf705bc531ab0`, computed with the "Upgrade-deposited" type, with `intent = "Isthmus: Gas Price Oracle Proxy Update"`

Verify `data`:

```bash
cast concat-hex $(cast sig "upgradeTo(address)") $(cast abi-encode "upgradeTo(address)" 0x93e57a196454cb919193fa9946f14943cf733845)
0x3659cfe600000000000000000000000093e57a196454cb919193fa9946f14943cf733845
```

Verify `sourceHash`:

```bash
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Isthmus: Gas Price Oracle Proxy Update"))
# 0xecf2d9161d26c54eda6b7bfdd9142719b1e1199a6e5641468d1bf705bc531ab0
```

### OperatorFeeVault Proxy Update

This transaction updates the OperatorFeeVault Proxy ERC-1967 implementation slot to point to the new OperatorFeeVault deployment.
A deposit transaction is derived with the following attributes: * `from`: `0x0000000000000000000000000000000000000000` * `to`: `0x420000000000000000000000000000000000001B` (Operator Fee Vault Proxy) * `mint`: `0` * `value`: `0` * `gasLimit`: `50,000` * `data`: `0x3659cfe60000000000000000000000004fa2be8cd41504037f1838bce3bcc93bc68ff537` * `sourceHash`: `0xad74e1adb877ccbe176b8fa1cc559388a16e090ddbe8b512f5b37d07d887a927` computed with the "Upgrade-deposited" type, with `intent = "Isthmus: Operator Fee Vault Proxy Update"` Verify data: ```bash cast concat-hex $(cast sig "upgradeTo(address)") $(cast abi-encode "upgradeTo(address)" 0x4fa2be8cd41504037f1838bce3bcc93bc68ff537) 0x3659cfe60000000000000000000000004fa2be8cd41504037f1838bce3bcc93bc68ff537 ``` Verify `sourceHash`: ```bash cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Isthmus: Operator Fee Vault Proxy Update")) # 0xad74e1adb877ccbe176b8fa1cc559388a16e090ddbe8b512f5b37d07d887a927 ``` ### GasPriceOracle Enable Isthmus This transaction informs the GasPriceOracle to start using the Isthmus gas calculation formula. 
A deposit transaction is derived with the following attributes:

* `from`: `0xDeaDDEaDDeAdDeAdDEAdDEaddeAddEAdDEAd0001` (Depositor Account)
* `to`: `0x420000000000000000000000000000000000000F` (Gas Price Oracle Proxy)
* `mint`: `0`
* `value`: `0`
* `gasLimit`: `90,000`
* `data`: `0x291b0383`
* `sourceHash`: `0x3ddf4b1302548dd92939826e970f260ba36167f4c25f18390a5e8b194b295319`, computed with the "Upgrade-deposited" type, with `intent = "Isthmus: Gas Price Oracle Set Isthmus"`

Verify `data`:

```bash
cast sig "setIsthmus()"
0x291b0383
```

Verify `sourceHash`:

```bash
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Isthmus: Gas Price Oracle Set Isthmus"))
# 0x3ddf4b1302548dd92939826e970f260ba36167f4c25f18390a5e8b194b295319
```

### EIP-2935 Contract Deployment

[EIP-2935](https://eips.ethereum.org/EIPS/eip-2935) requires a contract to be deployed. To deploy this contract, a deposit transaction is created with attributes matching the EIP:

* `from`: `0x3462413Af4609098e1E27A490f554f260213D685`
* `to`: `null`
* `mint`: `0`
* `value`: `0`
* `gasLimit`: `250,000`
* `data`: `0x60538060095f395ff33373fffffffffffffffffffffffffffffffffffffffe14604657602036036042575f35600143038111604257611fff81430311604257611fff9006545f5260205ff35b5f5ffd5b5f35611fff60014303065500`
* `sourceHash`: `0xbfb734dae514c5974ddf803e54c1bc43d5cdb4a48ae27e1d9b875a5a150b553a`, computed with the "Upgrade-deposited" type, with `intent = "Isthmus: EIP-2935 Contract Deployment"`

This results in the EIP-2935 contract being deployed to `0x0000F90827F1C53a10cb7A02335B175320002935`. To verify:

```bash
cast compute-address --nonce=0 0x3462413Af4609098e1E27A490f554f260213D685
Computed Address: 0x0000F90827F1C53a10cb7A02335B175320002935
```

Verify `sourceHash`:

```bash
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Isthmus: EIP-2935 Contract Deployment"))
#
0xbfb734dae514c5974ddf803e54c1bc43d5cdb4a48ae27e1d9b875a5a150b553a
```

This transaction MUST deploy a contract with the following code hash: `0x6e49e66782037c0555897870e29fa5e552daf4719552131a0abce779daec0a5d`.

## Span Batch Updates

[Span batches](../delta/span-batches.md) are a span of consecutive L2 blocks that are batch-submitted. Span batches encode the transactions and transaction types of the L2 blocks in the span. Since [EIP-7702] introduces a new transaction type, the Span Batch format must be updated to support the [EIP-7702] transaction.

This corresponds to a new RLP encoding of the `tx_datas` list as specified in [the Delta span batch spec](../delta/span-batches.md), adding a new transaction type:

Transaction type `4` ([EIP-7702] `SetCode`): `0x04 ++ rlp_encode(value, max_priority_fee_per_gas, max_fee_per_gas, data, access_list, authorization_list)`

The [EIP-7702] transaction extends [EIP-1559] to include a new `authorization_list` field. `authorization_list` is an RLP-encoded list of authorization tuples. The [EIP-7702] transaction format is as follows:

* `value`: The transaction value as a `u256`.
* `max_priority_fee_per_gas`: The maximum priority fee per gas allowed as a `u256`.
* `max_fee_per_gas`: The maximum fee per gas as a `u256`.
* `data`: The transaction data bytes.
* `access_list`: The [EIP-2930] access list.
* `authorization_list`: The [EIP-7702] signed authorization list.

### Activation

Singular batches with transactions of type `4` must only be accepted if Isthmus is active at the timestamp of the batch. If a singular batch contains a transaction of type `4` before Isthmus is active, this batch must be *dropped*. Note that if Holocene is active, this will also cause the remaining span batch, and the channel that contained it, to be dropped.

Also note that this check must happen at the level of individual batches that are derived from span batches, not at the level of span batches as a whole.
In particular, it is allowed for a span batch to span the Isthmus activation timestamp and contain SetCode transactions in singular batches that have a timestamp at or after the Isthmus activation time, even if the timestamp of the span batch is before the Isthmus activation time.

[EIP-1559]: https://eips.ethereum.org/EIPS/eip-1559
[EIP-7702]: https://eips.ethereum.org/EIPS/eip-7702
[EIP-2930]: https://eips.ethereum.org/EIPS/eip-2930

## L2 Execution Engine

[l2-to-l1-mp]: ../../protocol/execution/evm/predeploys.md#L2ToL1MessagePasser
[output-root]: ../../reference/glossary.md#l2-output-root

### Overview

The storage root of the `L2ToL1MessagePasser` is included in the block header's `withdrawalsRoot` field.

### Timestamp Activation

Isthmus, like other network upgrades, is activated at a timestamp. Changes to the L2 block execution rules are applied when the `L2 Timestamp >= activation time`.

### `L2ToL1MessagePasser` Storage Root in Header

After the Isthmus hardfork's activation, the L2 block header's `withdrawalsRoot` field will consist of the 32-byte [`L2ToL1MessagePasser`][l2-to-l1-mp] account storage root from the world state identified by the `stateRoot` field in the block header. The storage root should be the same root that is returned by `eth_getProof` at the given block number.

#### Header Validity Rules

Prior to Isthmus activation:

* the L2 block header's `withdrawalsRoot` field must be:
  * `nil` if Canyon has not been activated.
  * `keccak256(rlp(empty_string_code))` if Canyon has been activated.
* the L2 block header's `requestsHash` field must be omitted.

After Isthmus activation, an L2 block header is valid iff:

1. The `withdrawalsRoot` field:
   1. Is 32 bytes in length.
   2. Matches the [`L2ToL1MessagePasser`][l2-to-l1-mp] account storage root, as committed to in the `storageRoot` within the block header.
2. The `requestsHash` field is equal to `sha256('') = 0xe3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855`, indicating no requests in the block.
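The `requestsHash` constant in the validity rules above is easy to sanity-check; a minimal sketch in plain Python (nothing Base-specific assumed, just the SHA-256 of the empty byte string):

```python
import hashlib

# Post-Isthmus, requestsHash commits to an empty requests list:
# sha256('') as given in the header validity rules above.
requests_hash = hashlib.sha256(b"").hexdigest()

assert requests_hash == (
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
)
```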
#### Header Withdrawals Root

| Byte offset | Description |
| ----------- | --------------------------------------------------------- |
| `[0, 32)` | [`L2ToL1MessagePasser`][l2-to-l1-mp] account storage root |

##### Rationale

Currently, to generate [L2 output roots][output-root] for historical blocks, an archival node is required. This directly places a burden on users of the system in a post-fault-proofs world, where:

1. A proposer must have an archive node to propose an output root at the safe head.
2. A user that is proving their withdrawal must have an archive node to verify that the output root they are proving their withdrawal against is indeed valid and included within the safe chain.

Placing the [`L2ToL1MessagePasser`][l2-to-l1-mp] account storage root in the `withdrawalsRoot` field alleviates this burden for users and protocol participants alike, allowing them to propose and verify other proposals with lower operating costs.

##### Genesis Block

If Isthmus is active at the genesis block, the `withdrawalsRoot` in the genesis block header is set to the [`L2ToL1MessagePasser`][l2-to-l1-mp] account storage root.

##### State Processing

At the time of state processing, the header for which transactions are being validated should not make its `withdrawalsRoot` available to the EVM/application layer.

##### P2P

During sync, we expect the withdrawals list in the block body to be empty (the OP Stack does not make use of the withdrawals list) and hence the hash of the withdrawals list to be the MPT root of an empty list. When verifying the header chain using the final header that is synced, the header timestamp is used to determine whether Isthmus is active at the said block. If it is, we expect that the header `withdrawalsRoot` MPT hash can be any non-null value (since it is expected to contain the `L2ToL1MessagePasser`'s storage root).
##### Backwards Compatibility Considerations

Beginning at Canyon (which includes Shanghai hardfork support) and prior to Isthmus activation, the `withdrawalsRoot` field is set to the MPT root of an empty withdrawals list. This is the same root as an empty storage root. The withdrawals are captured in the L2 state, however they are not reflected in the `withdrawalsRoot`. Hence, prior to Isthmus activation, even if a `withdrawalsRoot` MPT root is present in the header, it should not be used. Any implementation that calculates an output root should be careful not to use the header `withdrawalsRoot`.

Note that there is always nonzero storage in the [`L2ToL1MessagePasser`][l2-to-l1-mp], because it is a [proxied predeploy](../../protocol/execution/evm/predeploys.md) -- from genesis it stores an implementation address and owner address. So from Isthmus, the `withdrawalsRoot` will always be non-nil and never be the MPT root of an empty list.

##### Forwards Compatibility Considerations

As it stands, the `withdrawalsRoot` field is unused within Base's header consensus format, and there are no current plans to use it for anything else. Setting this value to the account storage root of the withdrawal contract directly fits with Base, and makes use of the existing field in the L1 header consensus format.

##### Client Implementation Considerations

Various EL clients store historical state of accounts differently. If, as a contrived case, Base did not have an outbound withdrawal for a long period of time, the node may not have access to the account storage root of the [`L2ToL1MessagePasser`][l2-to-l1-mp]. In this case, the client would be unable to keep consensus. However, most modern clients can at the very least reconstruct the account storage root at a given block on the fly if they do not directly store this information.
##### Transaction Simulation

In response to RPC methods like `eth_simulateV1` that allow simulation of arbitrary transactions within one or more blocks, an empty withdrawals root should be included in the header of a block that consists of such simulated transactions. The same is applicable for scenarios where the actual withdrawals root value is not readily available.

### Deposit Requests

[EIP-6110] shifts validator deposits to the execution layer, introducing a new [EIP-7685] deposit request of type `DEPOSIT_REQUEST_TYPE`. Deposit requests then appear in the [EIP-7685] requests list. Base needs to ignore these requests. Requests generation must be modified to exclude [EIP-6110] deposit requests. Note that since the [EIP-6110] request type did *not* exist prior to Pectra on L1 and the Isthmus hardfork on L2, no activation time is needed, since these deposit type requests may always be excluded.

[EIP-6110]: https://eips.ethereum.org/EIPS/eip-6110
[EIP-7685]: https://eips.ethereum.org/EIPS/eip-7685

### Block Body Withdrawals List

The withdrawals list in the block body is encoded as an empty RLP list.

### EVM Changes

#### BLS Precompiles

Similar to the `bn256Pairing` precompile in the [Granite hardfork](../granite/exec-engine.md), [EIP-2537](https://eips.ethereum.org/EIPS/eip-2537) introduces BLS precompiles that short-circuit depending on input size in the EVM. The input size limits of the BLS precompile contracts are listed below:

* G1 multiple-scalar-multiply: `input_size <= 513760 bytes`
* G2 multiple-scalar-multiply: `input_size <= 488448 bytes`
* Pairing check: `input_size <= 235008 bytes`

The rest of the BLS precompiles are fixed-size operations which have a fixed gas cost. All of the BLS precompiles should be [accelerated](../../protocol/fault-proof/index.md#precompile-accelerators) in fault proof programs so they call out to the L1 instead of calculating the result inside the program.
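The variable-length limits above reduce to a simple size check before dispatching to the precompile; a sketch (the table and function name are illustrative, not any client's API):

```python
# Maximum input sizes in bytes for the variable-length EIP-2537 operations,
# per the limits listed above. The remaining BLS precompiles are fixed-size.
BLS_INPUT_LIMITS = {
    "g1_msm": 513_760,
    "g2_msm": 488_448,
    "pairing": 235_008,
}

def bls_input_within_limit(op: str, input_size: int) -> bool:
    """True if a call of `input_size` bytes to operation `op` may proceed."""
    return input_size <= BLS_INPUT_LIMITS[op]

assert bls_input_within_limit("g1_msm", 513_760)
assert not bls_input_within_limit("pairing", 235_009)
```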
### Block Sealing

On Base, `EIP-7685` is a no-op, and the `requestsHash` is always set to `sha256('')` (as noted in [header validity rules](#header-validity-rules)). As such, [EIP-6110](https://eips.ethereum.org/EIPS/eip-6110), [EIP-7002](https://eips.ethereum.org/EIPS/eip-7002), and [EIP-7251](https://eips.ethereum.org/EIPS/eip-7251) are not enabled either.

The Base execution layer must ensure that the post-block filtering of events in the deposit contract (EIP-6110) as well as the `EIP-7002` + `EIP-7251` system calls are *not invoked* during the block sealing process after Isthmus activation. Users of Base may still permissionlessly deploy these smart contracts, but they will not be treated as special by the Base execution layer, and the system calls introduced in L1's Pectra hardfork are not considered.

### Engine API Updates

#### Update to `ExecutionPayload`

`ExecutionPayload` will contain an extra field for `withdrawalsRoot` after the Isthmus hardfork.

#### `engine_newPayloadV4` API

Post Isthmus, `engine_newPayloadV4` will be used. The `executionRequests` parameter MUST be an empty array.

### Fees

New OP Stack variants have different resource consumption patterns, and thus require a more flexible pricing model. To enable more customizable fee structures, Isthmus adds a new component to the fee calculation: the `operatorFee`, which is parameterized by two scalars: the `operatorFeeScalar` and the `operatorFeeConstant`.

#### Operator Fee

The operator fee is integrated directly into the EVM, alongside the standard gas fee and the Base-specific L1 data fee. This fee follows the same semantics as existing fees charged in the EVM[^1], just with a new fee beneficiary account.

##### Fee Formula

$$ \text{operatorFee} = (\text{gas} \times \text{operatorFeeScalar} \div 10^6) + \text{operatorFeeConstant} $$

Where:

* `gas` is the amount of gas that the transaction used.
When calculating the amount of gas that is bought at the beginning of the transaction, this should be the `gas_limit`. When determining how much gas should be refunded, based on how much of the `gas_limit` the transaction used, this should be the `gas_used`.

* `operatorFeeScalar` is a `uint32` scalar set by the chain operator, scaled by `1e6`.
* `operatorFeeConstant` is a `uint64` scalar set by the chain operator.

Note that the operator fee's maximum value has 77 bits, which can be calculated from the maximum input parameters:

```text
operatorFee_max = (uint64_max * uint32_max / 10^6) + uint64_max ≈ 7.924660923989131 * 10^22
```

So implementations don't need to check for overflows if they perform the calculations with `uint256` types.

##### Deposit Operator Fees

Deposit transactions do not get charged operator fees. For all deposit transactions, regardless of the operator fee parameter configuration, the operator fee should be **zero**. Deposit transactions also do not receive operator fee gas refunds, since they never buy the operator fee gas to begin with.

##### EVM Fee Semantics

Like other fees in the EVM, the operator fee should be charged following the pattern below:

1. During pre-execution validation, the account must have enough ETH to cover the existing worst-case gas + L1 data fees *as well as* the worst-case operator fee (for deposits, the worst-case fee is `0`). To compute this value, use the [fee formula](#fee-formula) with `gas` set to the `gas_limit` of the transaction, and add it to the existing worst-case transaction fee.
2. When buying gas prior to execution, charge the account the worst-case operator fee. To compute this value, use the [fee formula](#fee-formula) with `gas` set to the `gas_limit` of the transaction.
3. After execution, when issuing refunds, transactions that bought operator fee gas should be refunded the operator fee gas that was unused (i.e., the caller should only be charged the *effective* operator fee).
The refund should be calculated as $\text{opFeeRefund} = \text{opFeeWorstCase} - \text{opFeeActual}$, where:

* $\text{opFeeWorstCase}$ is as described in #1 + #2.
* $\text{opFeeActual}$ is the amount of the operator fee that was actually used. This value is computed using the [fee formula](#fee-formula) with `gas` set to `gas_used - refunded_gas`. `refunded_gas` is as described in [EIP-3529](https://eips.ethereum.org/EIPS/eip-3529).

4. After execution, when rewarding the fee beneficiaries, send the *spent operator fee* to the [operator fee vault](#fee-vaults). This value is exactly $\text{opFeeActual}$ as described above.

Implementations must ensure ETH is neither minted nor destroyed as a result of the operator fee.

##### Transaction Pool Changes

To account for the additional fee factored into transaction validity mentioned above, the transaction pool must reject transactions that do not have enough balance to cover the worst-case cost of the transaction fee. This worst-case cost of a transaction now includes the worst-case operator fee.

##### Configuring Operator Fee Parameters

`operatorFeeScalar` and `operatorFeeConstant` are loaded in a similar way to the `baseFeeScalar` and `blobBaseFeeScalar` used in the [`L1Fee`](../../protocol/execution/index.md#ecotone-l1-cost-fee-changes-eip-4844-da) calculation.

In more detail, these parameters can be accessed in two interchangeable ways:

* read from the deposited L1 attributes (`operatorFeeScalar` and `operatorFeeConstant`) of the current L2 block
* read from the L1 Block Info contract (`0x4200000000000000000000000000000000000015`)
  * using the respective solidity getter functions (`operatorFeeScalar`, `operatorFeeConstant`)
  * using direct storage-reads:
    * Operator fee scalar as big-endian `uint32` in slot `8` at offset `0`.
    * Operator fee constant as big-endian `uint64` in slot `8` at offset `4`.
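Putting the formula, the worst-case up-front charge, and the refund together, a minimal sketch (helper name and parameter values are illustrative, not real chain configuration; integer division mirrors the EVM):

```python
def operator_fee(gas: int, scalar: int, constant: int) -> int:
    # operatorFee = gas * operatorFeeScalar / 10^6 + operatorFeeConstant
    return gas * scalar // 10**6 + constant

# Illustrative operator fee parameters and transaction gas accounting.
scalar, constant = 2_500, 10_000
gas_limit, gas_used, refunded_gas = 1_000_000, 400_000, 50_000

worst_case = operator_fee(gas_limit, scalar, constant)            # charged up front
actual = operator_fee(gas_used - refunded_gas, scalar, constant)  # effective fee
refund = worst_case - actual                                      # returned to caller

assert (worst_case, actual, refund) == (12_500, 10_875, 1_625)

# The maximum possible fee fits in 77 bits, so uint256 math cannot overflow.
fee_max = (2**64 - 1) * (2**32 - 1) // 10**6 + (2**64 - 1)
assert fee_max.bit_length() == 77
```

Note how the `operatorFeeConstant` cancels in the refund: only the scalar part of the unused gas is returned.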
#### Fee Vaults

These collected fees are sent to a new vault for the `operatorFee`: the [`OperatorFeeVault`](predeploys.md#operatorfeevault). Like the existing vaults, this is a hardcoded address, pointing at a pre-deployed proxy contract. The proxy is backed by a vault contract deployment, based on `FeeVault`, to route vault funds to L1 securely.

#### Receipts

After Isthmus activation, 2 new fields `operatorFeeScalar` and `operatorFeeConstant` are added to transaction receipts if and only if at least one of them is non-zero.

[^1]: Wood, G., & Ethereum Contributors. (n.d.-a). Ethereum Yellow Paper. [https://ethereum.github.io/yellowpaper/paper.pdf](https://ethereum.github.io/yellowpaper/paper.pdf) Page 8, section 5: "Gas and Payment"

## L1 Block Attributes

### Overview

The L1 block attributes transaction is updated to include the operator fee parameters.

| Input arg | Type | Calldata bytes | Segment |
| ------------------- | ------- | -------------- | ------- |
| {0x098999be} | | 0-3 | n/a |
| baseFeeScalar | uint32 | 4-7 | 1 |
| blobBaseFeeScalar | uint32 | 8-11 | |
| sequenceNumber | uint64 | 12-19 | |
| l1BlockTimestamp | uint64 | 20-27 | |
| l1BlockNumber | uint64 | 28-35 | |
| basefee | uint256 | 36-67 | 2 |
| blobBaseFee | uint256 | 68-99 | 3 |
| l1BlockHash | bytes32 | 100-131 | 4 |
| batcherHash | bytes32 | 132-163 | 5 |
| operatorFeeScalar | uint32 | 164-167 | 6 |
| operatorFeeConstant | uint64 | 168-175 | |

Note that the first input argument, in the same pattern as previous versions of the L1 attributes transaction, is the function selector: the first four bytes of `keccak256("setL1BlockValuesIsthmus()")`.

In the activation block, there are two possibilities:

* If Isthmus is active at genesis, there are no transactions in the activation block and therefore no L1 Block Attributes transaction to consider.
* If Isthmus activates after genesis, the [`setL1BlockValuesEcotone()`](../ecotone/l1-attributes.md) method must be used.
This is because the L1 Block contract will not yet have been upgraded. In each subsequent L2 block, the `setL1BlockValuesIsthmus()` method must be used. When using this method, the pre-Isthmus values are migrated over 1:1 and the transaction also sets the following new attributes to the values from the [`SystemConfig`](../../protocol/consensus/derivation.md#system-configuration): * `operatorFeeScalar` * `operatorFeeConstant` ## Isthmus ### Activation Timestamps | Network | Activation timestamp | | --------- | -------------------------------------- | | `mainnet` | `1746806401` (2025-05-09 16:00:01 UTC) | | `sepolia` | `1744905600` (2025-04-17 16:00:00 UTC) | ### Execution Layer * [Pectra](https://eips.ethereum.org/EIPS/eip-7600) (Execution Layer): * [EIP-7702](https://eips.ethereum.org/EIPS/eip-7702) * [Span Batch Updates](/upgrades/isthmus/derivation#span-batch-updates) * [EIP-2537](https://eips.ethereum.org/EIPS/eip-2537) * [EIP-2935](https://eips.ethereum.org/EIPS/eip-2935) * [EIP-2935 Contract Deployment](/upgrades/isthmus/derivation#eip-2935-contract-deployment) * [EIP-7002](https://eips.ethereum.org/EIPS/eip-7002) * The EIP-7002 predeploy contract and syscall are not adopted as part of Base. * [EIP-7251](https://eips.ethereum.org/EIPS/eip-7251) * The EIP-7251 predeploy contract and syscall are not adopted as part of Base. 
* [EIP-7623](https://eips.ethereum.org/EIPS/eip-7623)
* [EIP-6110](https://eips.ethereum.org/EIPS/eip-6110)
* [EIP-7685](https://eips.ethereum.org/EIPS/eip-7685)
* [L2ToL1MessagePasser Storage Root in Header](/upgrades/isthmus/exec-engine#l2tol1messagepasser-storage-root-in-header)
* [Operator Fee](/upgrades/isthmus/exec-engine#operator-fee)

### Consensus Layer

* [Isthmus Derivation](/upgrades/isthmus/derivation)

### Smart Contracts

* [Predeploys](/upgrades/isthmus/predeploys)
* [L1 Block Attributes](/upgrades/isthmus/l1-attributes)
* [System Config](/upgrades/isthmus/system-config)

## Predeploys

### Overview

#### L1Block

##### Interface

##### `setIsthmus`

This function is meant to be called once on the activation block of the Isthmus network upgrade. It MUST only be callable by the `DEPOSITOR_ACCOUNT` once. When it is called, it MUST call each getter for the network-specific config and set the returndata into storage.

#### GasPriceOracle

Following the Isthmus upgrade, a new method is introduced: `getOperatorFee(uint256)`. This method returns the operator fee for the given `gasUsed`. The operator fee calculation follows the formula outlined in the [Operator Fee](exec-engine.md#operator-fee) section of the execution engine spec. The value returned by `getOperatorFee(uint256)` is capped at the `U256` max value.

#### OperatorFeeVault

This vault implements `FeeVault`, like `BaseFeeVault`, `SequencerFeeVault`, and `L1FeeVault`. No special logic is needed in order to insert or withdraw funds. Its address will be `0x420000000000000000000000000000000000001b`. See also [Fee Vaults](exec-engine.md#fee-vaults).

### Security Considerations

## Isthmus: System Config

### Operator Fee Parameter Configuration

Isthmus adds configuration variables `operatorFeeScalar` (`uint32`) and `operatorFeeConstant` (`uint64`) to `SystemConfig` to control the operator fee parameters.
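For reference, these two parameters travel through `SystemConfig` packed into a single 256-bit word (`uint256(_operatorFeeScalar) << 64 | _operatorFeeConstant`, per the `OPERATOR_FEE_PARAMS` `ConfigUpdate` encoding); a minimal pack/unpack sketch (function names are illustrative):

```python
U32_MAX = 2**32 - 1
U64_MAX = 2**64 - 1

def pack_operator_fee_params(scalar: int, constant: int) -> int:
    # uint256(_operatorFeeScalar) << 64 | _operatorFeeConstant
    assert 0 <= scalar <= U32_MAX and 0 <= constant <= U64_MAX
    return scalar << 64 | constant

def unpack_operator_fee_params(word: int) -> tuple[int, int]:
    # Inverse: scalar in the high bits, constant in the low 64 bits.
    return word >> 64, word & U64_MAX

word = pack_operator_fee_params(2_500, 10_000)
assert unpack_operator_fee_params(word) == (2_500, 10_000)
```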
#### `ConfigUpdate` The following `ConfigUpdate` event is defined where the `CONFIG_VERSION` is `uint256(0)`: | Name | Value | Definition | Usage | | --------------------- | ---------- | --------------------------------------------------------------------------------- | -------------------------------------------------------------------- | | `BATCHER` | `uint8(0)` | `abi.encode(address)` | Modifies the account that is authorized to progress the safe chain | | `FEE_SCALARS` | `uint8(1)` | `(uint256(0x01) << 248) \| (uint256(_blobbasefeeScalar) << 32) \| _basefeeScalar` | Modifies the fee scalars | | `GAS_LIMIT` | `uint8(2)` | `abi.encode(uint64 _gasLimit)` | Modifies the L2 gas limit | | `UNSAFE_BLOCK_SIGNER` | `uint8(3)` | `abi.encode(address)` | Modifies the account that is authorized to progress the unsafe chain | | `EIP_1559_PARAMS` | `uint8(4)` | `uint256(uint64(uint32(_denominator))) << 32 \| uint64(uint32(_elasticity))` | Modifies the EIP-1559 denominator and elasticity | | `OPERATOR_FEE_PARAMS` | `uint8(5)` | `uint256(_operatorFeeScalar) << 64 \| _operatorFeeConstant` | Modifies the operator fee scalar and constant | #### Initialization The following actions should happen during the initialization of the `SystemConfig`: * `emit ConfigUpdate.BATCHER` * `emit ConfigUpdate.FEE_SCALARS` * `emit ConfigUpdate.GAS_LIMIT` * `emit ConfigUpdate.UNSAFE_BLOCK_SIGNER` * `emit ConfigUpdate.EIP_1559_PARAMS` These actions MAY only be triggered if there is a diff to the value. The `operatorFeeScalar` and `operatorFeeConstant` are initialized to 0. #### Modifying Operator Fee Parameters A new `SystemConfig` `UpdateType` is introduced that enables the modification of the `operatorFeeScalar` and `operatorFeeConstant` by the `SystemConfig` owner. #### Interface ##### Operator fee parameters ##### `operatorFeeScalar` This function returns the currently configured operator fee scalar. 
```solidity
function operatorFeeScalar()(uint32)
```

##### `operatorFeeConstant`

This function returns the currently configured operator fee constant.

```solidity
function operatorFeeConstant()(uint64)
```

##### `setOperatorFeeScalars`

This function sets the `operatorFeeScalar` and `operatorFeeConstant`. This function MUST only be callable by the `SystemConfig` owner.

```solidity
function setOperatorFeeScalars(uint32 _operatorFeeScalar, uint64 _operatorFeeConstant)
```

## Holocene L2 Chain Derivation Changes

## Holocene Derivation

### Summary

The Holocene hardfork introduces several changes to block derivation rules that render the derivation pipeline mostly stricter and simpler, and improve worst-case scenarios for Fault Proofs and Interop. The changes are:

* *Strict Batch Ordering* requires batches within and across channels to be strictly ordered.
* *Partial Span Batch Validity* determines the validity of singular batches from a span batch individually, only invalidating the remaining span batch upon the first invalid singular batch.
* *Fast Channel Invalidation*, similarly to Partial Span Batch Validity applied to the channel layer, forward-invalidates a channel upon finding an invalid batch.
* *Steady Block Derivation* derives invalid payload attributes immediately as deposit-only blocks.

The combined effect of these changes is that the impact of an invalid batch is contained to the block number at hand, instead of propagating forwards or backwards in the safe chain, while also containing invalid payloads at the engine stage to the engine, not propagating backwards in the derivation pipeline.

Holocene derivation comprises the following changes to the derivation pipeline to achieve the above.

### Frame Queue

The frame queue retains its function and queues all frames of the last batcher transaction(s) that weren't assembled into a channel yet. Holocene still allows multiple frames per batcher transaction, possibly from different channels.
As before, this allows for optionally filling up the remaining space of a batcher transaction with a starting frame of the next channel. However, Strict Batch Ordering leads to the following additional checks and rules to the frame queue: * If a *non-first frame* (i.e., a frame with index >0) decoded from a batcher transaction is *out of order*, it is **immediately dropped**, where the frame is called *out of order* if * its frame number is not the previous frame's plus one, if it has the same channel ID, or * the previous frame already closed the channel with the same ID, or * the non-first frame has a different channel ID than the previous frame in the frame queue. * If a *first frame* is decoded while the previous frame isn't a *last frame* (i.e., `is_last` is `false`), all previous frames for the same channel are dropped and this new first frame remains in the queue. These rules guarantee that the frame queue always holds frames whose indices are ordered, contiguous and include the first frame, per channel. Plus, a first frame of a channel is either the first frame in the queue, or is preceded by a closing frame of a previous channel. Note that these rules are in contrast to pre-Holocene rules, where out of order frames were buffered. Pre-Holocene, frame validity checks were only done at the Channel Bank stage. Performing these checks already at the Frame Queue stage leads to faster discarding of invalid frames, keeping the memory consumption of any implementation leaner. ### Channel Bank Because channel frames have to arrive in order, the Channel Bank becomes much simpler and only holds at most a single channel at a time. #### Pruning Pruning is vastly simplified as there is at most only one open channel in the channel bank. So the channel bank's queue becomes effectively a staging slot for a single channel, the *staging channel*. 
The `MAX_CHANNEL_BANK_SIZE` parameter is no longer used, and the compressed size of the staging channel is required to be at most `MAX_RLP_BYTES_PER_CHANNEL` (else the channel is dropped). Note this latter rule is both a distinct condition and distinct effect, compared to the existing rule that the *uncompressed* size of any given channel is *clipped* to `MAX_RLP_BYTES_PER_CHANNEL` [during decompression](../../protocol/consensus/derivation.md#channel-format). #### Timeout The timeout is applied as before, just only to the single staging channel. #### Reading & Frame Loading The frame queue is guaranteed to hold ordered and contiguous frames, per channel. So reading and frame loading becomes simpler in the channel bank: * A first frame for a new channel starts a new channel as the staging channel. * If there already is an open, non-completed staging channel, it is dropped and replaced by this new channel. This is consistent with how the frame queue drops all frames of a non-closed channel upon the arrival of a first frame for a new channel. * If the current channel is timed-out, but not yet pruned, and the incoming frame would be the next correct frame for this channel, the frame and channel are dropped, including all future frames for the channel that might still be in the frame queue. Note that the equivalent rule was already present pre-Holocene. * After adding a frame to the staging channel, the channel is dropped if its raw compressed size as defined in the Bedrock specification is larger than `MAX_RLP_BYTES_PER_CHANNEL`. This rule replaces the total limit of all channels' combined sizes by `MAX_CHANNEL_BANK_SIZE` before Holocene. ### Span Batches Partial Span Batch Validity changes the atomic validity model of [Span Batches](../delta/span-batches.md). In Holocene, a span batch is treated as an optional stage in the derivation pipeline that sits before the batch queue, so that the batch queue pulls singular batches from this previous Span Batch stage. 
When encountering an invalid singular batch, it is dropped, as is the remaining span batch, for consistency reasons. We call this *forwards-invalidation*. However, we don't *backwards-invalidate* previous valid batches that came from the same span batch, as was done pre-Holocene. When a batch derived from the current staging channel is a singular batch, it is directly forwarded to the batch queue. Otherwise, it is set as the current span batch in the span batch stage. The following span batch validity checks are done before singular batches are derived from it. Definitions are borrowed from the [original Span Batch specs](../delta/span-batches.md). * If the span batch *L1 origin check* is not part of the canonical L1 chain, the span batch is invalid. * A failed parent check invalidates the span batch. * If `span_start.timestamp > next_timestamp`, the span batch is invalid, because we disallow gaps due to the new strict batch ordering rules. * If `span_end.timestamp < next_timestamp`, the span batch is set to have `past` validity, as it doesn't contain any new batches (this would also happen if applying timestamp checks to each derived singular batch individually). See below in the [Batch Queue](#batch-queue) section about the new `past` validity. * Note that we still allow span batches to overlap with the safe chain (`span_start.timestamp < next_timestamp`). If any of the above checks invalidate the span batch, it is `drop`ped and the remaining channel from which the span batch was derived is also immediately dropped (see also [Fast Channel Invalidation](#fast-channel-invalidation)). However, a `past` span batch is only dropped, without dropping the remaining channel. > \[!Note] > A word regarding overlapping span batches: the existing batch queue rules already contain the rule > to drop batches whose L1 origin is older than that of the L2 safe head. The Delta span batch > checks also have an equivalent rule that applies to all singular batches past the safe head.
> Now full span batch checks aren't done any more in Holocene, but the batch queue rules are still > applied to singular batches that are streamed out of span batches, so in particular this rule also > still applies to the first singular batch past the current safe head coming from an overlapping > span batch. > > It is a known footgun for implementations that the earliest point at which violations of this rule > are detected is when the full array of singular batches is extracted from the span batch and their > L1 origin hashes are populated. It is therefore important to treat singular batches with outdated > or otherwise invalid L1 origin numbers as invalid, and consequently the span batch as invalid, and > not generate a critical derivation error that stalls derivation. ### Batch Queue The batch queue is also simplified in that batches are required to arrive strictly ordered, and any batches that violate the ordering requirements are immediately dropped, instead of buffered. So the following changes are made to the [Bedrock Batch Queue](../../protocol/consensus/derivation.md#batch-queue): * The reordering step is removed, so that later checks will drop batches that are not sequential. * The `future` batch validity status is removed, and batches that were determined to be in the future are now directly `drop`-ped. This effectively disallows gaps, instead of buffering future batches. * A new batch validity `past` is introduced. A batch has `past` validity if its timestamp is before or equal to the safe head's timestamp. This also applies to span batches. * The other rules stay the same, including empty batch generation when the sequencing window elapses. Note that these changes to batch validity rules also activate by the L1 inclusion block timestamp of a batch, not with the batch timestamp. This is important to guarantee consistent validation rules for the first channel after Holocene activation. 
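The timestamp-based validity classification above can be sketched as a small classifier. This is an illustrative helper under assumed names, simplified to the timestamp rules only (origin, parent, and L1-inclusion checks are omitted):

```python
def classify_batch(batch_start_ts: int, batch_end_ts: int, next_ts: int) -> str:
    """Classify a batch against the next expected L2 block timestamp.

    For a singular batch, batch_start_ts == batch_end_ts.
    Hypothetical helper; not part of any reference implementation.
    """
    if batch_start_ts > next_ts:
        # Gaps are disallowed post-Holocene: future batches are dropped,
        # not buffered.
        return "drop"
    if batch_end_ts < next_ts:
        # Entirely at or behind the safe head: silently dropped,
        # without invalidating the remaining channel.
        return "past"
    # Batches overlapping with or starting at the safe head are still allowed.
    return "accept"

# A span batch covering timestamps [100, 110] when 112 is expected next:
assert classify_batch(100, 110, 112) == "past"
# A batch starting beyond the next expected timestamp leaves a gap:
assert classify_batch(114, 120, 112) == "drop"
```

Note how only the `drop` outcome forward-invalidates the remaining span batch and channel; `past` batches are skipped quietly.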
The `drop` and `past` batch validities cause the following new behavior: * If a batch is found to be invalid and is dropped, the remaining span batch it originated from, if applicable, is also discarded. * If a batch is found to be from the `past`, it is silently dropped and the remaining span batch continues to be processed. This applies to both span and singular batches. Note that when the L1 origin of the batch queue moves forward, it is guaranteed that it is empty, because future batches aren't buffered any more. Furthermore, because future batches are directly dropped, the batch queue effectively becomes a simpler *batch stage* that holds at most one span batch from which singular batches are read, and doesn't buffer singular batches itself in a queue any more. A valid batch is directly forwarded to the next stage. #### Fast Channel Invalidation Furthermore, upon finding an invalid batch, the remaining channel it got derived from is also discarded. ### Engine Queue If the engine returns an `INVALID` status for a regularly derived payload, the payload is replaced by a payload with the same fields, except for the `transaction_list`, which is trimmed to include only its deposit transactions. As before, a failure to then process the deposit-only attributes is a critical error. If an invalid payload is replaced by a deposit-only payload, for consistency reasons, the remaining span batch, if applicable, and channel it originated from are dropped as well. ### Attributes Builder Starting after the fork activation block, the `PayloadAttributes` produced by the attributes builder will include the `eip1559Params` field described in the [execution engine specs](exec-engine.md#eip1559params-encoding). This value exists within the `SystemConfig`. On the fork activation block, the attributes builder will include a 0'd out `eip1559Params`, so as to instruct the engine to use the [canyon base fee parameter constants](../../protocol/execution/index.md#1559-parameters).
This is to prime the pipeline's view of the `SystemConfig` with the default EIP-1559 parameter values. After the first Holocene payload has been processed, future payloads should use the `SystemConfig`'s EIP-1559 denominator and elasticity parameter as the `eip1559Params` field's value. When the pipeline encounters an `UpdateType.EIP_1559_PARAMS` `ConfigUpdate` event, the pipeline's system config will be synchronized with the `SystemConfig` contract's. ### Activation The new batch rules activate when the *L1 inclusion block timestamp* is greater or equal to the Holocene activation timestamp. Note that this is in contrast to how span batches activated in [Delta](../delta/overview.md), namely via the span batch L1 origin timestamp. When the L1 traversal stage of the derivation pipeline moves its origin to the L1 block whose timestamp is the first to be greater or equal to the Holocene activation timestamp, the derivation pipeline's state is mostly reset by **discarding** * all frames in the frame queue, * channels in the channel bank, and * all batches in the batch queue. The three stages are then replaced by the new Holocene frame queue, channel bank and batch queue (and, depending on the implementation, the optional span batch stage is added). Note that batcher implementations must be aware of this activation behavior, so any frames of a partially submitted channel that were included pre-Holocene must be sent again. This is a very unlikely scenario since production batchers are usually configured to submit a channel in a single transaction. ## Rationale ### Strict Frame and Batch Ordering Strict Frame and Batch Ordering simplifies implementations of the derivation pipeline, and leads to better worst-case cached data usage. * The frame queue only ever holds frames from a single batcher transaction. * The channel bank only ever holds a single staging channel, that is either being built up by incoming frames, or is being processed by later stages.
* The batch queue only ever holds at most a single span batch (that is being processed) and a single singular batch (from the span batch, or the staging channel directly). * The sync start greatly simplifies in the average production case. This has advantages for Fault Proof program implementations. ### Partial Span Batch Validity Partial Span Batch Validity guarantees that a valid singular batch derived from a span batch can immediately be processed as valid and advance the safe chain, instead of being in an undecided state until the full span batch is converted into singular batches. This leads to swifter derivation and gives strong worst-case guarantees for Fault Proofs because the validity of a block doesn't depend on the validity of any future blocks any more. Note that before Holocene, verifying the first block of a span batch required validating the full span batch. ### Fast Channel Invalidation The new Fast Channel Invalidation rule is a consistency implication of the Strict Ordering Rules. Because batches inside channels must be ordered and contiguous, assuming that all batches inside a channel are self-consistent (i.e., parent L2 hashes point to the block resulting from the previous batch), an invalid batch also forward-invalidates all remaining batches of the same channel. ### Steady Block Derivation Steady Block Derivation changes the derivation rules for invalid payload attributes, replacing an invalid payload by a deposit-only/empty payload. Crucially, this means that the effect of an invalid payload doesn't propagate backwards in the derivation pipeline. This has benefits for Fault Proofs and Interop, because it guarantees that batch validity is not influenced by future stages and the block derived from a valid batch will be determined by the engine stage before it pulls new payload attributes from the previous stage. This avoids larger derivation pipeline resets.
### Less Defensive Protocol The stricter derivation rules lead to a less defensive protocol. The old protocol rules allowed for second chances for invalid payloads and for submitting frames and batches within channels out of order. Experience from running Base for over one and a half years has shown that these relaxed derivation rules are (almost) never needed, so stricter rules that improve worst-case scenarios for Fault Proofs and Interop are favorable. Furthermore, the more relaxed rules created a lot more corner cases and complex interactions, which made it harder to reason about and test the protocol, increasing the risk of chain splits between different implementations. ## Security and Implementation Considerations ### Reorgs Before Steady Block Derivation, invalid payloads got second chances to be replaced by valid future payloads. Because they will now be immediately replaced by deposit-only payloads, there is a theoretically heightened risk of unsafe chain reorgs. To the best of our knowledge, we haven't experienced this on Base yet. The only conceivable scenarios in which a *valid* batch leads to an *invalid* payload are * a buggy or malicious sequencer+batcher * in the future, that a previously valid Interop dependency referenced in that payload is later invalidated, while the block that contained the Interop dependency was already batched. It is this latter case that inspired the Steady Block Derivation rule. It guarantees that the secondary effects of an invalid Interop dependency are contained to a single block only, which avoids a cascade of cross-L2 Interop reorgs that revisit L2 chains more than once. ### Batcher Hardening In a sense, Holocene shifts some complexity from derivation to the batching phase. Simpler and stricter derivation rules need to be met by a more complex batcher implementation. The batcher must be hardened to guarantee the strict ordering requirements.
They are already mostly met in practice by the current Go implementation, but more by accident than by design. There are edge cases in which the batcher might violate the strict ordering rules. For example, if a channel fails to submit within a set period, the blocks are requeued and some out-of-order batching might occur. A batcher implementation also needs to take extra care that dynamic blobs/calldata switching doesn't lead to out-of-order batches or gaps in scenarios where blocks are requeued, while future channels are already waiting in the mempool for inclusion. Batcher implementations are suggested to follow a fixed nonce to block-range assignment, once the first batcher transaction (which is almost always the only batcher transaction for a channel for current production batcher configurations) starts being submitted. This should avoid out-of-order batches or gaps. It might require implementing some form of persistence in the transaction management, since it isn't possible to reliably recover all globally pending batcher transactions in the L1 network. Furthermore, batcher implementations need to be made aware of the Steady Block Derivation rules, namely that invalid payloads will be derived as deposit-only blocks. So in case of an unsafe reorg, the batcher should wait on the sequencer until it has derived all blocks from L1 in order to only start batching new blocks on top of the possibly deposit-only derived reorg'd chain segment. The sync status should repeatedly be queried and matched against the expected safe chain. In case of any discrepancy, the batcher should stop batching and wait for the sequencer to fully derive up until the latest L1 batcher transactions, and only then continue batching. ### Sync Start Thanks to the new strict frame and batch ordering rules, the sync start algorithm can be simplified in the average case.
The rules guarantee that * an incoming first frame for a new channel leads to discarding previous incomplete frames for a non-closed previous channel in the frame queue and channel bank, and * when the derivation pipeline L1 origin progresses, the batch queue is empty. So the sync start algorithm can optimistically select the last L2 unsafe, safe and finalized heads from the engine and, if the L2 safe head's L1 origin is *plausible* (see the [original sync start description](../../protocol/consensus/derivation.md#finding-the-sync-starting-point) for details), start deriving from this L1 origin. * If the first frame we find is a *first frame* for a channel that includes the safe head (TBD: or even just the following L2 block with the current safe head as parent), we can safely continue derivation from this channel because no previous derivation pipeline state could have influenced the L2 safe head. * If the first frame we find is a non-first frame, then we need to walk back a full channel timeout window to see if we find the start of that channel. * If we find the starting frame, we can continue derivation from it. * If we don't find the starting frame, we need to go back a full channel timeout window before the finalized L2 head's L1 origin. Note regarding the last case that if we don't find a starting frame within a channel timeout window, the channel we did find a frame from must be timed out and would be discarded. The safe block we're looking for can't be in any channel that timed out before its L1 origin, so we don't need to search any further back than a channel timeout window before the finalized L2 head's L1 origin. ## L2 Execution Engine ### Overview The EIP-1559 parameters are encoded in the block header's `extraData` field and can be configured dynamically through the `SystemConfig`. ### Timestamp Activation Holocene, like other network upgrades, is activated at a timestamp.
Changes to the L2 Block execution rules are applied when the `L2 Timestamp >= activation time`. ### Dynamic EIP-1559 Parameters #### EIP-1559 Parameters in Block Header With the Holocene upgrade, the `extraData` header field of each block must have the following format: | Name | Type | Byte Offset | | ------------- | ------------------ | ----------- | | `version` | `u8` | `[0, 1)` | | `denominator` | `u32 (big-endian)` | `[1, 5)` | | `elasticity` | `u32 (big-endian)` | `[5, 9)` | Additionally, * `version` must be `0`, * `denominator` and `elasticity` must be non-zero, * there is no additional data beyond these 9 bytes. Note that `extraData` has a maximum capacity of 32 bytes (to fit in the L1 beacon-chain `extraData` data-type) and its format may be modified/extended by future upgrades. Note also that if the chain had Holocene genesis, the genesis block must have an above-formatted `extraData` representing the initial parameters to be used by the chain. #### EIP-1559 Parameters in `PayloadAttributesV3` The [`PayloadAttributesV3`](https://github.com/ethereum/execution-apis/blob/cea7eeb642052f4c2e03449dc48296def4aafc24/src/engine/cancun.md#payloadattributesv3) type is extended with an additional value, `eip1559Params`: ```rs PayloadAttributesV3: { timestamp: QUANTITY prevRandao: DATA (32 bytes) suggestedFeeRecipient: DATA (20 bytes) withdrawals: array of WithdrawalV1 parentBeaconBlockRoot: DATA (32 bytes) transactions: array of DATA noTxPool: bool gasLimit: QUANTITY or null eip1559Params: DATA (8 bytes) or null } ``` ##### Encoding At and after Holocene activation, `eip1559Params` in `PayloadAttributesV3` must be exactly 8 bytes with the following format: | Name | Type | Byte Offset | | ------------- | ------------------ | ----------- | | `denominator` | `u32 (big-endian)` | `[0, 4)` | | `elasticity` | `u32 (big-endian)` | `[4, 8)` | ##### PayloadID computation If `eip1559Params != null`, the `eip1559Params` is included in the `PayloadID` hasher directly after the
`gasLimit` field. #### Execution ##### Payload Attributes Processing Prior to Holocene activation, `eip1559Params` in `PayloadAttributesV3` must be null and is otherwise considered invalid. At and after Holocene activation, any `ExecutionPayload` corresponding to some `PayloadAttributesV3` must contain `extraData` formatted as the [header value](#eip-1559-parameters-in-block-header). The `denominator` and `elasticity` values within this `extraData` must correspond to those in `eip1559Params`, unless both are 0. When both are 0, the [prior EIP-1559 constants](../../protocol/execution/index.md#1559-parameters) must be used to populate `extraData` instead. ##### Base Fee Computation Prior to the Holocene upgrade, the EIP-1559 denominator and elasticity parameters used to compute the block base fee were [constants](../../protocol/execution/index.md#1559-parameters). With the Holocene upgrade, these parameters are instead determined as follows: * if Holocene is not active at `parent_header.timestamp`, the [prior EIP-1559 constants](../../protocol/execution/index.md#1559-parameters) are used. Note that `parent_header.extraData` is empty prior to Holocene, except possibly for the genesis block. * if Holocene is active at `parent_header.timestamp`, then the parameters from `parent_header.extraData` are used. #### Rationale Placing the EIP-1559 parameters within the L2 block header allows us to retain the purity of the function that computes the next block's base fee from its parent block header, while still allowing them to be dynamically configured. Dynamic configuration is handled similarly to `gasLimit`, with the derivation pipeline providing the appropriate `SystemConfig` contract values to the block builder via `PayloadAttributesV3` parameters.
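The 9-byte `extraData` layout described above can be exercised with a short sketch. The helper names are hypothetical, and the example values (denominator 250, elasticity 6) are used purely for illustration:

```python
import struct

def encode_holocene_extra_data(denominator: int, elasticity: int) -> bytes:
    """Pack version (u8, must be 0) plus two big-endian u32 values."""
    assert denominator != 0 and elasticity != 0, "must be non-zero"
    return struct.pack(">BII", 0, denominator, elasticity)

def decode_holocene_extra_data(extra: bytes) -> tuple[int, int]:
    """Reverse of the above; enforces the spec's three extra rules."""
    assert len(extra) == 9, "no additional data beyond these 9 bytes"
    version, denominator, elasticity = struct.unpack(">BII", extra)
    assert version == 0
    assert denominator != 0 and elasticity != 0
    return denominator, elasticity

# Example parameters (250, 6) encode to 9 bytes:
extra = encode_holocene_extra_data(250, 6)
assert extra.hex() == "00000000fa00000006"
assert decode_holocene_extra_data(extra) == (250, 6)
```

The same two big-endian `u32` values, without the leading version byte, form the 8-byte `eip1559Params` field of `PayloadAttributesV3`.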
## Holocene ### Activation Timestamps | Network | Activation timestamp | | --------- | -------------------------------------- | | `mainnet` | `1736445601` (2025-01-09 18:00:01 UTC) | | `sepolia` | `1732633200` (2024-11-26 15:00:00 UTC) | ### Execution Layer * [Dynamic EIP-1559 Parameters](/upgrades/holocene/exec-engine#dynamic-eip-1559-parameters) ### Consensus Layer * [Holocene Derivation](/upgrades/holocene/derivation#holocene-derivation) ### Smart Contracts * [System Config](/upgrades/holocene/system-config) ## System Config ### Overview The `SystemConfig` is updated to allow for dynamic EIP-1559 parameters. #### `ConfigUpdate` When the configuration is updated, a [`ConfigUpdate`](../../protocol/consensus/derivation.md#system-config-updates) event MUST be emitted with the following parameters: | `version` | `updateType` | `data` | Usage | | ------------ | ------------ | ---------------------------------------------------------- | ------------------------------------------------ | | `uint256(0)` | `uint8(4)` | `abi.encode((uint256(_denominator) << 32) \| _elasticity)` | Modifies the EIP-1559 denominator and elasticity | Note that the above encoding is the format emitted by the SystemConfig event, which differs from the `extraData` format in the block header. #### Initialization The following actions should happen during the initialization of the `SystemConfig`: * `emit ConfigUpdate.BATCHER` * `emit ConfigUpdate.FEE_SCALARS` * `emit ConfigUpdate.GAS_LIMIT` * `emit ConfigUpdate.UNSAFE_BLOCK_SIGNER` Intentionally absent from this is `emit ConfigUpdate.EIP_1559_PARAMS`. As long as these values are unset, the default values will be used. Requiring the EIP-1559 parameters to be set during initialization would add a strict requirement that the L2 hardforks before the L1 contracts are upgraded, and this is complicated to manage in a world of many chains.
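The `ConfigUpdate` `data` encoding above, `abi.encode((uint256(_denominator) << 32) | _elasticity)`, packs both `u32` values into a single 32-byte word. A minimal sketch (hypothetical helper names; the example values are illustrative):

```python
def encode_eip1559_config_update(denominator: int, elasticity: int) -> bytes:
    """ABI-encode (denominator << 32) | elasticity as one uint256 word."""
    packed = (denominator << 32) | elasticity
    return packed.to_bytes(32, "big")

def decode_eip1559_config_update(data: bytes) -> tuple[int, int]:
    """Recover the two u32 values from the 32-byte event data word."""
    assert len(data) == 32
    word = int.from_bytes(data, "big")
    return (word >> 32) & 0xFFFFFFFF, word & 0xFFFFFFFF

data = encode_eip1559_config_update(250, 6)
assert len(data) == 32
assert decode_eip1559_config_update(data) == (250, 6)
```

Note the shift-based layout here is distinct from the 9-byte versioned `extraData` header encoding, as the spec points out.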
#### Modifying EIP-1559 Parameters A new `SystemConfig` `UpdateType` is introduced that enables the modification of [EIP-1559](https://eips.ethereum.org/EIPS/eip-1559) parameters. This allows the chain operator to modify the `BASE_FEE_MAX_CHANGE_DENOMINATOR` and the `ELASTICITY_MULTIPLIER`. #### Interface ##### EIP-1559 Params ##### `setEIP1559Params` This function MUST only be callable by the chain governor. ```solidity function setEIP1559Params(uint32 _denominator, uint32 _elasticity) ``` The `_denominator` and `_elasticity` MUST be set to values greater than 0. It is possible for the chain operator to set EIP-1559 parameters that result in poor user experience. ##### `eip1559Elasticity` This function returns the currently configured EIP-1559 elasticity. ```solidity function eip1559Elasticity()(uint32) ``` ##### `eip1559Denominator` This function returns the currently configured EIP-1559 denominator. ```solidity function eip1559Denominator()(uint32) ``` ## Granite L2 Chain Derivation Changes ### Protocol Parameter Changes The following table gives an overview of the changes in parameters. | Parameter | Pre-Granite (default) value | Granite value | Notes | | ----------------- | --------------------------- | ------------- | ----------------------------- | | `CHANNEL_TIMEOUT` | 300 | 50 | Protocol Constant is reduced. | ### Reduce Channel Timeout With Granite, the `CHANNEL_TIMEOUT` is reduced from 300 to 50 L1 blocks. The new rule activation timestamp is based on the timestamp of the L1 block in which the channel frame is included. ## L2 Execution Engine ### EVM Changes #### `bn256Pairing` precompile input restriction The `bn256Pairing` precompile execution has additional validation on its input. The precompile reverts if its input is larger than `112687` bytes. This is the input size that consumes approximately 20M gas given the latest `bn256Pairing` gas schedule on L2.
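As a plausibility check of the `112687` bound, the arithmetic can be sketched assuming the post-Istanbul `bn256Pairing` pricing of `45,000 + 34,000 × k` gas for `k` 192-byte point pairs (an assumption for illustration; the chain's actual gas schedule governs):

```python
PAIR_SIZE = 192          # each pairing input element pair is 192 bytes
MAX_INPUT = 112_687      # Granite input-size cap in bytes

# Largest number of point pairs whose input fits under the cap:
max_pairs = MAX_INPUT // PAIR_SIZE
gas = 45_000 + 34_000 * max_pairs   # assumed EIP-1108-style pricing

assert max_pairs == 586
assert gas == 19_969_000            # roughly the 20M gas cited above
```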
## Granite ### Activation Timestamps | Network | Activation timestamp | | --------- | -------------------------------------- | | `mainnet` | `1726070401` (2024-09-11 16:00:01 UTC) | | `sepolia` | `1723478400` (2024-08-12 16:00:00 UTC) | ### Execution Layer * [Limit `bn256Pairing` precompile input size](/upgrades/granite/exec-engine#bn256pairing-precompile-input-restriction) ### Consensus Layer * [Reduce Channel Timeout to 50](/upgrades/granite/derivation#reduce-channel-timeout) ## Fjord L2 Chain Derivation Changes ## Protocol Parameter Changes The following table gives an overview of the changes in parameters. | Parameter | Pre-Fjord (default) value | Fjord value | Notes | | --------------------------- | ------------------------- | ------------- | --------------------------------------------------------------- | | `max_sequencer_drift` | 600 | 1800 | Was a protocol parameter since Bedrock. Now becomes a constant. | | `MAX_RLP_BYTES_PER_CHANNEL` | 10,000,000 | 100,000,000 | Protocol Constant is increasing. | | `MAX_CHANNEL_BANK_SIZE` | 100,000,000 | 1,000,000,000 | Protocol Constant is increasing. | ### Timestamp Activation Fjord, like other network upgrades, is activated at a timestamp. Changes to the L2 Block execution rules are applied when the `L2 Timestamp >= activation time`. Changes to derivation are applied when it is considering data from an L1 block whose timestamp is greater than or equal to the activation timestamp. The change of the `max_sequencer_drift` parameter activates with the L1 origin block timestamp. If Fjord is not activated at genesis, it must be activated at least one block after the Ecotone activation block. This ensures that the network upgrade transactions don't conflict. ### Constant Maximum Sequencer Drift With Fjord, the `max_sequencer_drift` parameter becomes a constant of value `1800` *seconds*, translating to a fixed maximum sequencer drift of 30 minutes.
Before Fjord, this was a chain parameter that was set once at chain creation, with a default value of `600` seconds, i.e., 10 minutes. Most chains use this value currently. #### Rationale Discussions amongst chain operators came to the unanimous conclusion that a larger value than the current default would be easier to work with. If a sequencer's L1 connection breaks, this drift value determines how long it can still produce blocks without violating the timestamp drift derivation rules. It was furthermore agreed that configurability after this increase is not important. So it is being made a constant. An alternative idea that is being considered for a future hardfork is to make this an L1-configurable protocol parameter via the `SystemConfig` update mechanism. #### Security Considerations The rules around the activation time are deliberately being kept simple, so no other logic needs to be applied other than to change the parameter to a constant. The first Fjord block would in theory accept older L1-origin timestamps than its predecessor. However, since the L1 origin timestamp must also increase, the only noteworthy scenario that can happen is that the first few Fjord blocks will be in the same epoch as the last pre-Fjord blocks, even if these blocks would not be allowed to have these L1-origin timestamps according to pre-Fjord rules. So the same L1 timestamp would be shared within a pre- and post-Fjord mixed epoch. This is considered a feature and is not considered a security issue. ### Increasing `MAX_RLP_BYTES_PER_CHANNEL` and `MAX_CHANNEL_BANK_SIZE` With Fjord, `MAX_RLP_BYTES_PER_CHANNEL` will be increased from 10,000,000 bytes to 100,000,000 bytes, and `MAX_CHANNEL_BANK_SIZE` will be increased from 100,000,000 bytes to 1,000,000,000 bytes. The usage of `MAX_RLP_BYTES_PER_CHANNEL` is defined in [Channel Format](../../protocol/consensus/derivation.md#channel-format).
The usage of `MAX_CHANNEL_BANK_SIZE` is defined in [Channel Bank Pruning](../../protocol/consensus/derivation.md#pruning). Span Batches previously had a limit `MAX_SPAN_BATCH_SIZE` which was equal to `MAX_RLP_BYTES_PER_CHANNEL`. Fjord creates a new constant `MAX_SPAN_BATCH_ELEMENT_COUNT` for the element count limit & removes `MAX_SPAN_BATCH_SIZE`. The size of the channel is still checked with `MAX_RLP_BYTES_PER_CHANNEL`. The new value will be used when the timestamp of the L1 origin of the derivation pipeline >= the Fjord activation timestamp. #### Rationale A block with a gas limit of 30 Million gas has a maximum theoretical size of 7.5 Megabytes by being filled with transactions containing only zero bytes. Currently, a byte with the value `0` consumes 4 gas. If the block gas limit is raised above 40 Million gas, it is possible to create a block that is larger than `MAX_RLP_BYTES_PER_CHANNEL`. L2 blocks cannot be split across channels, which means that a block that is larger than `MAX_RLP_BYTES_PER_CHANNEL` cannot be batch submitted. By raising this limit to 100,000,000 bytes, we can batch submit blocks with a gas limit of up to 400 Million Gas. In addition, we are able to improve compression ratios by increasing the amount of data that can be inserted into a single channel. With 33% compression ratio over 6 blobs, we are currently submitting 0.77 MB of compressed data & 2.2 MB of uncompressed data per channel. This will allow us to use up to approximately 275 blobs per channel. Raising `MAX_CHANNEL_BANK_SIZE` is helpful to ensure that we are able to process these larger channels. We retain the same ratio of 10 between `MAX_RLP_BYTES_PER_CHANNEL` and `MAX_CHANNEL_BANK_SIZE`. #### Security Considerations Raising these limits increases the amount of resources a rollup node would require. Specifically nodes may have to allocate large chunks of memory for a channel and will have to potentially allocate more memory to the channel bank.
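The figures in the rationale above can be reproduced with quick arithmetic, assuming 4 gas per zero byte and roughly 120 KB of usable data per blob (the per-blob figure is an assumption for illustration):

```python
ZERO_BYTE_GAS = 4             # calldata gas cost of a zero byte
BLOB_BYTES = 120_000          # approximate usable bytes per blob (assumption)
COMPRESSION_RATIO = 0.33      # compressed size as a fraction of uncompressed

# A 30M-gas block filled with zero bytes:
assert 30_000_000 // ZERO_BYTE_GAS == 7_500_000        # 7.5 MB

# Gas limit at which a zero-byte-filled block reaches the new 100 MB cap:
assert 100_000_000 * ZERO_BYTE_GAS == 400_000_000      # 400M gas

# Approximate blobs needed for a maximally-filled channel:
blobs = 100_000_000 * COMPRESSION_RATIO / BLOB_BYTES
assert round(blobs) == 275
```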
`MAX_RLP_BYTES_PER_CHANNEL` was originally added to avoid zip bomb attacks. The system is still exposed to these attacks, but these limits are straightforward to handle in a node. The Fault Proof environment is more constrained than a typical node and increasing these limits will require more resources than are currently required. The change in `MAX_CHANNEL_BANK_SIZE` is not relevant to the first implementation of Fault Proofs because this limit only tells the node when to start pruning & once memory is allocated in the FPVM, it is not garbage collected. This means that increasing `MAX_CHANNEL_BANK_SIZE` does not increase the maximum resource usage of the FPP. Increasing `MAX_RLP_BYTES_PER_CHANNEL` could cause more resource usage in the FPVM; however, we consider this increase reasonable because this increase is in the amount of data handled at once rather than the total amount of data handled in the program. Instead of using a single channel, the batcher could submit 10 channels prior to this change, which would cause the Fault Proof Program to consume a very similar amount of resources. ## Brotli Channel Compression [legacy-channel-format]: ../../protocol/consensus/derivation.md#channel-format Fjord introduces a new versioned channel encoding format to support alternate compression algorithms, with the [legacy channel format][legacy-channel-format] remaining supported. The versioned format is as follows: ```text channel_encoding = channel_version_byte ++ compress(rlp_batches) ``` The `channel_version_byte` must never have its 4 lower order bits set to `0b1000 = 8` or `0b1111 = 15`, which are reserved for usage by the header byte of zlib encoded data (see page 5 of [RFC-1950][rfc1950]). This allows a channel decoder to determine if a channel encoding is legacy or versioned format by testing for these bit values.
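The detection rule above can be sketched as a small helper (illustrative names; a zlib stream's first byte always carries `CM = 8` in its low 4 bits, and the value 15 is reserved):

```python
def is_legacy_channel(first_byte: int) -> bool:
    """A zlib (RFC-1950) header byte has CM = 8 in its low 4 bits;
    the value 15 is reserved. Versioned channels must avoid both."""
    return first_byte & 0x0F in (8, 15)

BROTLI_VERSION_BYTE = 0x01  # the only valid versioned value as of Fjord

def channel_compression(first_byte: int) -> str:
    """Classify a channel encoding by its first byte."""
    if is_legacy_channel(first_byte):
        return "zlib (legacy format)"
    if first_byte == BROTLI_VERSION_BYTE:
        return "brotli (versioned format)"
    raise ValueError(f"unknown channel version byte: {first_byte:#x}")

# 0x78 is the most common zlib header byte (CM=8, CINFO=7):
assert channel_compression(0x78) == "zlib (legacy format)"
assert channel_compression(0x01) == "brotli (versioned format)"
```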
If the channel encoding is determined to be in the versioned format, the only valid `channel_version_byte` is `1`, which indicates `compress()` is the Brotli compression algorithm (as specified in [RFC-7932][rfc7932]) with no custom dictionary.

[rfc7932]: https://datatracker.ietf.org/doc/html/rfc7932
[rfc1950]: https://www.rfc-editor.org/rfc/rfc1950.html

## Network upgrade automation transactions The Fjord hardfork activation block contains the following transactions, in this order: * L1 Attributes Transaction * User deposits from L1 * Network Upgrade Transactions * GasPriceOracle deployment * Update GasPriceOracle Proxy ERC-1967 Implementation Slot * GasPriceOracle Enable Fjord To not modify or interrupt the system behavior around gas computation, this block will not include any sequenced transactions by setting `noTxPool: true`. ### GasPriceOracle Deployment The `GasPriceOracle` contract is upgraded to support the new Fjord L1 data fee computation. Post-fork, this contract will use FastLZ to compute the L1 data fee.
To perform this upgrade, a deposit transaction is derived with the following attributes: * `from`: `0x4210000000000000000000000000000000000002` * `to`: `null` * `mint`: `0` * `value`: `0` * `gasLimit`: `1,450,000` * `data`: `0x60806040523...` (full bytecode) * `sourceHash`: `0x86122c533fdcb89b16d8713174625e44578a89751d96c098ec19ab40a51a8ea3`, computed with the "Upgrade-deposited" type, with `intent = "Fjord: Gas Price Oracle Deployment"` This results in the Fjord GasPriceOracle contract being deployed to `0xa919894851548179A0750865e7974DA599C0Fac7`, to verify:

```bash
cast compute-address --nonce=0 0x4210000000000000000000000000000000000002
Computed Address: 0xa919894851548179A0750865e7974DA599C0Fac7
```

Verify `sourceHash`:

```bash
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Fjord: Gas Price Oracle Deployment"))
# 0x86122c533fdcb89b16d8713174625e44578a89751d96c098ec19ab40a51a8ea3
```

Verify `data`:

```bash
git checkout 52abfb507342191ae1f960b443ae8aec7598755c
pnpm clean && pnpm install && pnpm build
jq -r ".bytecode.object" packages/contracts-bedrock/forge-artifacts/GasPriceOracle.sol/GasPriceOracle.json
```

This transaction MUST deploy a contract with the following code hash `0xa88fa50a2745b15e6794247614b5298483070661adacb8d32d716434ed24c6b2`. ### GasPriceOracle Proxy Update This transaction updates the GasPriceOracle Proxy ERC-1967 implementation slot to point to the new GasPriceOracle deployment.
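All of the `sourceHash` values in these upgrade transactions follow the same "Upgrade-deposited" scheme verified above. A Python sketch of the scheme follows; Keccak-256 is not in the standard library, so the hash function is passed in as a parameter, and the helper name is ours:

```python
UPGRADE_DEPOSITED_DOMAIN = 2  # "Upgrade-deposited" source-hash domain

def upgrade_deposited_source_hash(intent: str, keccak256) -> bytes:
    # sourceHash = keccak256(bytes32(domain) ++ keccak256(intent))
    domain = UPGRADE_DEPOSITED_DOMAIN.to_bytes(32, "big")
    return keccak256(domain + keccak256(intent.encode()))
```

With a real Keccak-256 implementation supplied as `keccak256`, calling this with `intent = "Fjord: Gas Price Oracle Deployment"` reproduces the `sourceHash` shown above.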
A deposit transaction is derived with the following attributes: * `from`: `0x0000000000000000000000000000000000000000` * `to`: `0x420000000000000000000000000000000000000F` (Gas Price Oracle Proxy) * `mint`: `0` * `value`: `0` * `gasLimit`: `50,000` * `data`: `0x3659cfe6000000000000000000000000a919894851548179a0750865e7974da599c0fac7` * `sourceHash`: `0x1e6bb0c28bfab3dc9b36ffb0f721f00d6937f33577606325692db0965a7d58c6` computed with the "Upgrade-deposited" type, with `intent = "Fjord: Gas Price Oracle Proxy Update"` Verify data: ```bash cast concat-hex $(cast sig "upgradeTo(address)") $(cast abi-encode "upgradeTo(address)" 0xa919894851548179A0750865e7974DA599C0Fac7) # 0x3659cfe6000000000000000000000000a919894851548179a0750865e7974da599c0fac7 ``` Verify `sourceHash`: ```bash cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Fjord: Gas Price Oracle Proxy Update")) # 0x1e6bb0c28bfab3dc9b36ffb0f721f00d6937f33577606325692db0965a7d58c6 ``` ### GasPriceOracle Enable Fjord This transaction informs the GasPriceOracle to start using the Fjord gas calculation formula. 
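The ERC-1967 `upgradeTo` calldata verified above (4-byte selector followed by the 32-byte ABI-encoded address) can also be reproduced directly; the helper name is ours:

```python
UPGRADE_TO_SELECTOR = bytes.fromhex("3659cfe6")  # cast sig "upgradeTo(address)"

def upgrade_to_calldata(new_impl: str) -> str:
    addr = bytes.fromhex(new_impl.removeprefix("0x"))
    assert len(addr) == 20
    # ABI encoding left-pads the 20-byte address to 32 bytes.
    return "0x" + (UPGRADE_TO_SELECTOR + bytes(12) + addr).hex()
```

For the Fjord GasPriceOracle implementation address, this returns exactly the `data` field given above.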
A deposit transaction is derived with the following attributes: * `from`: `0xDeaDDEaDDeAdDeAdDEAdDEaddeAddEAdDEAd0001` (Depositor Account) * `to`: `0x420000000000000000000000000000000000000F` (Gas Price Oracle Proxy) * `mint`: `0` * `value`: `0` * `gasLimit`: `90,000` * `data`: `0x8e98b106` * `sourceHash`: `0xbac7bb0d5961cad209a345408b0280a0d4686b1b20665e1b0f9cdafd73b19b6b`, computed with the "Upgrade-deposited" type, with `intent = "Fjord: Gas Price Oracle Set Fjord"` Verify data:

```bash
cast sig "setFjord()"
# 0x8e98b106
```

Verify `sourceHash`:

```bash
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Fjord: Gas Price Oracle Set Fjord"))
# 0xbac7bb0d5961cad209a345408b0280a0d4686b1b20665e1b0f9cdafd73b19b6b
```

## L2 Execution Engine ### Fees #### L1-Cost fees (L1 Fee Vault) ##### Fjord L1-Cost fee changes (FastLZ estimator) Fjord updates the L1 cost calculation function to use a FastLZ-based compression estimator. The L1 cost is computed as:

```pseudocode
l1FeeScaled = l1BaseFeeScalar*l1BaseFee*16 + l1BlobFeeScalar*l1BlobBaseFee
estimatedSizeScaled = max(minTransactionSize * 1e6, intercept + fastlzCoef*fastlzSize)
l1Fee = estimatedSizeScaled * l1FeeScaled / 1e12
```

The final `l1Fee` computation is an unlimited-precision unsigned integer computation, with the result in Wei and having `uint256` range.
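A reference sketch of this computation in integer arithmetic, using the constant values tabulated in the next section (the function name is ours, for illustration only):

```python
INTERCEPT = -42_585_600     # intercept constant, scaled by 1e6
FASTLZ_COEF = 836_500       # FastLZ coefficient, scaled by 1e6
MIN_TRANSACTION_SIZE = 100  # lower bound on transaction size, in bytes

def fjord_l1_fee(fastlz_size: int, l1_base_fee: int, l1_blob_base_fee: int,
                 l1_base_fee_scalar: int, l1_blob_fee_scalar: int) -> int:
    l1_fee_scaled = (l1_base_fee_scalar * l1_base_fee * 16
                     + l1_blob_fee_scalar * l1_blob_base_fee)
    estimated_size_scaled = max(MIN_TRANSACTION_SIZE * 1_000_000,
                                INTERCEPT + FASTLZ_COEF * fastlz_size)
    # Result is in Wei; Python ints give the unlimited-precision arithmetic.
    return estimated_size_scaled * l1_fee_scaled // 10**12
```

For very small transactions the `minTransactionSize` floor dominates: at `fastlzSize = 100` the regression estimate `-42_585_600 + 836_500 * 100 = 41_064_400` is below `100 * 1e6`, so the floor applies.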
The values in this computation are as follows:

| Input arg            | Type      | Description                                                       | Value                    |
| -------------------- | --------- | ----------------------------------------------------------------- | ------------------------ |
| `l1BaseFee`          | `uint256` | L1 base fee of the latest L1 origin registered in the L2 chain    | varies, L1 fee           |
| `l1BlobBaseFee`      | `uint256` | Blob gas price of the latest L1 origin registered in the L2 chain | varies, L1 fee           |
| `fastlzSize`         | `uint256` | Size of the FastLZ-compressed RLP-encoded signed tx               | varies, per transaction  |
| `l1BaseFeeScalar`    | `uint32`  | L1 base fee scalar, scaled by `1e6`                               | varies, L2 configuration |
| `l1BlobFeeScalar`    | `uint32`  | L1 blob fee scalar, scaled by `1e6`                               | varies, L2 configuration |
| `intercept`          | `int32`   | Intercept constant, scaled by `1e6` (can be negative)             | -42\_585\_600            |
| `fastlzCoef`         | `uint32`  | FastLZ coefficient, scaled by `1e6`                               | 836\_500                 |
| `minTransactionSize` | `uint32`  | A lower bound on transaction size, in bytes                       | 100                      |

Previously, `l1BaseFeeScalar` and `l1BlobFeeScalar` were used to encode the compression ratio, due to the inaccuracy of the L1 cost function. However, the new cost function takes the compression ratio into account, so these scalars should be adjusted to account for any previous compression ratio they encoded. ##### FastLZ Implementation FastLZ compression must be implemented equivalently to the `fastlz_compress` function in `fastlz.c` at the following [commit](https://github.com/ariya/FastLZ/blob/344eb4025f9ae866ebf7a2ec48850f7113a97a42/fastlz.c#L482-L506). ##### L1-Cost linear regression details The `intercept` and `fastlzCoef` constants are calculated by linear regression using a dataset of previous L2 transactions. The dataset is generated by iterating over all transactions in a given time range, and performing the following actions. For each transaction: 1. Compress the payload using FastLZ. Record the size of the compressed payload as `fastlzSize`. 2.
Emulate the change in batch size when adding the transaction to a batch, compressed with Brotli 10. Record the change in batch size as `bestEstimateSize`. Once this dataset is generated, a linear regression can be calculated using the `bestEstimateSize` as the dependent variable and `fastlzSize` as the independent variable. We generated a dataset from two weeks of post-Ecotone transactions on Optimism Mainnet, as we found it to be the most representative of performance across multiple chains and time periods. More details on the linear regression and datasets used can be found in this [repository](https://github.com/roberto-bayardo/compression-analysis/tree/main). #### L1 Gas Usage Estimation The `L1GasUsed` property is deprecated because it does not capture the L1 blob gas used by a transaction, and will be removed in a future network upgrade. Users can continue to use the `L1Fee` field to retrieve the L1 fee for a given transaction. ## Fjord ### Activation Timestamps

| Network   | Activation timestamp                   |
| --------- | -------------------------------------- |
| `mainnet` | `1720627201` (2024-07-10 16:00:01 UTC) |
| `sepolia` | `1716998400` (2024-05-29 16:00:00 UTC) |

### Execution Layer * [RIP-7212: Precompile for secp256r1 Curve Support](/protocol/execution/evm/precompiles#P256VERIFY) * [FastLZ compression for L1 data fee calculation](/upgrades/fjord/exec-engine#fees) * [Deprecate the `getL1GasUsed` method on the `GasPriceOracle` contract](/upgrades/fjord/predeploys#l1-gas-usage-estimation) * [Deprecate the `L1GasUsed` field on the transaction receipt](/upgrades/fjord/exec-engine#l1-gas-usage-estimation) ### Consensus Layer * [Constant maximum sequencer drift](/upgrades/fjord/derivation#constant-maximum-sequencer-drift) * [Brotli channel compression](/upgrades/fjord/derivation#brotli-channel-compression) * [Increase Max Bytes Per Channel and Max Channel Bank Size](/upgrades/fjord/derivation#increasing-max_rlp_bytes_per_channel-and-max_channel_bank_size) ## Predeploys ###
GasPriceOracle Following the Fjord upgrade, three additional values are used for L1 fee computation: * `costIntercept` * `costFastlzCoef` * `minTransactionSize` These values are hard-coded constants in the `GasPriceOracle` contract. The calculation follows the same formula outlined in the [Fjord L1-Cost fee changes (FastLZ estimator)](exec-engine.md#fjord-l1-cost-fee-changes-fastlz-estimator) section. A new method is introduced: `getL1FeeUpperBound(uint256)`. This method returns an upper bound for the L1 fee for a given transaction size. It is provided for callers who wish to estimate L1 transaction costs in the write path, and is much more gas-efficient than `getL1Fee`. The upper-limit overhead is assumed to be `original/255+16`, borrowed from LZ4. According to historical data, this approach covers more than 99.99% of transactions. This is implemented as follows:

```solidity
function getL1FeeUpperBound(uint256 unsignedTxSize) external view returns (uint256) {
    // Add 68 to account for the unsigned tx
    uint256 txSize = unsignedTxSize + 68;
    // txSize / 255 + 16 is the practical fastlz upper bound, covering 99.99% of txs
    uint256 flzUpperBound = txSize + txSize / 255 + 16;

    int256 estimatedSize = costIntercept + costFastlzCoef * int256(flzUpperBound);
    if (estimatedSize < int256(minTransactionSize) * 1e6) {
        estimatedSize = int256(minTransactionSize) * 1e6;
    }

    uint256 l1FeeScaled = baseFeeScalar() * l1BaseFee() * 16 + blobBaseFeeScalar() * blobBaseFee();
    return uint256(estimatedSize) * l1FeeScaled / (10 ** (DECIMALS * 2));
}
```

#### L1 Gas Usage Estimation The `getL1GasUsed` method is updated to take into account the improved [compression estimation](exec-engine.md#fees) accuracy as part of the Fjord upgrade.
```solidity
function getL1GasUsed(bytes memory _data) public view returns (uint256) {
    if (isFjord) {
        // Add 68 to the size to account for the unsigned tx
        int256 flzSize = int256(LibZip.flzCompress(_data).length) + 68;

        int256 estimatedSize = costIntercept + costFastlzCoef * flzSize;
        if (estimatedSize < int256(minTransactionSize) * 1e6) {
            estimatedSize = int256(minTransactionSize) * 1e6;
        }

        // Assume the compressed data is mostly non-zero, and would pay 16 gas per
        // calldata byte; divide by 1e6 to undo the scaling of the regression constants
        return uint256(estimatedSize) * 16 / 1e6;
    }
    // ...
}
```

The `getL1GasUsed` method is deprecated as of Fjord because, with the introduction of blobs, a transaction consumes two kinds of gas, which a single value cannot capture. This function will revert when called in a future upgrade. Users can continue to use the `getL1Fee` method to estimate the L1 fee for a given transaction, or the new `getL1FeeUpperBound` method introduced by Fjord as a lower-gas alternative. ## Derivation ### Ecotone: Blob Retrieval With the Ecotone upgrade, the retrieval stage is extended to support an additional DA source: [EIP-4844] blobs. After the Ecotone upgrade we modify the iteration over batcher transactions to treat transactions of transaction-type == `0x03` (`BLOB_TX_TYPE`) differently. If the batcher transaction is a blob transaction, then its calldata MUST be ignored should it be present. Instead: * For each blob hash in `blob_versioned_hashes`, retrieve the blob that matches it. A blob may be retrieved from any of a number of different sources. Retrieval from a local beacon node, through the `/eth/v1/beacon/blob_sidecars/` endpoint, with the `indices` filter to skip unrelated blobs, is recommended. For each retrieved blob: * The blob SHOULD (MUST, if the source is untrusted) be cryptographically verified against its versioned hash. * If the blob has a [valid encoding](#blob-encoding), decode it into its continuous byte-string and pass that on to the next phase. Otherwise the blob is ignored.
Note that batcher transactions of type blob must be processed in the same loop as other batcher transactions to preserve the invariant that batches are always processed in the order they appear in the block. We ignore calldata in blob transactions so that it may be used in the future for batch metadata or other purposes. ### Blob Encoding Each blob in an [EIP-4844] transaction consists of `FIELD_ELEMENTS_PER_BLOB = 4096` field elements. Each field element is a number in a prime field of `BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513`. This number does not represent a full `uint256`: `math.log2(BLS_MODULUS) = 254.8570894...` The [L1 consensus-specs](https://github.com/ethereum/consensus-specs/blob/master/specs/deneb/polynomial-commitments.md) describe the encoding of this polynomial. The field elements are encoded as big-endian integers (`KZG_ENDIANNESS = big`). To save computational overhead, only `254` bits (equivalent to `31.75` bytes) per field element are used for rollup data, so `4` field elements combine to effectively carry `127` bytes. `127` bytes of application-layer rollup data are encoded at a time, into 4 adjacent field elements of the blob:

```python
# read(N): read the next N bytes from the application-layer rollup-data. The next read starts where the last stopped.
# write(V): append V (one or more bytes) to the raw blob.
bytes tailA = read(31)
byte x = read(1)
byte A = x & 0b0011_1111
write(A)
write(tailA)

bytes tailB = read(31)
byte y = read(1)
byte B = (y & 0b0000_1111) | ((x & 0b1100_0000) >> 2)
write(B)
write(tailB)

bytes tailC = read(31)
byte z = read(1)
byte C = z & 0b0011_1111
write(C)
write(tailC)

bytes tailD = read(31)
byte D = ((z & 0b1100_0000) >> 2) | ((y & 0b1111_0000) >> 4)
write(D)
write(tailD)
```

Each written field element looks like this: * Starts with one of the prepared 6-bit left-padded byte values, to keep the field element within valid range. * Followed by 31 bytes of application-layer data, to fill the low 31 bytes of the field element. The written output should look like this:

```text
<----- element 0 -----><----- element 1 -----><----- element 2 -----><----- element 3 ----->
| byte A | tailA... || byte B | tailB... || byte C | tailC... || byte D | tailD... |
```

The above is repeated 1024 times, to fill all `4096` elements, with a total of `(4 * 31 + 3) * 1024 = 130048` bytes of data. When decoding a blob, the top-most two bits of each field element must be 0, to make the encoding/decoding bijective. The first byte of rollup data (the second byte in the first field element) is used as a version byte. In version `0`, the next 3 bytes of data are used to encode the length of the rollup data, as a big-endian `uint24`. Any trailing data past the length delimiter must be 0, to keep the encoding/decoding bijective. If the length is larger than `130048 - 4`, the blob is invalid. If any of the encoding is invalid, the blob as a whole must be ignored.
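The per-round packing above can be sketched as a runnable round trip, encoding 127 bytes into 4 field elements and back (the helper names are ours, for illustration only):

```python
def encode_round(chunk: bytes) -> bytes:
    # Pack 127 bytes of rollup data into 4 field elements (128 bytes),
    # keeping the top two bits of every element zero so each element
    # stays below the BLS modulus.
    assert len(chunk) == 127
    stream = iter(chunk)

    def read(n: int) -> bytes:
        return bytes(next(stream) for _ in range(n))

    out = bytearray()
    tail_a, x = read(31), read(1)[0]
    out.append(x & 0b0011_1111)
    out += tail_a
    tail_b, y = read(31), read(1)[0]
    out.append((y & 0b0000_1111) | ((x & 0b1100_0000) >> 2))
    out += tail_b
    tail_c, z = read(31), read(1)[0]
    out.append(z & 0b0011_1111)
    out += tail_c
    tail_d = read(31)
    out.append(((z & 0b1100_0000) >> 2) | ((y & 0b1111_0000) >> 4))
    out += tail_d
    return bytes(out)

def decode_round(elements: bytes) -> bytes:
    # Inverse of encode_round; only valid when the top two bits of each
    # field element are zero, as the decoding rules require.
    assert len(elements) == 128
    a, b, c, d = (elements[i] for i in (0, 32, 64, 96))
    assert all(e < 0x40 for e in (a, b, c, d))
    x = (a & 0x3F) | ((b & 0x30) << 2)
    y = (b & 0x0F) | ((d & 0x0F) << 4)
    z = (c & 0x3F) | ((d & 0x30) << 2)
    return (elements[1:32] + bytes([x]) + elements[33:64] + bytes([y])
            + elements[65:96] + bytes([z]) + elements[97:128])
```

The round trip demonstrates the bijectivity requirement: every encoded element has its top two bits clear, and decoding recovers the original 127 bytes.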
[EIP-4844]: https://eips.ethereum.org/EIPS/eip-4844

### Network upgrade automation transactions The Ecotone hardfork activation block contains the following transactions, in this order: * L1 Attributes Transaction, using the pre-Ecotone `setL1BlockValues` * User deposits from L1 * Network Upgrade Transactions * L1Block deployment * GasPriceOracle deployment * Update L1Block Proxy ERC-1967 Implementation Slot * Update GasPriceOracle Proxy ERC-1967 Implementation Slot * GasPriceOracle Enable Ecotone * Beacon block roots contract deployment (EIP-4788) To not modify or interrupt the system behavior around gas computation, this block will not include any sequenced transactions by setting `noTxPool: true`. #### L1Block Deployment The `L1Block` contract is upgraded to process the new Ecotone L1-data-fee parameters and L1 blob base-fee. A deposit transaction is derived with the following attributes: * `from`: `0x4210000000000000000000000000000000000000` * `to`: `null` * `mint`: `0` * `value`: `0` * `gasLimit`: `375,000` * `data`: `0x60806040523480156100105...` (full bytecode) * `sourceHash`: `0x877a6077205782ea15a6dc8699fa5ebcec5e0f4389f09cb8eda09488231346f8`, computed with the "Upgrade-deposited" type, with `intent = "Ecotone: L1 Block Deployment"` This results in the Ecotone L1Block contract being deployed to `0x07dbe8500fc591d1852B76feE44d5a05e13097Ff`, to verify:

```bash
cast compute-address --nonce=0 0x4210000000000000000000000000000000000000
Computed Address: 0x07dbe8500fc591d1852B76feE44d5a05e13097Ff
```

Verify `sourceHash`:

```bash
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Ecotone: L1 Block Deployment"))
# 0x877a6077205782ea15a6dc8699fa5ebcec5e0f4389f09cb8eda09488231346f8
```

Verify `data`:

```bash
git checkout 5996d0bc1a4721f2169ba4366a014532f31ea932
pnpm clean && pnpm install && pnpm build
jq -r ".bytecode.object" packages/contracts-bedrock/forge-artifacts/L1Block.sol/L1Block.json
```

This
transaction MUST deploy a contract with the following code hash `0xc88a313aa75dc4fbf0b6850d9f9ae41e04243b7008cf3eadb29256d4a71c1dfd`. #### GasPriceOracle Deployment The `GasPriceOracle` contract is upgraded to support the new Ecotone L1-data-fee parameters. Post-fork, this contract will use the blob base fee to compute the gas price for L1-data-fee transactions. A deposit transaction is derived with the following attributes: * `from`: `0x4210000000000000000000000000000000000001` * `to`: `null` * `mint`: `0` * `value`: `0` * `gasLimit`: `1,000,000` * `data`: `0x60806040523480156100...` (full bytecode) * `sourceHash`: `0xa312b4510adf943510f05fcc8f15f86995a5066bd83ce11384688ae20e6ecf42`, computed with the "Upgrade-deposited" type, with `intent = "Ecotone: Gas Price Oracle Deployment"` This results in the Ecotone GasPriceOracle contract being deployed to `0xb528D11cC114E026F138fE568744c6D45ce6Da7A`, to verify:

```bash
cast compute-address --nonce=0 0x4210000000000000000000000000000000000001
Computed Address: 0xb528D11cC114E026F138fE568744c6D45ce6Da7A
```

Verify `sourceHash`:

```bash
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Ecotone: Gas Price Oracle Deployment"))
# 0xa312b4510adf943510f05fcc8f15f86995a5066bd83ce11384688ae20e6ecf42
```

Verify `data`:

```bash
git checkout 5996d0bc1a4721f2169ba4366a014532f31ea932
pnpm clean && pnpm install && pnpm build
jq -r ".bytecode.object" packages/contracts-bedrock/forge-artifacts/GasPriceOracle.sol/GasPriceOracle.json
```

This transaction MUST deploy a contract with the following code hash `0x8b71360ea773b4cfaf1ae6d2bd15464a4e1e2e360f786e475f63aeaed8da0ae5`. #### L1Block Proxy Update This transaction updates the L1Block Proxy ERC-1967 implementation slot to point to the new L1Block deployment.
A deposit transaction is derived with the following attributes: * `from`: `0x0000000000000000000000000000000000000000` * `to`: `0x4200000000000000000000000000000000000015` (L1Block Proxy) * `mint`: `0` * `value`: `0` * `gasLimit`: `50,000` * `data`: `0x3659cfe600000000000000000000000007dbe8500fc591d1852b76fee44d5a05e13097ff` * `sourceHash`: `0x18acb38c5ff1c238a7460ebc1b421fa49ec4874bdf1e0a530d234104e5e67dbc`, computed with the "Upgrade-deposited" type, with `intent = "Ecotone: L1 Block Proxy Update"` Verify data:

```bash
cast concat-hex $(cast sig "upgradeTo(address)") $(cast abi-encode "upgradeTo(address)" 0x07dbe8500fc591d1852B76feE44d5a05e13097Ff)
# 0x3659cfe600000000000000000000000007dbe8500fc591d1852b76fee44d5a05e13097ff
```

Verify `sourceHash`:

```bash
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Ecotone: L1 Block Proxy Update"))
# 0x18acb38c5ff1c238a7460ebc1b421fa49ec4874bdf1e0a530d234104e5e67dbc
```

#### GasPriceOracle Proxy Update This transaction updates the GasPriceOracle Proxy ERC-1967 implementation slot to point to the new GasPriceOracle deployment.
A deposit transaction is derived with the following attributes: * `from`: `0x0000000000000000000000000000000000000000` * `to`: `0x420000000000000000000000000000000000000F` (Gas Price Oracle Proxy) * `mint`: `0` * `value`: `0` * `gasLimit`: `50,000` * `data`: `0x3659cfe6000000000000000000000000b528d11cc114e026f138fe568744c6d45ce6da7a` * `sourceHash`: `0xee4f9385eceef498af0be7ec5862229f426dec41c8d42397c7257a5117d9230a`, computed with the "Upgrade-deposited" type, with `intent = "Ecotone: Gas Price Oracle Proxy Update"` Verify data:

```bash
cast concat-hex $(cast sig "upgradeTo(address)") $(cast abi-encode "upgradeTo(address)" 0xb528D11cC114E026F138fE568744c6D45ce6Da7A)
# 0x3659cfe6000000000000000000000000b528d11cc114e026f138fe568744c6d45ce6da7a
```

Verify `sourceHash`:

```bash
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Ecotone: Gas Price Oracle Proxy Update"))
# 0xee4f9385eceef498af0be7ec5862229f426dec41c8d42397c7257a5117d9230a
```

#### GasPriceOracle Enable Ecotone This transaction informs the GasPriceOracle to start using the Ecotone gas calculation formula.
A deposit transaction is derived with the following attributes: * `from`: `0xDeaDDEaDDeAdDeAdDEAdDEaddeAddEAdDEAd0001` (Depositor Account) * `to`: `0x420000000000000000000000000000000000000F` (Gas Price Oracle Proxy) * `mint`: `0` * `value`: `0` * `gasLimit`: `80,000` * `data`: `0x22b90ab3` * `sourceHash`: `0x0c1cb38e99dbc9cbfab3bb80863380b0905290b37eb3d6ab18dc01c1f3e75f93`, computed with the "Upgrade-deposited" type, with `intent = "Ecotone: Gas Price Oracle Set Ecotone"` Verify data:

```bash
cast sig "setEcotone()"
# 0x22b90ab3
```

Verify `sourceHash`:

```bash
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Ecotone: Gas Price Oracle Set Ecotone"))
# 0x0c1cb38e99dbc9cbfab3bb80863380b0905290b37eb3d6ab18dc01c1f3e75f93
```

#### Beacon block roots contract deployment (EIP-4788) [EIP-4788] introduces a "Beacon block roots" contract that processes and exposes the beacon-block-root values, at address `BEACON_ROOTS_ADDRESS = 0x000F3df6D732807Ef1319fB7B8bB8522d0Beac02`. For deployment, [EIP-4788] defines a pre-[EIP-155] legacy transaction, sent from a key that is derived such that the transaction signature's validity is bound to the message-hash, which is bound to the input-data containing the init-code. However, this type of transaction requires manual deployment and gas-payments. Since the processing is an integral part of chain processing and has to be repeated for Base, the deployment is approached differently here. Some chains may already have a user-submitted instance of the [EIP-4788] transaction. This is cryptographically guaranteed to be correct, but may result in the upgrade transaction deploying a second contract, with the next nonce. The result of this deployment can be ignored. A Deposit transaction is derived with the following attributes: * `from`: `0x0B799C86a49DEeb90402691F1041aa3AF2d3C875`, as specified in the EIP.
* `to`: null * `mint`: `0` * `value`: `0` * `gasLimit`: `0x3d090`, as specified in the EIP. * `isCreation`: `true` * `data`: `0x60618060095f395ff33373fffffffffffffffffffffffffffffffffffffffe14604d57602036146024575f5ffd5b5f35801560495762001fff810690815414603c575f5ffd5b62001fff01545f5260205ff35b5f5ffd5b62001fff42064281555f359062001fff015500` * `isSystemTx`: `false`, as even these system-generated transactions spend gas. * `sourceHash`: `0x69b763c48478b9dc2f65ada09b3d92133ec592ea715ec65ad6e7f3dc519dc00c`, computed with the "Upgrade-deposited" type, with `intent = "Ecotone: beacon block roots contract deployment"` The contract address upon deployment is computed as the last 20 bytes of `keccak256(rlp([sender, nonce]))`, which will equal: * `BEACON_ROOTS_ADDRESS` if deployed * a different address (`0xE3aE1Ae551eeEda337c0BfF6C4c7cbA98dce353B`) if `nonce = 1`, i.e. when a user already submitted the EIP transaction before the upgrade. Verify `BEACON_ROOTS_ADDRESS`:

```bash
cast compute-address --nonce=0 0x0B799C86a49DEeb90402691F1041aa3AF2d3C875
# Computed Address: 0x000F3df6D732807Ef1319fB7B8bB8522d0Beac02
```

Verify `sourceHash`:

```bash
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000000000000000000000000000000002 $(cast keccak "Ecotone: beacon block roots contract deployment"))
# 0x69b763c48478b9dc2f65ada09b3d92133ec592ea715ec65ad6e7f3dc519dc00c
```

[EIP-4788]: https://eips.ethereum.org/EIPS/eip-4788
[EIP-155]: https://eips.ethereum.org/EIPS/eip-155

## Ecotone L1 Attributes ### Overview On the Ecotone activation block, and if Ecotone is not activated at Genesis, the L1 Attributes Transaction includes a call to `setL1BlockValues()` because the L1 Attributes transaction precedes the [Ecotone Upgrade Transactions][ecotone-upgrade-txs], meaning that `setL1BlockValuesEcotone` is not guaranteed to exist yet. Every subsequent L1 Attributes transaction should include a call to the `setL1BlockValuesEcotone()` function.
The input args are no longer ABI encoded function parameters, but are instead packed into five 32-byte aligned segments (starting after the function selector). Each unsigned integer argument is encoded as big-endian using a number of bytes corresponding to the underlying type. The overall calldata layout is as follows:

[ecotone-upgrade-txs]: derivation.md#network-upgrade-automation-transactions

| Input arg         | Type    | Calldata bytes | Segment |
| ----------------- | ------- | -------------- | ------- |
| {0x440a5e20}      |         | 0-3            | n/a     |
| baseFeeScalar     | uint32  | 4-7            | 1       |
| blobBaseFeeScalar | uint32  | 8-11           |         |
| sequenceNumber    | uint64  | 12-19          |         |
| l1BlockTimestamp  | uint64  | 20-27          |         |
| l1BlockNumber     | uint64  | 28-35          |         |
| basefee           | uint256 | 36-67          | 2       |
| blobBaseFee       | uint256 | 68-99          | 3       |
| l1BlockHash       | bytes32 | 100-131        | 4       |
| batcherHash       | bytes32 | 132-163        | 5       |

Total calldata length MUST be exactly 164 bytes: the 4-byte function selector followed by five full 32-byte segments. This compact encoding helps to slow database growth, as every L2 block includes an L1 Attributes deposit transaction. The Ecotone L1 attributes are first used in the first L2 block after the Ecotone activation block. The pre-Ecotone values are migrated over 1:1. Blocks after the Ecotone activation block contain all pre-Ecotone values 1:1, and also set the following new attributes: * The `baseFeeScalar` is set to the pre-Ecotone `scalar` value. * The `blobBaseFeeScalar` is set to `0`. * The pre-Ecotone `overhead` attribute is dropped. * The `blobBaseFee` is set to the L1 blob base fee of the L1 origin block, or `1` if the L1 block does not support blobs. The `1` value is derived from the EIP-4844 `MIN_BLOB_GASPRICE`. Note that the L1 blob base fee is *not* exposed as a part of the L1 origin block.
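The packed layout in the table above can be reproduced with a short sketch (the function name is ours, for illustration only; the selector `0x440a5e20` is the one shown in the table):

```python
import struct

SET_L1_BLOCK_VALUES_ECOTONE_SELECTOR = bytes.fromhex("440a5e20")

def pack_ecotone_l1_attributes(base_fee_scalar: int, blob_base_fee_scalar: int,
                               sequence_number: int, l1_timestamp: int,
                               l1_number: int, basefee: int, blob_base_fee: int,
                               l1_block_hash: bytes, batcher_hash: bytes) -> bytes:
    data = SET_L1_BLOCK_VALUES_ECOTONE_SELECTOR
    # Segment 1: two uint32s and three uint64s, big-endian, tightly packed.
    data += struct.pack(">IIQQQ", base_fee_scalar, blob_base_fee_scalar,
                        sequence_number, l1_timestamp, l1_number)
    data += basefee.to_bytes(32, "big")        # segment 2
    data += blob_base_fee.to_bytes(32, "big")  # segment 3
    data += l1_block_hash + batcher_hash       # segments 4 and 5
    assert len(data) == 164
    return data
```

The final assertion mirrors the normative requirement that the calldata is exactly 164 bytes.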
This blob base fee must be computed using a parameterized off-chain formula which takes the excess blob gas field from the header of the L1 origin block, as described in [EIP-4844](https://eips.ethereum.org/EIPS/eip-4844#base-fee-per-blob-gas-update-rule). The `BLOB_BASE_FEE_UPDATE_FRACTION` parameter in the formula varies according to which L1 fork is active at the origin block (see e.g. [EIP-7691](https://eips.ethereum.org/EIPS/eip-7691)). It is therefore necessary for L2 consensus-layer clients to know the blob parameters and activation time of each L1 fork to compute the `blobBaseFee` correctly. Blob Parameter Only (BPO) forks, introduced in [EIP-7892](https://eips.ethereum.org/EIPS/eip-7892), mean that `BLOB_BASE_FEE_UPDATE_FRACTION` may be updated frequently; clients and proof programs therefore need to stay up to date with such forks. ### L1 Attributes Predeployed Contract

[sys-config]: ../../protocol/consensus/derivation.md#system-configuration

The L1 Attributes predeploy stores the following values: * L1 block attributes: * `number` (`uint64`) * `timestamp` (`uint64`) * `basefee` (`uint256`) * `hash` (`bytes32`) * `blobBaseFee` (`uint256`) * `sequenceNumber` (`uint64`): This equals the L2 block number relative to the start of the epoch, i.e. the L2 block distance to the L2 block height at which the L1 attributes last changed; it resets to 0 at the start of a new epoch. * System configurables tied to the L1 block, see the [System configuration specification][sys-config]: * `batcherHash` (`bytes32`): A versioned commitment to the batch-submitter(s) currently operating. * `baseFeeScalar` (`uint32`): system configurable to scale the `basefee` in the Ecotone L1 cost computation * `blobBaseFeeScalar` (`uint32`): system configurable to scale the `blobBaseFee` in the Ecotone L1 cost computation The `overhead` and `scalar` values can continue to be accessed after the Ecotone activation block, but no longer have any effect on system operation.
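The off-chain formula referenced in this section is EIP-4844's `fake_exponential`. A sketch follows, with the Cancun-era `BLOB_BASE_FEE_UPDATE_FRACTION` shown as one example value (later forks change it, which is exactly the fork-awareness requirement discussed above):

```python
MIN_BLOB_GASPRICE = 1
CANCUN_BLOB_BASE_FEE_UPDATE_FRACTION = 3338477  # EIP-4844 value; varies per L1 fork

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Integer approximation of factor * e**(numerator / denominator),
    # as specified in EIP-4844.
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = numerator_accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int, update_fraction: int) -> int:
    return fake_exponential(MIN_BLOB_GASPRICE, excess_blob_gas, update_fraction)
```

With zero excess blob gas the result is the minimum of `1` wei, matching the `blobBaseFee = 1` fallback described above for L1 blocks without blob support.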
The `overhead` and `scalar` fields were also known as the `l1FeeOverhead` and the `l1FeeScalar`. After running `pnpm build` in the `packages/contracts-bedrock` directory, the bytecode to add to the genesis file will be located in the `deployedBytecode` field of the build artifacts file at `/packages/contracts-bedrock/forge-artifacts/L1Block.sol/L1Block.json`. #### Ecotone L1Block upgrade The L1 Attributes Predeployed contract, `L1Block.sol`, is upgraded as part of the Ecotone upgrade. The version is incremented to `1.2.0`, one new storage slot is introduced, and one existing slot begins to store additional data: * `blobBaseFee` (`uint256`): The L1 blob base fee. * `blobBaseFeeScalar` (`uint32`): The scalar value applied to the L1 blob base fee portion of the L1 cost. * `baseFeeScalar` (`uint32`): The scalar value applied to the L1 base fee portion of the L1 cost. The function called by the L1 attributes transaction depends on the network upgrade: * Before the Ecotone activation: * `setL1BlockValues` is called, following the pre-Ecotone L1 attributes rules. * At the Ecotone activation block: * `setL1BlockValues` function MUST be called, except if activated at genesis. The contract is upgraded later in this block, to support `setL1BlockValuesEcotone`. * After the Ecotone activation: * `setL1BlockValues` function is deprecated and MUST never be called. * `setL1BlockValuesEcotone` MUST be called with the new Ecotone attributes. `setL1BlockValuesEcotone` uses a tightly packed encoding for its parameters, which is described in [L1 Attributes Deposited Transaction Calldata](../../protocol/bridging/deposits.md#l1-attributes-deposited-transaction-calldata). ## Ecotone ### Activation Timestamps

| Network   | Activation timestamp                   |
| --------- | -------------------------------------- |
| `mainnet` | `1710374401` (2024-03-14 00:00:01 UTC) |
| `sepolia` | `1708534800` (2024-02-21 17:00:00 UTC) |

The Ecotone upgrade contains the Dencun upgrade from L1, and adopts EIP-4844 blobs for data-availability.
### Execution Layer * Cancun (Execution Layer): * [EIP-1153: Transient storage opcodes](https://eips.ethereum.org/EIPS/eip-1153) * [EIP-4844: Shard Blob Transactions](https://eips.ethereum.org/EIPS/eip-4844) * [Blob transactions are disabled](../../protocol/execution/index.md#ecotone-disable-blob-transactions) * [EIP-4788: Beacon block root in the EVM](https://eips.ethereum.org/EIPS/eip-4788) * [The L1 beacon block root is embedded into L2](../../protocol/execution/index.md#ecotone-beacon-block-root) * [The Beacon roots contract deployment is automated](../../protocol/consensus/derivation.md#ecotone-beacon-block-roots-contract-deployment-eip-4788) * [EIP-5656: MCOPY - Memory copying instruction](https://eips.ethereum.org/EIPS/eip-5656) * [EIP-6780: SELFDESTRUCT only in same transaction](https://eips.ethereum.org/EIPS/eip-6780) * [EIP-7516: BLOBBASEFEE opcode](https://eips.ethereum.org/EIPS/eip-7516) * [BLOBBASEFEE always pushes 1 onto the stack](../../protocol/execution/index.md#ecotone-disable-blob-transactions) * Deneb (Consensus Layer): *not applicable to L2* * [EIP-7044: Perpetually Valid Signed Voluntary Exits](https://eips.ethereum.org/EIPS/eip-7044) * [EIP-7045: Increase Max Attestation Inclusion Slot](https://eips.ethereum.org/EIPS/eip-7045) * [EIP-7514: Add Max Epoch Churn Limit](https://eips.ethereum.org/EIPS/eip-7514) ### Consensus Layer

[retrieval]: ../../protocol/consensus/derivation.md#ecotone-blob-retrieval
[predeploy]: l1-attributes.md#ecotone-l1block-upgrade

* Blobs Data Availability: support blobs DA in the [L1 Data-retrieval stage][retrieval].
* Rollup fee update: support blobs DA in [L1 Data Fee computation](../../protocol/execution/index.md#ecotone-l1-cost-fee-changes-eip-4844-da) * Auto-upgrading and extension of the [L1 Attributes Predeployed Contract][predeploy] (also known as `L1Block` predeploy) ## Delta ### Activation Timestamps | Network | Activation timestamp | | --------- | -------------------------------------- | | `mainnet` | `1708560000` (2024-02-22 00:00:00 UTC) | | `sepolia` | `1703203200` (2023-12-22 00:00:00 UTC) | The Delta upgrade uses an *L2 block-timestamp* activation-rule, and is specified only in the rollup-node (`delta_time`). ### Consensus Layer [span-batches]: span-batches.md The Delta upgrade consists of a single consensus-layer feature: [Span Batches][span-batches]. ## Span-batches [g-deposit-tx-type]: ../../reference/glossary.md#deposited-transaction-type [derivation]: ../../protocol/consensus/derivation.md [channel-format]: ../../protocol/consensus/derivation.md#channel-format [batch-format]: ../../protocol/consensus/derivation.md#batch-format [frame-format]: ../../protocol/consensus/derivation.md#frame-format [batch-queue]: ../../protocol/consensus/derivation.md#batch-queue [batcher]: ../../protocol/batcher.md ### Introduction Span-batch is a new batching spec that reduces overhead, introduced in the [Delta](overview.md) network upgrade. The overhead is reduced by representing a span of consecutive L2 blocks in a more efficient manner, while preserving the same consistency checks as regular batch data. Note that the [channel][channel-format] and [frame][frame-format] formats stay the same: data slicing, packing and multi-transaction transport is already optimized.
The overhead in the [V0 batch format][derivation] comes from: * The meta-data attributes are repeated for every L2 block, while these are mostly implied already: * parent hash (32 bytes) * L1 epoch: blockhash (32 bytes) and block number (\~4 bytes) * timestamp (\~4 bytes) * The organization of block data is inefficient: * Similar attributes are far apart, diminishing any chances of effective compression. * Random data like hashes are positioned in-between the more compressible application data. * The RLP encoding of the data adds unnecessary overhead * The outer list does not have to be length encoded, the attributes are known * Fixed-length attributes do not need any encoding * The batch-format is static and can be optimized further * Remaining meta-data for consistency checks can be optimized further: * The metadata only needs to be secure for consistency checks. E.g. 20 bytes of a hash may be enough. Span-batches address these inefficiencies, with a new batch format version. ### Span batch format [span-batch-format]: #span-batch-format Note that span-batches, unlike previous singular batches, encode *a range of consecutive* L2 blocks at the same time. Introduce version `1` to the [batch-format][batch-format] table: | `batch_version` | `content` | | --------------- | ------------------- | | 1 | `prefix ++ payload` | Notation: * `++`: concatenation of byte-strings * `span_start`: first L2 block in the span * `span_end`: last L2 block in the span * `uvarint`: unsigned Base128 varint, as defined in [protobuf spec] * `rlp_encode`: a function that encodes a batch according to the RLP format, and `[x, y, z]` denotes a list containing items `x`, `y` and `z` [protobuf spec]: https://protobuf.dev/programming-guides/encoding/#varints Standard bitlists, in the context of span-batches, are encoded as big-endian integers, left-padded with zeroes to the next multiple of 8 bits. 
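As an illustration of the notation above, here is a minimal Python sketch of the `uvarint` encoding and the standard bitlist encoding. The function names are ours, and mapping the first list entry to the most significant bit is an assumption of this sketch, not a normative statement; consult the reference implementation for the exact bit order.

```python
def encode_uvarint(n: int) -> bytes:
    """Unsigned Base128 varint, as in the protobuf spec: 7 payload bits
    per byte, least-significant group first, continuation bit 0x80 set
    on every byte except the last."""
    out = bytearray()
    while True:
        group = n & 0x7F
        n >>= 7
        if n:
            out.append(group | 0x80)  # more groups follow
        else:
            out.append(group)
            return bytes(out)


def encode_bitlist(bits: list[int]) -> bytes:
    """Encode a bitlist as a big-endian integer, left-padded with zeroes
    to the next multiple of 8 bits. NOTE: placing bits[0] in the most
    significant position is an illustrative choice of this sketch."""
    value = 0
    for bit in bits:
        value = (value << 1) | (bit & 1)
    n_bytes = (len(bits) + 7) // 8  # round up to whole bytes
    return value.to_bytes(n_bytes, "big")
```

For example, `encode_uvarint(300)` yields `b'\xac\x02'`, matching the varint example in the protobuf encoding guide, and a 9-bit list is padded out to 2 bytes.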
Where: * `prefix = rel_timestamp ++ l1_origin_num ++ parent_check ++ l1_origin_check` * `rel_timestamp`: `uvarint` relative timestamp since L2 genesis, i.e. `span_start.timestamp - config.genesis.timestamp`. * `l1_origin_num`: `uvarint` block number of the last L1 origin, i.e. `span_end.l1_origin.number`. * `parent_check`: the first 20 bytes of the parent hash; the hash is truncated to 20 bytes for efficiency, i.e. `span_start.parent_hash[:20]`. * `l1_origin_check`: the first 20 bytes of the block hash of the last L1 origin; the hash is truncated to 20 bytes for efficiency, i.e. `span_end.l1_origin.hash[:20]`. * `payload = block_count ++ origin_bits ++ block_tx_counts ++ txs`: * `block_count`: `uvarint` number of L2 blocks. This is at least 1; empty span batches are invalid. * `origin_bits`: standard bitlist of `block_count` bits: 1 bit per L2 block, indicating whether the L1 origin changed for that L2 block. * `block_tx_counts`: for each block, a `uvarint` of `len(block.transactions)`. * `txs`: the L2 transactions, reorganized and encoded as described below. * `txs = contract_creation_bits ++ y_parity_bits ++ tx_sigs ++ tx_tos ++ tx_datas ++ tx_nonces ++ tx_gases ++ protected_bits` * `contract_creation_bits`: standard bitlist of `sum(block_tx_counts)` bits: 1 bit per L2 transaction, indicating whether the transaction is a contract creation. * `y_parity_bits`: standard bitlist of `sum(block_tx_counts)` bits: 1 bit per L2 transaction, giving the y parity value used when recovering the transaction sender address. * `tx_sigs`: concatenated list of transaction signatures * `r` is encoded as big-endian `uint256` * `s` is encoded as big-endian `uint256` * `tx_tos`: concatenated list of `to` fields. The `to` field of a contract creation transaction is `nil` and is omitted. * `tx_datas`: concatenated list of variable-length RLP encoded data, matching the encoding of the fields as in the [EIP-2718] format of the `TransactionType`.
* `legacy`: `rlp_encode(value, gasPrice, data)` * `1`: ([EIP-2930]): `0x01 ++ rlp_encode(value, gasPrice, data, accessList)` * `2`: ([EIP-1559]): `0x02 ++ rlp_encode(value, max_priority_fee_per_gas, max_fee_per_gas, data, access_list)` * `tx_nonces`: concatenated list of `uvarint` of `nonce` field. * `tx_gases`: concatenated list of `uvarint` of gas limits. * `legacy`: `gasLimit` * `1`: ([EIP-2930]): `gasLimit` * `2`: ([EIP-1559]): `gas_limit` * `protected_bits`: standard bitlist with 1 bit per legacy L2 transaction, indicating whether the transaction is replay-protected ([EIP-155]). [EIP-2718]: https://eips.ethereum.org/EIPS/eip-2718 [EIP-2930]: https://eips.ethereum.org/EIPS/eip-2930 [EIP-1559]: https://eips.ethereum.org/EIPS/eip-1559 [EIP-155]: https://eips.ethereum.org/EIPS/eip-155 #### Span Batch Size Limits The total size of an encoded span batch is limited to `MAX_RLP_BYTES_PER_CHANNEL`, which is defined in the [Protocol Parameters table](../../protocol/consensus/derivation.md#protocol-parameters). This is enforced at the channel level rather than at the span batch level. In addition to the byte limit, the total number of blocks and transactions is limited to `MAX_SPAN_BATCH_ELEMENT_COUNT`. This implies that the max number of transactions per block is also `MAX_SPAN_BATCH_ELEMENT_COUNT`. `MAX_SPAN_BATCH_ELEMENT_COUNT` is defined in the [Protocol Parameters table](../../protocol/consensus/derivation.md#protocol-parameters). #### Future batch-format extension This is an experimental extension of the span-batch format and is not activated with the Delta upgrade.
Introduce version `2` to the [batch-format][batch-format] table: | `batch_version` | `content` | | --------------- | ------------------- | | 2 | `prefix ++ payload` | Where: * `prefix = rel_timestamp ++ l1_origin_num ++ parent_check ++ l1_origin_check`: * Identical to `batch_version` 1 * `payload = block_count ++ origin_bits ++ block_tx_counts ++ txs ++ fee_recipients`: * An empty span-batch, i.e. with `block_count == 0`, is invalid and must not be processed. * Every field definition is identical to `batch_version` 1, except that `fee_recipients` is added to support more decentralized sequencing. * `fee_recipients = fee_recipients_idxs ++ fee_recipients_set` * `fee_recipients_set`: concatenated list of unique L2 fee recipient addresses. * `fee_recipients_idxs`: for each block, a `uvarint` index used to look up the block's fee recipient in `fee_recipients_set`. ### Span Batch Activation Rule The span batch upgrade is activated based on timestamp. Activation Rule: `upgradeTime != null && span_start.l1_origin.timestamp >= upgradeTime` `span_start.l1_origin.timestamp` is the L1 origin block timestamp of the first block in the span batch. This rule ensures that all chain activity involving the span batch occurs after the hard fork: every block in the span is created, submitted to L1, and derived from L1 after the hard fork. ### Optimization Strategies #### Truncating information and storing only necessary data The following fields store truncated data: * `rel_timestamp`: We can save two bytes by storing `rel_timestamp` instead of the full `span_start.timestamp`. * `parent_check` and `l1_origin_check`: We can save twelve bytes by truncating twelve bytes from the full hash, while retaining enough safety. #### `tx_data_headers` removal from initial specs We do not need to store a length for each `tx_datas` element, even though they are variable length, because each element is RLP encoded and carries its length in its RLP prefix.
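To illustrate why no per-element headers are needed, here is a minimal Python sketch that reads one element's total length from its RLP prefix. This is illustrative only, not the reference implementation: it covers the standard RLP prefix ranges plus the [EIP-2718] type byte that precedes typed transaction payloads.

```python
def rlp_item_length(buf: bytes, i: int = 0) -> int:
    """Total encoded length (prefix + payload) of the RLP item at buf[i]."""
    b = buf[i]
    if b < 0x80:                       # single byte encodes itself
        return 1
    if b < 0xB8:                       # short string: 0-55 byte payload
        return 1 + (b - 0x80)
    if b < 0xC0:                       # long string: length-of-length follows
        ll = b - 0xB7
        return 1 + ll + int.from_bytes(buf[i + 1:i + 1 + ll], "big")
    if b < 0xF8:                       # short list: 0-55 byte payload
        return 1 + (b - 0xC0)
    ll = b - 0xF7                      # long list
    return 1 + ll + int.from_bytes(buf[i + 1:i + 1 + ll], "big")


def tx_data_length(buf: bytes, i: int = 0) -> int:
    """Length of one tx_data element. Typed transactions (EIP-2718) carry
    a single type byte (<= 0x7F) before the RLP payload; legacy
    transactions are a bare RLP list, whose first byte is >= 0xC0."""
    if buf[i] <= 0x7F:
        return 1 + rlp_item_length(buf, i + 1)
    return rlp_item_length(buf, i)
```

A decoder can therefore walk a concatenated `tx_datas` buffer element by element, advancing by `tx_data_length` each time, with no separate length headers.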
#### `Chain ID` removal from initial specs Every transaction has a chain ID. We do not need to include it in the span batch, because the L2 already knows its chain ID and uses its own value when processing span batches during derivation. #### Reorganization of constant length transaction fields The `signature`, `nonce`, `gaslimit`, and `to` fields are constant size, so they are split out completely and grouped into individual arrays. This adds more complexity, but organizes the data for improved compression by grouping fields with similar data patterns. #### RLP encoding for only variable length fields Further size optimization can be done by packing variable length fields, such as `access_list`. However, doing so would introduce much more code complexity for the size reduction gained. Our goal is to find the sweet spot in the code complexity / span batch size tradeoff. We decided that using RLP for all variable length fields is the best option, rather than risking the codebase with gnarly custom encoding/decoding implementations. #### Store `y_parity` and `protected_bit` instead of `v` Only legacy type transactions can be optionally protected. If protected ([EIP-155]), `v = 2 * ChainID + 35 + y_parity`. Else, `v = 27 + y_parity`. For other types of transactions, `v = y_parity`. We store `y_parity`, which is a single bit per L2 transaction, and `protected_bit`, which is a single bit per legacy L2 transaction indicating whether the transaction is protected. This optimization benefits more as the ratio of legacy transactions to all non-deposit transactions increases. Deposit transactions are excluded from batches and are never written to L1, so they are excluded from this analysis. #### Adjust `txs` Data Layout for Better Compression There are (8 choose 2) \* 6! = 20160 permutations of ordering the fields of `txs`. It is not 8! because `contract_creation_bits` must be decoded first in order to decode `tx_tos`.
We experimented with different data layouts and found that segregating random data (`tx_sigs`, `tx_tos`, `tx_datas`) from the rest most improved the zlib compression ratio. #### `fee_recipients` Encoding Scheme Let `K` := the number of unique fee recipients (cardinality) per span batch. Let `N` := the number of L2 blocks. If we naively encode the fee recipients by concatenating every per-block address, it requires `20 * N` bytes. If we instead maintain `fee_recipients_idxs` and `fee_recipients_set`, they require at most `max uvarint size * N = 8 * N` and `20 * K` bytes respectively. If `20 * N > 8 * N + 20 * K`, then maintaining an index of fee recipients reduces the size. Since sequencer rotation is expected to be infrequent, we assumed that `K` will be much smaller than `N`, which makes the inequality hold. Therefore, we decided to manage `fee_recipients_idxs` and `fee_recipients_set` separately. This adds complexity but reduces data. ### How Derivation works with Span Batches * Block Timestamp * The first L2 block's block timestamp is `rel_timestamp + L2Genesis.Timestamp`. * Then we can derive the other blocks' timestamps by adding the L2 block time for each. * L1 Origin Number * The parent of the first L2 block's L1 origin number is `l1_origin_num - sum(origin_bits)` * Then we can derive the other blocks' L1 origin numbers with `origin_bits` * `i-th block's L1 origin number = (i-1)th block's L1 origin number + (origin_bits[i] ? 1 : 0)` * L1 Origin Hash * We only need the `l1_origin_check`, the truncated L1 origin hash of the last L2 block of the span batch. * If the last block references the canonical L1 chain as its origin, we can ensure all other blocks' origins are consistent with the canonical L1 chain. * Parent hash * In the V0 batch spec, we need the batch's parent hash to validate that the batch's parent is consistent with the current L2 safe head.
* But in the case of a span batch, because it contains consecutive L2 blocks, we do not need to validate any block's parent hash except the first block's. * Transactions * Deposit transactions can be derived from their L1 origin, identical to the V0 batch. * User transactions can be derived in the following way: * Recover the `v` value of the transaction signature from `y_parity_bits` and the L2 chain ID, as described in the optimization strategies. * When parsing `tx_tos`, `contract_creation_bits` is used to determine whether the transaction has a `to` value. ### Integration #### Channel Reader (Batch Decoding) The Channel Reader decodes the span-batch, as described in the [span-batch format](#span-batch-format). A set of derived attributes is computed as described above, then cached with the decoded result. #### Batch Queue A span-batch is buffered as a singular large batch, by its starting timestamp (transformed `rel_timestamp`). Span-batches share the same queue with v0 batches: batches are processed in L1 inclusion order. A set of modified validation rules applies to the span-batches. Rules are enforced with the same [contextual definitions][batch-queue] as v0-batch validation: `epoch`, `inclusion_block_number`, `next_timestamp`. Definitions: * `batch` as defined in the [Span batch format section][span-batch-format]. * `prev_l2_block` is the L2 block from the current safe chain whose timestamp is at `span_start.timestamp - l2_block_time` Span-batch rules, in validation order: * `batch_origin` is determined like with singular batches: * `batch.epoch_num == epoch.number+1`: * If `next_epoch` is not known -> `undecided`: i.e. a batch that changes the L1 origin cannot be processed until we have the L1 origin data. * If known, then define `batch_origin` as `next_epoch` * `batch_origin.timestamp < span_batch_upgrade_timestamp` -> `drop`: i.e. enforce the [span batch upgrade activation rule](#span-batch-activation-rule). * `span_start.timestamp > next_timestamp` -> `future`: i.e.
the batch must be ready to process, but does not have to start exactly at the `next_timestamp`, since it can overlap with previously processed blocks. * `span_end.timestamp < next_timestamp` -> `drop`: i.e. the batch must have at least one new block to process. * If there's no `prev_l2_block` in the current safe chain -> `drop`: i.e. the timestamp must be aligned. * `batch.parent_check != prev_l2_block.hash[:20]` -> `drop`: i.e. the checked part of the parent hash must be equal to the same part of the corresponding L2 block hash. * Sequencing-window checks: * Note: The sequencing window is enforced for the *batch as a whole*: if the batch were partially invalid instead, it would drop the oldest L2 blocks, which would make the later L2 blocks invalid. * Variables: * `origin_changed_bit = origin_bits[0]`: `true` if the first L2 block changed its L1 origin, `false` otherwise. * `start_epoch_num = batch.l1_origin_num - sum(origin_bits) + (origin_changed_bit ? 1 : 0)` * `end_epoch_num = batch.l1_origin_num` * Rules: * `start_epoch_num + sequence_window_size < inclusion_block_number` -> `drop`: i.e. the batch must be included timely. * `start_epoch_num > prev_l2_block.l1_origin.number + 1` -> `drop`: i.e. the L1 origin cannot change by more than one L1 block per L2 block. * If `batch.l1_origin_check` does not match the canonical L1 chain at `end_epoch_num` -> `drop`: verify the batch is intended for this L1 chain. * Once the above `l1_origin_check` passes, we do not need to check whether the origin is past `inclusion_block_number`, because of the following invariant. * Invariant: the epoch-num in the batch is always less than the inclusion block number, if and only if the L1 epoch hash is correct.
* `start_epoch_num < prev_l2_block.l1_origin.number` -> `drop`: the epoch number cannot be older than the origin of the parent block * Max Sequencer time-drift & other L1 origin checks: * Note: The max time-drift is enforced for the *batch as a whole*, to keep the possible output variants small. * Variables: * `block_input`: an L2 block from the span-batch, with its L1 origin as derived from the `origin_bits` and the now-established canonical L1 chain. * `next_epoch`: `block_input.origin`'s next L1 block. It may reach the next origin outside the L1 origins of the span. * Rules: * For each `block_input` whose timestamp is greater than `safe_head.timestamp`: * `block_input.l1_origin.number < safe_head.l1_origin.number` -> `drop`: enforce increasing L1 origins. * `block_input.timestamp < block_input.origin.time` -> `drop`: enforce the min L2 timestamp rule. * `block_input.timestamp > block_input.origin.time + max_sequencer_drift`: enforce the L2 timestamp drift rule, but with exceptions to preserve the above min L2 timestamp invariant: * `len(block_input.transactions) == 0`: * `origin_bits[i] == 0`: `i` is the index of `block_input` in the span batch. This implies the `block_input` did not advance the L1 origin, so it must be checked against `next_epoch`. * If `next_epoch` is not known -> `undecided`: without the next L1 origin we cannot yet determine whether the time invariant could have been kept. * If `block_input.timestamp >= next_epoch.time` -> `drop`: the batch could have adopted the next L1 origin without breaking the `L2 time >= L1 time` invariant. * `len(block_input.transactions) > 0`: -> `drop`: when exceeding the sequencer time drift, never allow the sequencer to include transactions.
* And for all transactions: * `drop` if the `batch.tx_datas` list contains a transaction that is invalid or derived by other means exclusively: * any transaction that is empty (zero length `tx_data`) * any [deposited transactions][g-deposit-tx-type] (identified by the transaction type prefix byte in `tx_data`) * any transaction of a future type > 2 (note that [Isthmus adds support](../isthmus/derivation.md#activation) for `SetCode` transactions of type 4) * Overlapped blocks checks: * Note: If the span batch overlaps the current L2 safe chain, we must validate all overlapped blocks. * Variables: * `block_input`: an L2 block derived from the span-batch. * `safe_block`: an L2 block from the current L2 safe chain, at the same timestamp as `block_input` * Rules: * For each `block_input` whose timestamp is less than `next_timestamp`: * `block_input.l1_origin.number != safe_block.l1_origin.number` -> `drop` * `block_input.transactions != safe_block.transactions` -> `drop` * compare excluding deposit transactions Once validated, the batch-queue then emits a block-input for each of the blocks included in the span-batch. The next derivation stage is thus only aware of individual block inputs, similar to the previous V0 batch, although not strictly a "v0 batch" anymore. #### Batcher Instead of transforming L2 blocks into batches, the blocks should be buffered to form a span-batch. Ideally the L2 blocks are buffered as block-inputs, to maximize the span of blocks covered by the span-batch: span-batches of single L2 blocks do not increase efficiency as much as larger spans do. This means that the `(c *channelBuilder) AddBlock` function is changed to not directly call `(co *ChannelOut) AddBatch` but to defer that until a minimum number of blocks have been buffered. Output-size estimation of the queued-up blocks is not possible until the span-batch is written to the channel.
Past a given number of blocks, the channel may be written for estimation, and then re-written if more blocks arrive. The [batcher functionality][batcher] stays the same otherwise: unsafe blocks are transformed into batches, encoded in compressed channels, and then split into frames for submission to L1. Batcher implementations can implement different heuristics and retries to build the most gas-efficient data transactions. ## Canyon ### Activation Timestamps | Network | Activation timestamp | | --------- | -------------------------------------- | | `mainnet` | `1704992401` (2024-01-11 17:00:01 UTC) | | `sepolia` | `1699981200` (2023-11-14 17:00:00 UTC) | [eip3651]: https://eips.ethereum.org/EIPS/eip-3651 [eip3855]: https://eips.ethereum.org/EIPS/eip-3855 [eip3860]: https://eips.ethereum.org/EIPS/eip-3860 [eip4895]: https://eips.ethereum.org/EIPS/eip-4895 [eip6049]: https://eips.ethereum.org/EIPS/eip-6049 [block-validation]: ../../protocol/consensus/p2p.md#block-validation [payload-attributes]: ../../protocol/consensus/derivation.md#building-individual-payload-attributes [1559-params]: ../../protocol/execution/index.md#1559-parameters [channel-reading]: ../../protocol/consensus/derivation.md#reading [deposit-reading]: ../../protocol/bridging/deposits.md#deposit-receipt [create2deployer]: ../../protocol/execution/evm/predeploys.md#create2deployer The Canyon upgrade contains the Shapella upgrade from L1 and some minor protocol fixes. The Canyon upgrade uses an *L2 block-timestamp* activation-rule, and is specified in both the rollup-node (`canyon_time`) and execution engine (`config.canyonTime`). The Shanghai time in the execution engine should be set to the same time as the Canyon time.
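The L2 block-timestamp activation rule described above can be sketched as follows. This is a simplified illustration; the function and parameter names are ours, not taken from any implementation.

```python
from typing import Optional


def is_fork_active(fork_time: Optional[int], block_timestamp: int) -> bool:
    """L2 block-timestamp activation rule: a fork such as Canyon is active
    for a block iff an activation time is configured (not None) and the
    block's timestamp is at or past that time."""
    return fork_time is not None and block_timestamp >= fork_time
```

For example, with Canyon's mainnet activation time `1704992401`, a block at timestamp `1704992400` is pre-Canyon, while a block at `1704992401` is post-Canyon.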
### Execution Layer * Shapella Upgrade * [EIP-3651: Warm COINBASE][eip3651] * [EIP-3855: PUSH0 instruction][eip3855] * [EIP-3860: Limit and meter initcode][eip3860] * [EIP-4895: Beacon chain push withdrawals as operations][eip4895] * [Withdrawals are prohibited in P2P Blocks][block-validation] * [Withdrawals should be set to the empty array with Canyon][payload-attributes] * [EIP-6049: Deprecate SELFDESTRUCT][eip6049] * [Modifies the EIP-1559 Denominator][1559-params] * [Adds the deposit nonce & deposit nonce version to the deposit receipt hash][deposit-reading] * [Deploys the create2Deployer to `0x13b0D85CcB8bf860b6b79AF3029fCA081AE9beF2`][create2deployer] ### Consensus Layer * [Channel Ordering Fix][channel-reading] ## Configuration There are four categories of Base configuration: * **Consensus Parameters**: Fixed at genesis or changeable through privileged accounts or protocol upgrades. * **Policy Parameters**: Changeable without breaking consensus, within protocol-imposed constraints. * **Admin Roles**: Accounts that can upgrade contracts, change role owners, or update protocol parameters. Typically cold/multisig wallets. * **Service Roles**: Accounts used for day-to-day operations. Typically hot wallets. 
### Consensus Parameters | Parameter | Description | Administrator | | ----------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- | | [Batch Inbox Address](glossary.md#batch-inbox) | L1 address where [batcher transactions](glossary.md#batcher-transaction) are posted | Static | | [Batcher Hash](glossary.md#batcher-hash) | Versioned hash of the authorized batcher sender(s) | [System Config Owner](#admin-roles) | | Chain ID | Unique chain ID for transaction signature validation | Static | | [Proof Maturity Delay](../protocol/fault-proof/stage-one/bridge-integration.md#fpac-optimismportal-mods-specification) | Time between proving and finalizing a withdrawal. 7 days. | [L1 Proxy Admin](#admin-roles) | | [Dispute Game Finality](../protocol/fault-proof/stage-one/bridge-integration.md#fpac-optimismportal-mods-specification) | Time for `Guardian` to [blacklist a game](../protocol/fault-proof/stage-one/bridge-integration.md#blacklisting-disputegames) before withdrawals finalize. 3.5 days. | [L1 Proxy Admin](#admin-roles) | | [Respected Game Type](../protocol/fault-proof/stage-one/bridge-integration.md#new-state-variables) | Game type `OptimismPortal` accepts for withdrawal finalization. `CANNON` (`0`); may fall back to `PERMISSIONED_CANNON` (`1`). | [Guardian](#service-roles) | | [Fault Game Max Depth](../protocol/fault-proof/stage-one/fault-dispute-game.md#game-tree) | Maximum depth of fault dispute game trees. 73. | Static | | [Fault Game Split Depth](../protocol/fault-proof/stage-one/fault-dispute-game.md#game-tree) | Depth after which claims correspond to VM state commitments. 30. 
| Static | | [Max Game Clock Duration](../protocol/fault-proof/stage-one/fault-dispute-game.md#max_clock_duration) | Maximum time on a dispute game team's chess clock. 3.5 days. | Static | | [Game Clock Extension](../protocol/fault-proof/stage-one/fault-dispute-game.md#clock_extension) | Clock credit when a team's remaining time falls below `CLOCK_EXTENSION`. 3 hours. | Static | | [Bond Withdrawal Delay](../protocol/fault-proof/stage-one/bond-incentives.md#delay-period) | Time before dispute game bonds can be withdrawn. 7 days. | Static | | [Min Large Preimage Size](../protocol/fault-proof/stage-one/fault-dispute-game.md#preimageoracle-interaction) | Minimum preimage size for the `PreimageOracle` large proposal process. 126,000 bytes. | Static | | [Large Preimage Challenge Period](../protocol/fault-proof/stage-one/fault-dispute-game.md#preimageoracle-interaction) | Challenge window before large preimage proposals are published. 24 hours. | Static | | [Fault Game Absolute Prestate](../protocol/fault-proof/stage-one/fault-dispute-game.md#execution-trace) | VM state commitment used as the fault proof VM starting point | Static | | [Fault Game Genesis Block](../protocol/fault-proof/stage-one/fault-dispute-game.md#anchor-state) | Initial [anchor state](../protocol/fault-proof/stage-one/fault-dispute-game.md#anchor-state) block number. Any finalized block between bedrock and fault proof activation; `0` from genesis. | Static | | [Fault Game Genesis Output Root](../protocol/fault-proof/stage-one/fault-dispute-game.md#anchor-state) | Output root at the Fault Game Genesis Block | Static | | [Fee Scalar](glossary.md#fee-scalars) | Markup on transactions relative to raw L1 data cost. Fee margin between 0%–50%. | [System Config Owner](#admin-roles) | | [Gas Limit](../protocol/consensus/derivation.md#system-configuration) | L2 block gas limit. ≤ 200,000,000 gas. 
| [System Config Owner](#admin-roles) | | [Genesis State](../protocol/execution/evm/predeploys.md#overview) | Initial chain state including all predeploy code and storage. Standard predeploys and preinstalls only. | Static | | L2 Block Time | Interval at which L2 blocks are produced via [derivation](../protocol/consensus/derivation.md). 1 or 2 seconds. | [L1 Proxy Admin](#admin-roles) | | [Sequencing Window Size](glossary.md#sequencing-window) | Max batch submission gap before L1 fallback triggers. 3,600 L1 blocks (12 hours at 12s L1 block time). | Static | | Start Block | L1 block where `SystemConfig` was first initialized | [L1 Proxy Admin](#admin-roles) | | Superchain Target | `SuperchainConfig` and `ProtocolVersions` addresses for cross-L2 config. Mainnet or Sepolia. | Static | | Governance Token | OP governance token. Disabled. | n/a | | [Operator Fee Params](../upgrades/isthmus/exec-engine.md#operator-fee) | Operator fee scalar and constant for fee calculation. Standard values are 0; non-zero for non-standard configurations such as op-succinct. | [System Config Owner](#admin-roles) | | [DA Footprint Gas Scalar](../upgrades/jovian/exec-engine.md#DA-footprint-block-limit) | Scalar for DA footprint calculation | [System Config Owner](#admin-roles) | | [Minimum Base Fee](../upgrades/jovian/exec-engine.md#minimum-base-fee) | Minimum base fee on L2 | [System Config Owner](#admin-roles) | ### Policy Parameters | Parameter | Description | Administrator | | ---------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | | [Data Availability Type](glossary.md#data-availability-provider) | Whether the batcher posts data as blobs or calldata. Ethereum (Blobs or Calldata); Alt-DA not supported. 
| [Batch Submitter](#service-roles) | | Batch Submission Frequency | Frequency of [batcher transaction](glossary.md#batcher-transaction) submissions to L1. ≤ 1,800 L1 blocks (6 hours at 12s L1 block time). | [Batch Submitter](#service-roles) | | Output Frequency | Frequency of output root submissions to L1. ≤ 43,200 L2 blocks (24 hours at 2s L2 block time); must be non-zero. Deprecated once fault proofs are enabled. | [L1 Proxy Admin](#admin-roles) | ### Admin Roles | Role | Description | Administers | | ------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ | | L1 Proxy Admin | `ProxyAdmin` from the latest `op-contracts` release, authorized to upgrade L1 contracts | L1 contracts | | L1 ProxyAdmin Owner | Authorized to update the L1 Proxy Admin. [0x5a0Aae59D09fccBdDb6C6CcEB07B7279367C3d2A](https://etherscan.io/address/0x5a0Aae59D09fccBdDb6C6CcEB07B7279367C3d2A) | [L1 Proxy Admin](#admin-roles) | | L2 Proxy Admin | `ProxyAdmin` at `0x4200000000000000000000000000000000000018`, authorized to upgrade L2 contracts | [Predeploys](../protocol/execution/evm/predeploys.md#overview) | | L2 ProxyAdmin Owner | [Aliased](glossary.md#address-aliasing) L1 ProxyAdmin Owner; upgrades L2 contracts via `ProxyAdmin`. 
[0x6B1BAE59D09fCcbdDB6C6cceb07B7279367C4E3b](https://optimistic.etherscan.io/address/0x6B1BAE59D09fCcbdDB6C6cceb07B7279367C4E3b) | [L2 Proxy Admin](#admin-roles) | | [System Config Owner](../protocol/consensus/derivation.md#system-configuration) | Authorized to change values in the `SystemConfig` contract | [Batch Submitter](#service-roles), [Sequencer P2P Signer](#service-roles), Fee Scalar, Gas Limit | ### Service Roles | Role | Description | Administrator | | --------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- | | [Batch Submitter](glossary.md#batcher) | Authenticates batches submitted to L1 | [System Config Owner](#admin-roles) | | [Challenger](../protocol/fault-proof/stage-one/bridge-integration.md#permissioned-faultdisputegame) | Interacts with permissioned dispute games. Active only when respected game type is `PERMISSIONED_CANNON`. [0x9BA6e03D8B90dE867373Db8cF1A58d2F7F006b3A](https://etherscan.io/address/0x9BA6e03D8B90dE867373Db8cF1A58d2F7F006b3A) | [L1 Proxy Admin](#admin-roles) | | Guardian | Pauses L1 withdrawals, blacklists dispute games, sets respected game type in `OptimismPortal`. [0x09f7150D8c019BeF34450d6920f6B3608ceFdAf2](https://etherscan.io/address/0x09f7150D8c019BeF34450d6920f6B3608ceFdAf2) | [L1 Proxy Admin](#admin-roles) | | [Proposer](../protocol/fault-proof/stage-one/bridge-integration.md#permissioned-faultdisputegame) | Creates permissioned dispute games on L1. Active only when respected game type is `PERMISSIONED_CANNON`. 
| [L1 Proxy Admin](#admin-roles) | | [Sequencer P2P Signer](glossary.md#unsafe-block-signer) | Signs unsafe/pre-submitted blocks at the P2P layer | [System Config Owner](#admin-roles) | ## Glossary ### General Terms #### Layer 1 (L1) [L1]: glossary.md#layer-1-L1 Refers to the Ethereum blockchain, used in contrast to [layer 2][L2], which refers to Base. #### Layer 2 (L2) [L2]: glossary.md#layer-2-L2 Refers to Base Chain (specified in this repository), used in contrast to [layer 1][L1], which refers to the Ethereum blockchain. #### Block [block]: glossary.md#block Can refer to an [L1] block, or to an [L2] block, which are structured similarly. A block is a sequential list of transactions, along with a couple of properties stored in the *header* of the block. A description of these properties can be found in code comments [here][nano-header], or in the [Ethereum yellow paper (pdf)][yellow], section 4.3. It is useful to distinguish between input block properties, which are known before executing the transactions in the block, and output block properties, which are derived after executing the block's transactions. These include various [Merkle Patricia Trie roots][mpt] that notably commit to the L2 state and to the log events emitted during execution. #### EOA [EOA]: glossary.md#EOA "Externally Owned Account", an Ethereum term to designate addresses operated by users, as opposed to contract addresses. #### Merkle Patricia Trie [mpt]: glossary.md#merkle-patricia-trie A [Merkle Patricia Trie (MPT)][mpt-details] is a sparse trie, which is a tree-like structure that maps keys to values. The root hash of an MPT is a commitment to the contents of the tree, which allows a proof to be constructed for any key-value mapping encoded in the tree. Such a proof is called a Merkle proof, and can be verified against the Merkle root.
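The commitment property described above can be illustrated with a simplified *binary* Merkle tree (the real structure is a hexary Merkle Patricia Trie keyed by paths, and Ethereum uses Keccak-256 rather than SHA-256; this sketch only shows how a Merkle proof is checked against a root):

```python
import hashlib

def h(data: bytes) -> bytes:
    # Simplified: SHA-256 stands in for the Keccak-256 hash used on Ethereum.
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root of a binary Merkle tree over the given leaves (power-of-two count)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    """Sibling hashes from the leaf at `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        proof.append(level[index ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, index: int, proof: list[bytes]) -> bool:
    """Recompute the path to the root; a match proves the leaf is committed."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [b"a", b"b", b"c", b"d"]
root = merkle_root(leaves)
assert verify(root, b"c", 2, merkle_proof(leaves, 2))
assert not verify(root, b"x", 2, merkle_proof(leaves, 2))
```

The same idea underlies the state, transaction, and receipt roots in block headers: the 32-byte root commits to the whole structure, and any key-value mapping can be proven with a logarithmic-size proof.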
#### Chain Re-Organization [reorg]: glossary.md#chain-re-organization A re-organization, or re-org for short, is whenever the head of a blockchain (its last block) changes (as dictated by the [fork choice rule][fork-choice-rule]) to a block that is not a child of the previous head. L1 re-orgs can happen because of network conditions or attacks. L2 re-orgs are a consequence of L1 re-orgs, mediated via [L2 chain derivation][derivation]. #### Predeployed Contract ("Predeploy") [predeploy]: glossary.md#predeployed-contract-predeploy A contract placed in the L2 genesis state (i.e. at the start of the chain). All predeploy contracts are specified in the [predeploys specification](../protocol/execution/evm/predeploys.md). #### Preinstalled Contract ("Preinstall") [preinstall]: glossary.md#preinstalled-contract-preinstall A contract placed in the L2 genesis state (i.e. at the start of the chain). These contracts do not share the same security guarantees as [predeploys](#predeployed-contract-predeploy), but are general use contracts made available to improve the L2's UX. All preinstall contracts are specified in the [preinstalls specification](../protocol/execution/evm/preinstalls.md). #### Precompiled Contract ("Precompile") [precompile]: glossary.md#precompiled-contract-precompile A contract implemented natively in the EVM that performs a specific operation more efficiently than a bytecode (e.g. Solidity) implementation. Precompiles exist at predefined addresses. They are created and modified through network upgrades. All precompile contracts are specified in the [precompiles specification](../protocol/execution/evm/precompiles.md). #### Receipt [receipt]: glossary.md#receipt A receipt is an output generated by a transaction, comprising a status code, the amount of gas used, a list of log entries, and a [bloom filter] indexing these entries. Log entries are most notably used to encode [Solidity events]. 
Receipts are not stored in blocks, but blocks store a [Merkle Patricia Trie root][mpt] for a tree containing the receipt for every transaction in the block. Receipts are specified in the [yellow paper (pdf)][yellow] section 4.3.1. #### Transaction Type [transaction-type]: glossary.md#transaction-type Ethereum provides a mechanism (as described in [EIP-2718]) for defining different transaction types. Different transaction types can contain different payloads, and be handled differently by the protocol. [EIP-2718]: https://eips.ethereum.org/EIPS/eip-2718 #### Fork Choice Rule [fork-choice-rule]: glossary.md#fork-choice-rule The fork choice rule is the rule used to determine which block is to be considered as the head of a blockchain. On L1, this is determined by the proof of stake rules. L2 also has a fork choice rule, although the rules vary depending on whether we want the [safe L2 head][safe-l2-head], the [unsafe L2 head][unsafe-l2-head] or the [finalized L2 head][finalized-l2-head]. #### Priority Gas Auction Transactions in Ethereum are ordered by the price that the transaction pays to the miner. Priority Gas Auctions (PGAs) occur when multiple parties are competing to be the first transaction in a block. Each party continuously updates the gas price of their transaction. PGAs occur when there is value in submitting a transaction before other parties (like being the first deposit, or submitting a deposit before there is no more guaranteed gas remaining). PGAs tend to have negative externalities on the network due to a large number of transactions being submitted in a very short amount of time.
### Sequencing [sequencing]: glossary.md#sequencing Transactions in the rollup can be included in two ways: * Through a [deposited transaction](#deposited-transaction), enforced by the system * Through a regular transaction, embedded in a [sequencer batch](#sequencer-batch) Submitting transactions for inclusion in a batch saves costs by reducing overhead, and enables the sequencer to pre-confirm the transactions before the L1 confirms the data. #### Sequencer [sequencer]: glossary.md#sequencer A sequencer is either a [rollup node][rollup-node] run in sequencer mode, or the operator of this rollup node. The sequencer is a privileged actor: it receives L2 transactions from L2 users, creates L2 blocks from them, and submits those blocks to a [data availability provider][avail-provider] (via a [batcher]). It also submits [output roots][l2-output] to L1. #### Sequencing Window [sequencing-window]: glossary.md#sequencing-window A sequencing window is a range of L1 blocks from which a [sequencing epoch][sequencing-epoch] can be derived. A sequencing window whose first L1 block has number `N` contains [batcher transactions][batcher-transaction] for epoch `N`. The window contains blocks `[N, N + SWS)` where `SWS` is the sequencer window size. The current default `SWS` is 3600 L1 blocks. Additionally, the first block in the window defines the [depositing transactions][depositing-tx] which determine the [deposits] to be included in the first L2 block of the epoch. #### Sequencing Epoch [sequencing-epoch]: glossary.md#sequencing-epoch A sequencing epoch is a sequential range of L2 blocks derived from a [sequencing window](#sequencing-window) of L1 blocks. Each epoch is identified by an epoch number, which is equal to the block number of the first L1 block in the sequencing window. Epochs can have variable size, subject to some constraints. See the [L2 chain derivation specification][derivation-spec] for more details.
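The sequencing-window arithmetic above can be sketched as follows (illustrative, not normative; `sequencing_window` is a hypothetical helper name):

```python
SWS = 3600  # default sequencer window size noted above

def sequencing_window(epoch_number: int, sws: int = SWS) -> range:
    """L1 blocks [N, N + SWS) whose batcher transactions belong to epoch N."""
    return range(epoch_number, epoch_number + sws)

window = sequencing_window(19_000_000)
assert window[0] == 19_000_000    # the first block also fixes the epoch's deposits
assert window[-1] == 19_003_599   # last block inside the window
assert 19_003_600 not in window   # exclusive upper bound
```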
#### L1 Origin [l1-origin]: glossary.md#l1-origin The L1 origin of an L2 block is the L1 block corresponding to its [sequencing epoch][sequencing-epoch]. ### Deposits [deposits]: glossary.md#deposits In general, a deposit is an L2 transaction derived from an L1 block (by the [rollup driver]). While transaction deposits are notably (but not only) used to "deposit" (bridge) ETH and tokens to L2, the word *deposit* should be understood as "a transaction *deposited* to L2 from L1". This term *deposit* is somewhat ambiguous as these "transactions" exist at multiple levels. This section disambiguates all deposit-related terms. Notably, a *deposit* can refer to: * A [deposited transaction][deposited] (on L2) that is part of a deposit block. * A [depositing call][depositing-call] that causes a [deposited transaction][deposited] to be derived. * The event/log data generated by the [depositing call][depositing-call], which is what the [rollup driver] reads to derive the [deposited transaction][deposited]. We sometimes also talk about a *user deposit*, a similar term that explicitly excludes [L1 attributes deposited transactions][l1-attr-deposit]. Deposits are specified in the [deposits specification][deposits-spec]. #### Deposited Transaction [deposited]: glossary.md#deposited-transaction A *deposited transaction* is an L2 transaction that was derived from L1 and included in an L2 block. There are two kinds of deposited transactions: * [L1 attributes deposited transaction][l1-attr-deposit], which submits the L1 block's attributes to the [L1 Attributes Predeployed Contract][l1-attr-predeploy]. * [User-deposited transactions][user-deposited], which are transactions derived from an L1 call to the [deposit contract][deposit-contract].
#### L1 Attributes Deposited Transaction [l1-attr-deposit]: glossary.md#l1-attributes-deposited-transaction An *L1 attributes deposited transaction* is a [deposited transaction][deposited] that is used to register the L1 block attributes (number, timestamp, ...) on L2 via a call to the [L1 Attributes Predeployed Contract][l1-attr-predeploy]. That contract can then be used to read the attributes of the L1 block corresponding to the current L2 block. L1 attributes deposited transactions are specified in the [L1 Attributes Deposit][l1-attributes-tx-spec] section of the deposits specification. [l1-attributes-tx-spec]: ../protocol/bridging/deposits.md#l1-attributes-deposited-transaction #### User-Deposited Transaction [user-deposited]: glossary.md#user-deposited-transaction A *user-deposited transaction* is a [deposited transaction][deposited] which is derived from an L1 call to the [deposit contract][deposit-contract] (a [depositing call][depositing-call]). User-deposited transactions are specified in the [Transaction Deposits][tx-deposits-spec] section of the deposits specification. [tx-deposits-spec]: ../protocol/bridging/deposits.md#user-deposited-transactions #### Depositing Call [depositing-call]: glossary.md#depositing-call A *depositing call* is an L1 call to the [deposit contract][deposit-contract], which will be derived into a [user-deposited transaction][user-deposited] by the [rollup driver]. This call specifies all the data (destination, value, calldata, ...) for the deposited transaction. #### Depositing Transaction [depositing-tx]: glossary.md#depositing-transaction A *depositing transaction* is an L1 transaction that makes one or more [depositing calls][depositing-call]. #### Depositor [depositor]: glossary.md#depositor The *depositor* is the L1 account (contract or [EOA]) that makes (is the `msg.sender` of) the [depositing call][depositing-call]. The *depositor* is **NOT** the originator of the depositing transaction (i.e. `tx.origin`).
#### Deposited Transaction Type [deposit-tx-type]: glossary.md#deposited-transaction-type The *deposited transaction type* is an [EIP-2718] [transaction type][transaction-type], which specifies the input fields and correct handling of a [deposited transaction][deposited]. See the [corresponding section][spec-deposit-tx-type] of the deposits spec for more information. [spec-deposit-tx-type]: ../protocol/bridging/deposits.md#the-deposited-transaction-type #### Deposit Contract [deposit-contract]: glossary.md#deposit-contract The *deposit contract* is an [L1] contract to which [EOAs][EOA] and contracts may send [deposits]. The deposits are emitted as log records (in Solidity, these are called *events*) for consumption by [rollup nodes][rollup-node]. Advanced note: the deposits are not stored in calldata because they can be sent by contracts, in which case the calldata is part of the *internal* execution between contracts, and this intermediate calldata is not captured in one of the [Merkle Patricia Trie roots][mpt] included in the L1 block. cf. [Deposits Specification][deposits-spec] ### Withdrawals > **TODO** expand this whole section to be clearer [withdrawals]: glossary.md#withdrawals In general, a withdrawal is a transaction sent from L2 to L1 that may transfer data and/or value. The term *withdrawal* is somewhat ambiguous as these "transactions" exist at multiple levels. In order to differentiate between the L1 and L2 components of a withdrawal we introduce the following terms: * A *withdrawal initiating transaction* refers specifically to a transaction on L2 sent to the Withdrawals predeploy. * A *withdrawal finalizing transaction* refers specifically to an L1 transaction which finalizes and relays the withdrawal. #### Relayer [relayer]: glossary.md#withdrawals An EOA on L1 which finalizes a withdrawal by submitting the data necessary to verify its inclusion on L2. 
#### Finalization Period [finalization-period]: glossary.md#finalization-period The finalization period — sometimes also called *withdrawal delay* — is the minimum amount of time (in seconds) that must elapse before a [withdrawal][withdrawals] can be finalized. The finalization period is necessary to afford sufficient time for [validators][validator] to make a [fault proof][fault-proof]. > **TODO** specify current value for finalization period ### Configuration #### Batch Inbox [batch-inbox]: glossary.md#batch-inbox The **Batch Inbox** is the address that Sequencer transaction batches are published to. Sequencers publish transactions to the Batch Inbox by setting it as the `to` address on a transaction containing batched L2 transactions either in calldata or as blobdata. #### Batcher Hash [batcher-hash]: glossary.md#batcher-hash The **Batcher Hash** identifies the sender(s) whose transactions to the [Batch Inbox](#batch-inbox) will be recognized by the L2 clients for a given Base chain. The Batcher Hash is versioned by the first byte of the hash. The structure of the V0 Batcher Hash is a 32 byte hash defined as follows: | 1 byte | 11 bytes | 20 bytes | | -------------- | -------- | -------- | | version (0x00) | empty | address | This can also be understood as: ```solidity bytes32(address(batcher)) ``` Where `batcher` is the address of the account that sends transactions to the Batch Inbox. Put simply, the V0 hash identifies a *single* address whose transaction batches will be recognized by L2 clients. This hash is versioned so that it could, for instance, be repurposed to be a commitment to a list of permitted accounts or some other form of batcher identification. #### Fee Scalars [fee-scalars]: glossary.md#fee-scalars The **Fee Scalars** are parameters used to calculate the L1 data fee for L2 transactions. These parameters are also known as Gas Price Oracle (GPO) parameters. 
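The V0 Batcher Hash layout described above (version byte, 11 empty bytes, 20-byte address) can be sketched as follows (illustrative only; the helper names are hypothetical):

```python
def encode_batcher_hash(batcher_address: bytes) -> bytes:
    """V0 batcher hash: 0x00 version byte, 11 zero bytes, 20-byte address."""
    assert len(batcher_address) == 20
    return b"\x00" + b"\x00" * 11 + batcher_address

def decode_batcher_hash(batcher_hash: bytes) -> bytes:
    """Recover the single permitted batcher address from a V0 hash."""
    assert len(batcher_hash) == 32 and batcher_hash[0] == 0x00
    assert batcher_hash[1:12] == b"\x00" * 11  # padding must be empty
    return batcher_hash[12:]

addr = bytes.fromhex("42" * 20)
encoded = encode_batcher_hash(addr)
assert len(encoded) == 32
assert decode_batcher_hash(encoded) == addr
```

The version byte is what makes a future repurposing possible: a new version could, for example, interpret the remaining 31 bytes as a commitment to a list of permitted accounts.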
##### Pre-Ecotone Parameters Before the Ecotone upgrade, these include: * **Scalar**: A multiplier applied to the L1 base fee, interpreted as a big-endian `uint256` * **Overhead**: A constant gas overhead, interpreted as a big-endian `uint256` ##### Post-Ecotone Parameters After the Ecotone upgrade: * The **Scalar** attribute encodes additional scalar information in a versioned encoding scheme * The **Overhead** value is ignored and does not affect the L2 state-transition output ##### Post-Ecotone Scalar Encoding The Scalar is encoded as big-endian `uint256`, interpreted as `bytes32`, and composed as follows: * Byte `0`: scalar-version byte * Bytes `[1, 32)`: depending on scalar-version: * Scalar-version `0`: * Bytes `[1, 28)`: padding, should be zero * Bytes `[28, 32)`: big-endian `uint32`, encoding the L1-fee `baseFeeScalar` * This version implies the L1-fee `blobBaseFeeScalar` is set to 0 * If there are non-zero bytes in the padding area, `baseFeeScalar` must be set to MaxUint32 * Scalar-version `1`: * Bytes `[1, 24)`: padding, must be zero * Bytes `[24, 28)`: big-endian `uint32`, encoding the `blobBaseFeeScalar` * Bytes `[28, 32)`: big-endian `uint32`, encoding the `baseFeeScalar` The `baseFeeScalar` corresponds to the share of the user-transaction (per byte) in the total regular L1 EVM gas usage consumed by the data-transaction of the batch-submitter. For blob transactions, this is the fixed intrinsic gas cost of the L1 transaction. The `blobBaseFeeScalar` corresponds to the share of a user-transaction (per byte) in the total blobdata that is introduced by the data-transaction of the batch-submitter. #### Unsafe Block Signer [unsafe-block-signer]: glossary.md#unsafe-block-signer The **Unsafe Block Signer** is an Ethereum address whose corresponding private key is used to sign "unsafe" blocks before they are published to L1. 
This signature allows nodes in the P2P network to recognize these blocks as the canonical unsafe blocks, preventing denial of service attacks on the P2P layer. To ensure that its value can be fetched with a storage proof in a storage layout independent manner, it is stored at a special storage slot corresponding to `keccak256("systemconfig.unsafeblocksigner")`. Unlike other system config parameters, the Unsafe Block Signer only operates on blockchain policy and is not a consensus level parameter. #### L2 Gas Limit [l2-gas-limit]: glossary.md#l2-gas-limit The **L2 Gas Limit** defines the maximum amount of gas that can be used in a single L2 block. This parameter ensures that L2 blocks remain of reasonable size to be processed and proven. Changes to the L2 gas limit are fully applied in the first L2 block with the L1 origin that introduced the change. The gas limit may not be set to a value larger than the [maximum gas limit](../protocol/consensus/derivation.md#system-configuration). This is to ensure that L2 blocks are provable and can be processed by consensus and execution software. ### Batch Submission [batch-submission]: glossary.md#batch-submission #### Data Availability [data-availability]: glossary.md#data-availability Data availability is the guarantee that some data will be "available" (i.e. *retrievable*) during a reasonably long time window. In Base's case, the data in question are [sequencer batches][sequencer-batch] that [validators][validator] need in order to verify the sequencer's work and validate the L2 chain. The [finalization period][finalization-period] should be taken as the lower bound on the availability window, since that is when data availability is the most crucial, as it is needed to perform a [fault proof][fault-proof]. "Availability" **does not** mean guaranteed long-term storage of the data. 
#### Data Availability Provider [avail-provider]: glossary.md#data-availability-provider A data availability provider is a service that can be used to make data available. See the [Data Availability][data-availability] section for more information on what this means. Ideally, a good data availability provider provides strong *verifiable* guarantees of data availability. At present, the supported data availability providers include Ethereum call data and blob data. #### Sequencer Batch [sequencer-batch]: glossary.md#sequencer-batch A sequencer batch is a list of L2 transactions (that were submitted to a sequencer) tagged with an [epoch number](#sequencing-epoch) and an L2 block timestamp (which can trivially be converted to a block number, given our block time is constant). Sequencer batches are part of the [L2 derivation inputs][deriv-inputs]. Each batch represents the inputs needed to build **one** L2 block (given the existing L2 chain state) — except for the first block of each epoch, which also needs information about deposits (cf. the section on [L2 derivation inputs][deriv-inputs]). #### Channel [channel]: glossary.md#channel A channel is a sequence of [sequencer batches][sequencer-batch] (for sequential blocks) compressed together. The reason to group multiple batches together is simply to obtain a better compression rate, hence reducing data availability costs. A channel can be split into [frames][channel-frame] in order to be transmitted via [batcher transactions][batcher-transaction]. The reason to split a channel into frames is that a channel might be too large to include in a single batcher transaction. A channel is uniquely identified by its timestamp (UNIX time at which the channel was created) and a random value. See the [Frame Format][frame-format] section of the L2 Chain Derivation specification for more information.
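The channel-to-frame split described above can be sketched as follows (illustrative only; the actual frame encoding is defined in the Frame Format section, and the per-frame size budget here is a made-up placeholder):

```python
from dataclasses import dataclass

MAX_FRAME_DATA = 100_000  # hypothetical per-frame data budget for this sketch

@dataclass
class Frame:
    channel_id: bytes  # serialized (timestamp, random value) identifier
    frame_number: int
    data: bytes
    is_last: bool      # explicitly marks the channel's final frame

def split_channel(channel_id: bytes, compressed: bytes,
                  max_size: int = MAX_FRAME_DATA) -> list[Frame]:
    """Chunk a compressed channel into numbered frames; the last is marked."""
    chunks = [compressed[i:i + max_size]
              for i in range(0, len(compressed), max_size)] or [b""]
    return [Frame(channel_id, n, chunk, n == len(chunks) - 1)
            for n, chunk in enumerate(chunks)]

frames = split_channel(b"\x01" * 16, b"\xab" * 250_000)
assert len(frames) == 3
assert frames[-1].is_last and not frames[0].is_last
assert b"".join(f.data for f in frames) == b"\xab" * 250_000
```

A reader (the rollup node) does the reverse: it collects frames by channel id, and once the final frame has been seen, reassembles and decompresses the channel back into batches.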
[frame-format]: ../protocol/consensus/derivation.md#frame-format On the side of the [rollup node][rollup-node] (which is the consumer of channels), a channel is considered to be *opened* if its final frame (explicitly marked as such) has not been read, or closed otherwise. #### Channel Frame [channel-frame]: glossary.md#channel-frame A channel frame is a chunk of data belonging to a [channel]. [Batcher transactions][batcher-transaction] carry one or multiple frames. The reason to split a channel into frames is that a channel might be too large to include in a single batcher transaction. #### Batcher [batcher]: glossary.md#batcher A batcher is a software component (independent program) that is responsible for making channels available on a data availability provider. The batcher communicates with the rollup node in order to retrieve the channels. The channels are then made available using [batcher transactions][batcher-transaction]. > **TODO** In the future, we might want to make the batcher responsible for constructing the channels, letting it only > query the rollup node for L2 block inputs. #### Batcher Transaction [batcher-transaction]: glossary.md#batcher-transaction A batcher transaction is a transaction submitted by a [batcher] to a data availability provider, in order to make channels available. These transactions carry one or more full frames, which may belong to different channels. A channel's frames may be split between multiple batcher transactions. When submitted as Ethereum calldata, the batcher transaction's receiver must be the sequencer inbox address. The transaction must also be signed by a recognized batch submitter account. The recognized batch submitter account is stored in the [System Configuration][system-config]. #### Batch submission frequency Within the [sequencing window][sequencing-window] constraints, the batcher is permitted by the protocol to submit L2 blocks for data-availability at any time.
The batcher software allows for dynamic policy configuration by its operator. The rollup enforces safety and liveness guarantees through the sequencing window in case the batcher does not submit data within the allotted time. By submitting new L2 data in smaller, more frequent steps, there is less delay in confirmation of the L2 block inputs. This allows verifiers to ensure safety of L2 blocks sooner. This also reduces the time to finality of the data on L1, and thus the time to L2 input-finality. By submitting new L2 data in larger, less frequent steps, there is more time to aggregate more L2 data, and thus reduce the fixed overhead of the batch-submission work. This can reduce batch-submission costs, especially for lower throughput chains that do not fill data-transactions (typically 128 KB of calldata, or 800 KB of blobdata) as quickly. #### Channel Timeout [channel-timeout]: glossary.md#channel-timeout The channel timeout is a duration (in L1 blocks) during which [channel frames][channel-frame] may land on L1 within [batcher transactions][batcher-transaction]. The acceptable time range for the frames of a [channel][channel] is `[channel_id.timestamp, channel_id.timestamp + CHANNEL_TIMEOUT]`. The acceptable L1 block range for these frames is any L1 block whose timestamp falls inside this time range. (Note that `channel_id.timestamp` must be lower than the L1 block timestamp of any L1 block in which frames of the channel are seen, or else these frames are ignored.) The purpose of channel timeouts is twofold: * Avoid keeping old unclosed channel data around forever (an unclosed channel is a channel whose final frame was not sent). * Bound the number of L1 blocks we have to look back in order to decode [sequencer batches][sequencer-batch] from channels. This is particularly relevant during L1 re-orgs, see the [Resetting Channel Buffering][reset-channel-buffer] section of the L2 Chain Derivation specification for more information.
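The frame-acceptance window above can be expressed as a predicate (illustrative sketch; the actual `CHANNEL_TIMEOUT` value is left unspecified by this document, so the value below is a placeholder):

```python
def frame_accepted(channel_timestamp: int, l1_block_timestamp: int,
                   channel_timeout: int) -> bool:
    """A frame is accepted if its L1 block's timestamp falls within
    [channel_timestamp, channel_timestamp + channel_timeout], and the
    channel id's timestamp is strictly lower than that L1 block timestamp."""
    in_range = (channel_timestamp <= l1_block_timestamp
                <= channel_timestamp + channel_timeout)
    return in_range and channel_timestamp < l1_block_timestamp

CHANNEL_TIMEOUT = 300  # placeholder; the real value is a TODO in the text

assert frame_accepted(1_000, 1_001, CHANNEL_TIMEOUT)
assert frame_accepted(1_000, 1_300, CHANNEL_TIMEOUT)
assert not frame_accepted(1_000, 1_000, CHANNEL_TIMEOUT)  # not strictly later
assert not frame_accepted(1_000, 1_301, CHANNEL_TIMEOUT)  # past the timeout
```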
[reset-channel-buffer]: ../protocol/consensus/derivation.md#resetting-channel-buffering > **TODO** specify `CHANNEL_TIMEOUT` ### L2 Output Root Proposals [l2-output-root-proposals]: glossary.md#l2-output-root-proposals #### Proposer [proposer]: glossary.md#proposer The proposer's role is to construct and submit output roots, which are commitments to the L2's state, to the L2OutputOracle contract on L1 (the settlement layer). To do this, the proposer periodically queries the rollup node for the latest output root derived from the latest finalized L1 block. It then takes the output root and submits it to the L2OutputOracle contract on the settlement layer (L1). ### L2 Chain Derivation [derivation]: glossary.md#L2-chain-derivation L2 chain derivation is a process that reads [L2 derivation inputs][deriv-inputs] from L1 in order to derive the L2 chain. See the [L2 chain derivation specification][derivation-spec] for more details. #### L2 Derivation Inputs [deriv-inputs]: glossary.md#l2-derivation-inputs This term refers to data that is found in L1 blocks and is read by the [rollup node][rollup-node] to construct [payload attributes][payload-attr]. L2 derivation inputs include: * L1 block attributes * block number * timestamp * basefee * blob base fee * [deposits] (as log data) * [sequencer batches][sequencer-batch] (as transaction data) * [System configuration][system-config] updates (as log data) #### System Configuration This term refers to the collection of dynamically configurable rollup parameters maintained by the [`SystemConfig`](../protocol/consensus/derivation.md#system-configuration) contract on L1 and read by the L2 [derivation] process. These parameters enable keys to be rotated regularly and external cost parameters to be adjusted without the network upgrade overhead of a hardfork. See the [System Configuration](../protocol/consensus/derivation.md#system-configuration) section for a full overview. 
#### Payload Attributes [payload-attr]: glossary.md#payload-attributes This term refers to an object that can be derived from [L2 chain derivation inputs][deriv-inputs] found on L1, which are then passed to the [execution engine][execution-engine] to construct L2 blocks. The payload attributes object essentially encodes [a block without output properties][block]. Payload attributes are originally specified in the [Ethereum Engine API specification][engine-api], which we expand in the [Execution Engine Specification][exec-engine]. See also the [Building The Payload Attributes][building-payload-attr] section of the rollup node specification. [building-payload-attr]: ../protocol/consensus/index.md#building-the-payload-attributes #### L2 Genesis Block [l2-genesis]: glossary.md#l2-genesis-block The L2 genesis block is the first block of the L2 chain in its current version. The state of the L2 genesis block comprises: * State inherited from the previous version of the L2 chain. * This state was possibly modified by "state surgeries". For instance, the migration to Bedrock entailed changes on how native ETH balances were stored in the storage trie. * [Predeployed contracts][predeploy] The timestamp of the L2 genesis block must be a multiple of the [block time][block-time] (i.e. an even number, since the block time is 2 seconds). When updating the rollup protocol to a new version, we may perform a *squash fork*, a process that entails the creation of a new L2 genesis block. This new L2 genesis block will have block number `X + 1`, where `X` is the block number of the final L2 block before the update. A squash fork is not to be confused with a *re-genesis*, a similar process that we employed in the past, which also resets L2 block numbers, such that the new L2 genesis block has number 0. We will not employ re-genesis in the future. Squash forks are superior to re-geneses because they avoid duplicating L2 block numbers, which breaks many external tools.
#### L2 Chain Inception [l2-chain-inception]: glossary.md#L2-chain-inception The L1 block number at which the output roots for the [genesis block][l2-genesis] were proposed on the [output oracle][output-oracle] contract. In the current implementation, this is the L1 block number at which the output oracle contract was deployed or upgraded. #### Safe L2 Block [safe-l2-block]: glossary.md#safe-l2-block A safe L2 block is an L2 block that can be derived entirely from L1 by a [rollup node][rollup-node]. This can vary between different nodes, based on their view of the L1 chain. #### Safe L2 Head [safe-l2-head]: glossary.md#safe-l2-head The safe L2 head is the highest [safe L2 block][safe-l2-block] that a [rollup node][rollup-node] knows about. #### Unsafe L2 Block [unsafe-l2-block]: glossary.md#unsafe-l2-block An unsafe L2 block is an L2 block that a [rollup node][rollup-node] knows about, but which was not derived from the L1 chain. In sequencer mode, this will be a block sequenced by the sequencer itself. In validator mode, this will be a block acquired from the sequencer via [unsafe sync][unsafe-sync]. #### Unsafe L2 Head [unsafe-l2-head]: glossary.md#unsafe-l2-head The unsafe L2 head is the highest [unsafe L2 block][unsafe-l2-block] that a [rollup node][rollup-node] knows about. #### Unsafe Block Consolidation [consolidation]: glossary.md#unsafe-block-consolidation Unsafe block consolidation is the process through which the [rollup node][rollup-node] attempts to move the [safe L2 head][safe-l2-head] a block forward, so that the oldest [unsafe L2 block][unsafe-l2-block] becomes the new safe L2 head. In order to perform consolidation, the node verifies that the [payload attributes][payload-attr] derived from the L1 chain match the oldest unsafe L2 block exactly. See the [Engine Queue section][engine-queue] of the L2 chain derivation spec for more information. 
[engine-queue]: ../protocol/consensus/derivation.md#engine-queue #### Finalized L2 Head [finalized-l2-head]: glossary.md#finalized-l2-head The finalized L2 head is the highest L2 block that can be derived from *[finalized][finality]* L1 blocks — i.e. L1 blocks older than two L1 epochs (64 L1 [time slots][time-slot]). [finality]: https://hackmd.io/@prysmaticlabs/finality ### Other L2 Chain Concepts #### Address Aliasing [address-aliasing]: glossary.md#address-aliasing When a contract submits a [deposit][deposits] from L1 to L2, its address (as returned by `ORIGIN` and `CALLER`) will be aliased with a modified representation of the address of the contract. * cf. [Deposit Specification](../protocol/bridging/deposits.md#address-aliasing) #### Rollup Node [rollup-node]: glossary.md#rollup-node The rollup node is responsible for [deriving the L2 chain][derivation] from the L1 chain (L1 [blocks][block] and their associated [receipts][receipt]). The rollup node can run either in *validator* or *sequencer* mode. In sequencer mode, the rollup node receives L2 transactions from users, which it uses to create L2 blocks. These are then submitted to a [data availability provider][avail-provider] via [batch submission][batch-submission]. The L2 chain derivation then acts as a sanity check and a way to detect L1 chain [re-orgs][reorg]. In validator mode, the rollup node performs derivation as indicated above, but is also able to "run ahead" of the L1 chain by getting blocks directly from the sequencer, in which case derivation serves to validate the sequencer's behavior. A rollup node running in validator mode is sometimes called *a replica*. > **TODO** expand this to include output root submission See the [rollup node specification][rollup-node-spec] for more information.
#### Rollup Driver [rollup driver]: glossary.md#rollup-driver The rollup driver is the [rollup node][rollup-node] component responsible for [deriving the L2 chain][derivation] from the L1 chain (L1 [blocks][block] and their associated [receipts][receipt]). > **TODO** delete this entry, alongside its reference — can be replaced by "derivation process" or "derivation logic" > where needed #### L1 Attributes Predeployed Contract [l1-attr-predeploy]: glossary.md#l1-attributes-predeployed-contract A [predeployed contract][predeploy] on L2 that can be used to retrieve the L1 block attributes of L1 blocks with a given block number or a given block hash. cf. [L1 Attributes Predeployed Contract Specification](../protocol/bridging/deposits.md#l1-attributes-predeployed-contract) #### L2 Output Root [l2-output]: glossary.md#l2-output-root A 32 byte value which serves as a commitment to the current state of the L2 chain. cf. [Proposer](../protocol/fault-proof/proposer.md) #### L2 Output Oracle Contract [output-oracle]: glossary.md#l2-output-oracle-contract An L1 contract to which [L2 output roots][l2-output] are posted by the [sequencer]. #### Validator [validator]: glossary.md#validator A validator is an entity (individual or organization) that runs a [rollup node][rollup-node] in validator mode. Doing so grants many of the same benefits as running an Ethereum node, such as the ability to simulate L2 transactions locally, without rate limiting. It also lets the validator verify the work of the [sequencer], by re-deriving [output roots][l2-output] and comparing them against those submitted by the sequencer. In case of a mismatch, the validator can perform a [fault proof][fault-proof]. #### Fault Proof [fault-proof]: glossary.md#fault-proof An on-chain *interactive* proof, performed by [validators][validator], that demonstrates that a [sequencer] provided erroneous [output roots][l2-output]. cf.
[Fault Proofs](../protocol/fault-proof/index.md) #### Time Slot [time-slot]: glossary.md#time-slot On L2, there is a block every 2 seconds (this duration is known as the [block time][block-time]). We say that there is a "time slot" every multiple of 2s after the timestamp of the [L2 genesis block][l2-genesis]. On L1, post-[merge], the time slots are every 12s. However, an L1 block may not be produced for every time slot, in case of even benign consensus issues. #### Block Time [block-time]: glossary.md#block-time The L2 block time is 2 seconds, meaning there is an L2 block at every 2s [time slot][time-slot]. Post-[merge], it could be said that the L1 block time is 12s as that is the L1 [time slot][time-slot]. However, in reality the block time is variable as some time slots might be skipped. Pre-merge, the L1 block time is variable, though it is on average 13s. #### Unsafe Sync [unsafe-sync]: glossary.md#unsafe-sync Unsafe sync is the process through which a [validator][validator] learns about [unsafe L2 blocks][unsafe-l2-block] from the [sequencer][sequencer]. These unsafe blocks will later need to be confirmed by the L1 chain (via [unsafe block consolidation][consolidation]). ### Execution Engine Concepts #### Execution Engine [execution-engine]: glossary.md#execution-engine The execution engine is responsible for executing transactions in blocks and computing the resulting state roots, receipts roots and block hash. Both L1 (post-[merge]) and L2 have an execution engine. On L1, the executed blocks can come from L1 block synchronization; or from a block freshly minted by the execution engine (using transactions from the L1 [mempool]), at the request of the L1 consensus layer. On L2, the executed blocks are freshly minted by the execution engine at the request of the [rollup node][rollup-node], using transactions [derived from L1 blocks][derivation]. In these specifications, "execution engine" always refers to the L2 execution engine, unless otherwise specified.
* cf. [Execution Engine Specification][exec-engine] [deposits-spec]: ../protocol/bridging/deposits.md [system-config]: ../protocol/consensus/derivation.md#system-configuration [exec-engine]: ../protocol/execution/index.md [derivation-spec]: ../protocol/consensus/derivation.md [rollup-node-spec]: ../protocol/consensus/index.md [mpt-details]: https://github.com/norswap/nanoeth/blob/d4c0c89cc774d4225d16970aa44c74114c1cfa63/src/com/norswap/nanoeth/trees/patricia/README.md [trie]: https://en.wikipedia.org/wiki/Trie [bloom filter]: https://en.wikipedia.org/wiki/Bloom_filter [Solidity events]: https://docs.soliditylang.org/en/latest/contracts.html?highlight=events#events [nano-header]: https://github.com/norswap/nanoeth/blob/cc5d94a349c90627024f3cd629a2d830008fec72/src/com/norswap/nanoeth/blocks/BlockHeader.java#L22-L156 [yellow]: https://ethereum.github.io/yellowpaper/paper.pdf [engine-api]: https://github.com/ethereum/execution-apis/blob/main/src/engine/shanghai.md#PayloadAttributesV2 [merge]: https://ethereum.org/en/eth2/merge/ [mempool]: https://www.quicknode.com/guides/defi/how-to-access-ethereum-mempool [L1 consensus layer]: https://github.com/ethereum/consensus-specs/#readme [cannon]: https://github.com/ethereum-optimism/cannon [eip4844]: https://www.eip4844.com/ ## Batcher [derivation spec]: consensus/derivation.md ### Overview The batcher, also referred to as the batch submitter, is the entity responsible for posting L2 sequencer data to L1, making it available to the derivation pipeline operated by verifiers. The format of batcher transactions — channels, frames, and batches within them — is defined in the [derivation spec]: the data is constructed from L2 blocks in the reverse order from which it is derived back into L2 blocks. Only data that conforms to those rules will be accepted as valid from the verifier's perspective. 
The batcher observes the gap between the unsafe L2 head (the latest sequenced block) and the safe L2 head (the latest block confirmed on L1 through derivation). Any unsafe L2 blocks that have not yet been confirmed must be encoded and submitted. The batcher encodes L2 blocks into channels, fragments channels into frames, and posts frames as L1 transactions. The derivation pipeline then reads those frames, reassembles channels, decodes batches, and reconstructs the original L2 blocks. The timing and transaction signing are implementation-specific: data can be submitted at any time, but only data that matches the [derivation spec] rules will be valid from the verifier's perspective. The safe and unsafe L2 heads do not update instantly after data is submitted or confirmed on L1, so a batcher implementation must take care not to duplicate data submissions. ### Channel Lifecycle A channel is the unit of encoding used by the batcher. It is an ordered, compressed sequence of RLP-encoded L2 block batches. A channel is opened when there are L2 blocks awaiting submission and no channel is currently open. At most one channel may be open at any time; a new channel must not be opened until the previous one has been fully closed and all its frames have been submitted to L1. A channel accumulates L2 block batches in strictly increasing block number order until one of the following closure conditions is met. A channel must close when adding the next batch would cause the compressed output size to exceed the maximum blob data capacity, ensuring that no frame will carry a payload too large for its data availability target. A channel must also close when continued accumulation would cause the total uncompressed RLP byte length of its batches to exceed `max_rlp_bytes_per_channel`, a protocol limit that protects verifiers against decompression amplification.
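As an illustrative sketch, the two size-based closure conditions above can be expressed as a single check. The helper and its parameter ordering are invented here; only the limit names come from the spec:

```python
# Hypothetical helper expressing the two size-based channel-closure
# conditions described above. Parameter names mirror the spec's limits;
# the function itself is illustrative, not a canonical implementation.
def must_close_before_adding(compressed_estimate_with_batch: int,
                             rlp_bytes_with_batch: int,
                             max_blob_payload: int,
                             max_rlp_bytes_per_channel: int) -> bool:
    # Close (and withhold the candidate batch) if adding it would push
    # either the compressed output past the blob capacity or the total
    # uncompressed RLP past the decompression-safety limit.
    return (compressed_estimate_with_batch > max_blob_payload
            or rlp_bytes_with_batch > max_rlp_bytes_per_channel)
```

Both checks are applied prospectively, before the batch is committed to the channel, so the channel never contains an overflowing batch.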
In both cases, the batch that would have caused the overflow is withheld from the current channel; the channel is closed, and that batch becomes the first entry of the next channel. A channel must additionally close on timeout: if the L1 chain advances more than `max_channel_duration` L1 blocks beyond the block at which the channel was opened, the channel must be closed and its frames posted immediately. This prevents channels from staying open indefinitely and ensures that verifiers — who drop any channel not completed within the `channel_timeout` window — do not discard the data. When a channel closes, its compressed data is partitioned into fixed-size frames. Each frame carries at most `max_frame_size` bytes of compressed payload plus per-frame header overhead. The resulting frames are queued for submission to L1 in order. The channel's block range — the contiguous interval of L2 block numbers it covers — is fixed upon closing and must not change. ### Frame Production and Ordering Each frame carries a header identifying the channel it belongs to via a 16-byte channel ID, its position within the channel as a monotonically increasing 16-bit frame number beginning at zero, the length of its compressed payload, and a boolean flag indicating whether it is the last frame in the channel. The first frame of each channel additionally carries a single version byte identifying the compression codec; all subsequent frames consist entirely of compressed payload with no such prefix. Frames within a channel must be submitted to L1 in sequential order. Frame `N` must appear on L1 no later than frame `N+1`. The derivation pipeline may tolerate out-of-order frame delivery in some configurations, but from the Holocene hardfork onward it drops any non-first frame whose frame number is not exactly one greater than the previous frame received for that channel, and drops any new first frame whose predecessor channel has not yet been closed. 
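The frame header layout described above (16-byte channel ID, big-endian 16-bit frame number, payload length, trailing `is_last` flag) can be sketched as a serializer. The helper name is invented; the per-channel version byte is handled separately and is not included here:

```python
import struct

def encode_frame(channel_id: bytes, frame_number: int,
                 payload: bytes, is_last: bool) -> bytes:
    """Serialize one frame as: 16-byte channel ID, big-endian uint16
    frame number, big-endian uint32 payload length, compressed payload,
    then a 1-byte is_last flag. Field ordering follows the frame layout
    described in the text; treat this as a sketch, not a canonical codec.
    """
    assert len(channel_id) == 16, "channel ID must be 16 bytes"
    return (channel_id
            + struct.pack(">H", frame_number)   # position within channel
            + struct.pack(">I", len(payload))   # compressed payload length
            + payload
            + bytes([1 if is_last else 0]))     # final-frame marker
```

A channel's frames are produced by calling this once per `max_frame_size` slice of the compressed channel data, with `is_last=True` only on the final slice.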
After Holocene activation, strict in-order delivery is required for correctness. The `is_last` flag must be set to true on exactly the final frame of a channel and false on all preceding frames. A verifier considers a channel complete only when a frame with `is_last` set is received. Any channel that never receives its final frame within the `channel_timeout` window is discarded by the verifier. ### Data Availability The batcher posts frames to L1 as batcher transactions addressed to the batcher inbox address, which is a designated EOA rather than a contract. Each batcher transaction must be signed by the batcher's signing key, and the recovered sender address must match the `batcherAddress` recorded in the L2 system configuration at the time of the L1 transaction's inclusion. The derivation pipeline authenticates batcher transactions by this address; transactions from any other sender are ignored regardless of their content. As of the Cancun L1 upgrade, the primary data availability mechanism is EIP-4844 blob transactions. Each blob carries one frame of compressed channel data. The maximum usable payload per blob is 130,044 bytes, which defines the effective `max_frame_size`. The batcher must not produce frames whose compressed payload exceeds this limit. All frames for a given channel must land on L1 within `channel_timeout` L1 blocks of the block in which the channel's first frame was included. If the channel is not completed within this window, the derivation pipeline discards all buffered frames for that channel, and the affected L2 blocks must be resubmitted in a new channel. The batcher must size channels and manage submission throughput to ensure frames are posted within this deadline. ### Block Continuity The batcher encodes L2 blocks in strictly increasing order by block number. Each block added to the open channel must be the direct child of the previously encoded block: its parent hash must equal the hash of the most recently encoded block. 
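The parent-hash rule just stated can be sketched as follows. `Block` and `ChannelBuilder` are hypothetical minimal stand-ins; a real batcher tracks far more state:

```python
from dataclasses import dataclass

@dataclass
class Block:
    number: int
    hash: bytes
    parent_hash: bytes

class ChannelBuilder:
    """Sketch of the continuity check: each block appended to the open
    channel must be the direct child of the previously encoded block."""
    def __init__(self) -> None:
        self.blocks: list[Block] = []

    def add_block(self, block: Block) -> None:
        if self.blocks and block.parent_hash != self.blocks[-1].hash:
            # A parent-hash mismatch signals an L2 reorg: the batcher
            # must discard all pending encoding state and restart from
            # the new canonical tip.
            raise RuntimeError("L2 reorg detected; reset batcher state")
        self.blocks.append(block)
```

In a real implementation, the exception path would trigger the full reset described below (open channel, queued channels, and in-flight submissions) rather than merely rejecting one block.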
This invariant ensures the channel represents a contiguous, unambiguous segment of the canonical L2 chain. If the L2 chain reorganizes — manifesting as a block whose parent hash does not match the previously seen tip, or as an explicit reorg signal from the block source — the batcher must discard all pending encoding state. This includes the currently open channel, any channels queued for submission but not yet fully confirmed, and all in-flight submission tracking. After a reorg, the batcher restarts from the new canonical chain tip. L1 transactions already in flight at the time of the reorg are abandoned; if they are eventually included on L1, the derivation pipeline ignores them as they are incoherent with the new chain. Each channel covers a contiguous, non-overlapping range of L2 block numbers. The block range of a subsequent channel must begin exactly where the block range of the preceding channel ends. No L2 block may appear in more than one channel, and no blocks may be skipped between consecutive channels. ### Sequencer Drift and Throttling The derivation spec constrains how far the L2 timestamp may advance ahead of the L1 timestamp of its origin block. An L2 block's timestamp must not exceed the L1 origin timestamp plus `max_sequencer_drift`. Prior to the Fjord hardfork, `max_sequencer_drift` is a per-chain configuration parameter. From Fjord onward it is fixed at 1800 seconds. When this limit is exceeded, the derivation pipeline will only accept a batch if its transaction list is empty (a deposit-only block). The batcher must therefore not include user transactions in blocks whose timestamp would exceed the drift limit, and must coordinate with the sequencer accordingly. To prevent the sequencer from outpacing the batcher's L1 submission capacity, the batcher measures its data availability backlog — the total encoded size of L2 blocks that have been sequenced but whose data has not yet been confirmed on L1. 
When the backlog exceeds a configured threshold, the batcher signals the sequencer to reduce its block production rate. The throttle can be graduated: a modest backlog may request a modest slowdown, while a large backlog may pause block production entirely until the batcher catches up. This feedback mechanism is transparent to the derivation pipeline and is not reflected in any on-chain data. ### Compression Channel data is compressed before being partitioned into frames. Prior to the Fjord hardfork, channels use zlib compression (RFC 1950, no dictionary) and carry no version prefix; the zlib magic bytes in the stream allow the decompressor to identify the format. From Fjord onward, channels use Brotli compression (RFC 7932), and the first frame of each channel carries a version byte of `0x01` immediately before the compressed payload to identify the codec. The lower nibble of the version byte must not be `0x08` or `0x0f`, as those values would collide with zlib magic header bytes and confuse earlier decompressors. Because compression ratios vary with input content, the batcher must estimate the compressed output size prospectively as it encodes batches into a channel. The channel must be closed before the compressed output would exceed `max_frame_size`, rather than after. A common approach is to maintain a shadow compressor in parallel with the real compressor and treat the shadow's output size as an upper bound; the channel is closed when the shadow output reaches the limit. This ensures the batcher never produces a frame too large to fit within a blob. The maximum uncompressed RLP size per channel, `max_rlp_bytes_per_channel`, is enforced separately from the compressed size limit. This limit protects verifiers from decompression amplification: a small compressed payload that expands to an unboundedly large uncompressed stream could exhaust memory. 
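The prospective size-estimation idea above can be sketched with the pre-Fjord zlib codec. This is deliberately simplified: a real shadow compressor maintains a streaming compressor and flushes it after each batch, rather than recompressing from scratch, and the class name here is invented:

```python
import zlib

MAX_FRAME_SIZE = 130_044  # usable bytes per blob, per the section above

class ShadowCompressor:
    """Sketch of the shadow-compressor pattern: estimate the compressed
    size that adding a batch would produce, and close the channel before
    the limit is exceeded. Uses zlib (the pre-Fjord codec); Fjord
    channels use Brotli instead."""
    def __init__(self, limit: int = MAX_FRAME_SIZE) -> None:
        self.limit = limit
        self.pending = b""

    def would_overflow(self, batch_rlp: bytes) -> bool:
        # One-shot recompression of everything seen so far plus the
        # candidate batch: slow, but a valid size estimate for a sketch.
        estimate = len(zlib.compress(self.pending + batch_rlp))
        return estimate > self.limit

    def add(self, batch_rlp: bytes) -> None:
        self.pending += batch_rlp
```

The batcher would call `would_overflow` before each `add`; a `True` result closes the channel and carries the candidate batch over to the next one.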
A verifier decoding a channel stops processing once the uncompressed output reaches this limit; any remaining batches are discarded. The batcher must ensure the uncompressed size of its batches does not exceed this bound, both to guarantee all batches are seen by verifiers and to stay within the protocol's defined limits. ### Confirmation and Block Pruning The batcher tracks each submitted frame until it is included in an L1 block. A frame is confirmed when the batcher observes an L1 block containing the L1 transaction that carries the frame. A channel is fully confirmed when every one of its frames has been confirmed on L1. L2 blocks must not be discarded from the batcher's pending set until the channel containing them is fully confirmed. Until confirmation, those blocks must be retained so that any lost frames — for example due to an L1 reorg removing the transaction's inclusion — can be reconstructed and resubmitted. Only after a channel is fully confirmed may the batcher release the L2 blocks it covers. If a submitted frame's L1 transaction fails to be included, the batcher must resubmit that frame and all subsequent frames in the same channel. Resubmitted frames must be byte-identical to the originals: the derivation pipeline identifies frames by their channel ID and frame number, and a resubmitted frame with different content would be treated as corrupted data rather than as a retry. ### Hardfork Rules The Fjord hardfork changes the channel encoding format. Channels opened after Fjord activation must use Brotli compression and prefix the first frame's payload with version byte `0x01`. The protocol limit `max_rlp_bytes_per_channel` increases substantially at Fjord activation, relaxing the channel size constraint. Channels opened before Fjord activation must use the pre-Fjord format for all their frames, regardless of when those frames are posted. The Holocene hardfork imposes strict ordering requirements at both the frame and batch layers. 
At the frame layer, frames for a given channel must be delivered to the derivation pipeline contiguously and in order; a non-first frame that is not the immediate successor of the previously seen frame for that channel is dropped immediately, and an incomplete channel is dropped if a new first frame for it arrives before its final frame has been seen. At the batch layer, batches within a channel must be strictly ordered by L2 timestamp with no repeated timestamps; any batch with a timestamp not strictly greater than the previous batch in the same channel causes the channel to be invalidated and all remaining batches in it to be dropped. These rules impose no new on-chain obligations, but they mean the batcher has zero tolerance for frame delivery gaps or reordering after Holocene activation. ## Overview Base is a rollup built on Ethereum. L2 transaction data is posted to Ethereum for data availability, and proofs allow anyone to challenge invalid state transitions. This page gives a high-level tour of the protocol components and the core user flows. ### Network Participants There are three primary actors that interact with Base: users, sequencers, and validators. 
```mermaid graph TD EthereumL1(Ethereum L1) subgraph "L2 Participants" Users(Users) Sequencers(Sequencers) Validators(Validators) end Validators -.->|fetch transaction batches| EthereumL1 Validators -.->|fetch deposit data| EthereumL1 Validators -->|submit/validate/challenge output proposals| EthereumL1 Validators -.->|fetch realtime P2P updates| Sequencers Users -->|submit deposits/withdrawals| EthereumL1 Users -->|submit transactions| Sequencers Users -->|query data| Validators Sequencers -->|submit transaction batches| EthereumL1 Sequencers -.->|fetch deposit data| EthereumL1 classDef l1Contracts stroke:#bbf,stroke-width:2px; classDef l2Components stroke:#333,stroke-width:2px; classDef systemUser stroke:#f9a,stroke-width:2px; class EthereumL1 l1Contracts; class Users,Sequencers,Validators l2Components; ``` #### Users Users are the general class of network participants who: * Submit transactions through the sequencer or by interacting with contracts on Ethereum. * Query transaction data from interfaces operated by validators. #### Sequencers The sequencer fills the role of block producer on Base. Base currently operates with a single active sequencer. The Sequencer: * Accepts transactions directly from Users. * Observes "deposit" transactions generated on Ethereum. * Consolidates both transaction streams into ordered L2 blocks. * Submits information to L1 that is sufficient to fully reproduce those L2 blocks. * Provides real-time access to pending L2 blocks that have not yet been confirmed on L1. * Produces Flashblocks every 200ms, committing to the ordering of transactions within the block as it is being built. The Sequencer serves an important role for the operation of an L2 chain but is not a trusted actor. The Sequencer is generally responsible for improving the user experience by ordering transactions much more quickly and cheaply than would currently be possible if users were to submit all transactions directly to L1. 
#### Validators Validators execute the L2 state transition function independently of the Sequencer. Validators help to maintain the integrity of the network and serve blockchain data to Users. Validators generally: * Sync rollup data from L1 and the Sequencer. * Use rollup data to execute the L2 state transition function. * Serve rollup data and computed L2 state information to Users. Validators can also act as Proposers and/or Challengers who: * Submit assertions about the state of the L2 to a smart contract on L1. * Validate assertions made by other participants. * Dispute invalid assertions made by other participants. ### High-Level System Diagram The following diagram shows how the major protocol components interact across L1 and L2. ```mermaid graph LR subgraph "Ethereum L1" OptimismPortal(OptimismPortal) BatchInbox(Batch Inbox Address) DisputeGameFactory(DisputeGameFactory) end subgraph "L2 Node" RollupNode(Consensus) ExecutionEngine(Execution Engine) end Batcher(Batcher) Proposers(Proposers) Challengers(Challengers) Users(Users) Users -->|deposits / withdrawals| OptimismPortal Users -->|transactions| ExecutionEngine Batcher -->|post transaction batches| BatchInbox Batcher -.->|fetch batch data| RollupNode RollupNode -.->|fetch batches| BatchInbox RollupNode -.->|fetch deposit events| OptimismPortal RollupNode -->|Engine API| ExecutionEngine Proposers -->|submit output proposals| DisputeGameFactory Proposers -.->|fetch outputs| RollupNode Challengers -->|verify / challenge games| DisputeGameFactory OptimismPortal -.->|query state proposals| DisputeGameFactory classDef l1Contracts stroke:#bbf,stroke-width:2px; classDef l2Components stroke:#333,stroke-width:2px; classDef systemUser stroke:#f9a,stroke-width:2px; class OptimismPortal,BatchInbox,DisputeGameFactory l1Contracts; class RollupNode,ExecutionEngine l2Components; class Batcher,Proposers,Challengers,Users systemUser; ``` ### Protocol Components #### Consensus Consensus is responsible for deriving the 
canonical L2 chain from L1 data. It reads transaction batches from the Batch Inbox and deposit events from OptimismPortal, constructs payload attributes, and drives the execution engine via the Engine API. Unsafe (unconfirmed) blocks are gossiped to other nodes over a dedicated P2P network to give validators low-latency access before batches land on L1. [Consensus →](./consensus/) ```mermaid graph LR L1(Ethereum L1) subgraph "Rollup Node" BatchDecoding(Batch Decoding) Derivation(Derivation Pipeline) end EngineAPI(Engine API) EE(Execution Engine) L2(L2 Blocks) L1 -->|batches + deposit events| BatchDecoding BatchDecoding --> Derivation Derivation -->|payload attributes| EngineAPI EngineAPI --> EE EE --> L2 classDef l1 stroke:#bbf,stroke-width:2px; classDef l2 stroke:#333,stroke-width:2px; class L1 l1; class EE,L2 l2; ``` #### Execution The execution engine is a Reth-based runtime. It exposes the standard Ethereum JSON-RPC API and processes blocks produced by consensus. Predeploys (system contracts at fixed L2 addresses), precompiles, and preinstalls extend the EVM for rollup-specific functionality such as fee distribution, L1 block attribute injection, and cross-domain messaging. [Execution →](./execution/) #### Bridging Deposits flow from the `OptimismPortal` contract on L1 into L2 as special deposit transactions included at the start of each L2 block. Withdrawals flow in the opposite direction: a withdrawal transaction is initiated on L2, a proposer submits an output root to `DisputeGameFactory`, and after the challenge period the user proves and finalizes the withdrawal on L1 via `OptimismPortal`. 
[Bridging →](./bridging/deposits) ```mermaid graph LR subgraph "Deposit Path" User1(User) OP1(OptimismPortal) DepTx(Deposit Transaction on L2) end subgraph "Withdrawal Path" User2(User) WdTx(Withdrawal Tx on L2) DGF(DisputeGameFactory) OP2(OptimismPortal) end User1 -->|depositTransaction| OP1 OP1 -->|TransactionDeposited event| DepTx User2 -->|initiates withdrawal| WdTx WdTx -->|output root proposed| DGF User2 -->|prove + finalize| OP2 OP2 -.->|verify game| DGF classDef l1 stroke:#bbf,stroke-width:2px; classDef systemUser stroke:#f9a,stroke-width:2px; class OP1,OP2,DGF l1; class User1,User2 systemUser; ``` #### Batcher The batcher is a service run by the sequencer that compresses L2 transaction data into channel frames and posts them as calldata (or blobs) to the Batch Inbox Address on L1. This is the data availability layer that allows any validator to independently reconstruct the L2 chain from L1. [Batcher →](./batcher) ```mermaid graph LR Sequencer(Sequencer) Batcher(Batcher) BatchInbox(Batch Inbox Address) RollupNode(Rollup Node) Sequencer -->|L2 blocks| Batcher Batcher -->|compressed channel frames| BatchInbox BatchInbox -.->|fetch batches| RollupNode classDef l1 stroke:#bbf,stroke-width:2px; classDef l2 stroke:#333,stroke-width:2px; classDef systemUser stroke:#f9a,stroke-width:2px; class BatchInbox l1; class RollupNode l2; class Batcher,Sequencer systemUser; ``` #### Proofs Output proposals and proofs allow permissionless verification of the L2 state. Anyone can propose an output root to the `DisputeGameFactory`, and anyone can challenge it. Disputes are resolved by the `FaultDisputeGame` contract using the Cannon VM for on-chain execution tracing of disputed state transitions. Valid withdrawals can only be finalized through `OptimismPortal` once the associated dispute game resolves in favor of the proposer. 
[Proofs →](./fault-proof/) ```mermaid graph LR Proposer(Proposer) DGF(DisputeGameFactory) FDG(FaultDisputeGame) Challengers(Challengers) OP(OptimismPortal) Proposer -->|submit output root| DGF DGF -->|create game| FDG Challengers -->|challenge / defend| FDG FDG -->|resolved result| OP classDef l1 stroke:#bbf,stroke-width:2px; classDef systemUser stroke:#f9a,stroke-width:2px; class DGF,FDG,OP l1; class Proposer,Challengers systemUser; ``` ### Core User Flows #### Depositing ETH to Base Users will often begin their L2 journey by depositing ETH from L1. Once they have ETH to pay fees, they'll start sending transactions on L2. The following diagram demonstrates this interaction and key Base protocol components. ```mermaid graph TD subgraph "Ethereum L1" OptimismPortal(OptimismPortal) BatchInbox(Batch Inbox Address) end Sequencer(Sequencer) Users(Users) %% Interactions Users -->|1. submit deposit| OptimismPortal Sequencer -.->|2. fetch deposit events| OptimismPortal Sequencer -->|3. generate deposit block| Sequencer Users -->|4. send transactions| Sequencer Sequencer -->|5. submit transaction batches| BatchInbox classDef l1Contracts stroke:#bbf,stroke-width:2px; classDef l2Components stroke:#333,stroke-width:2px; classDef systemUser stroke:#f9a,stroke-width:2px; class OptimismPortal,BatchInbox l1Contracts; class Sequencer l2Components; class Users systemUser; ``` #### Sending Transactions on Base Sending transactions on Base works the same as on Ethereum. Users sign transactions and submit them via `eth_sendRawTransaction` to any node's JSON-RPC endpoint. The sequencer picks them up from its mempool, orders them into L2 blocks, and eventually posts the batch to L1. #### Withdrawing from Base Users may also want to withdraw ETH or ERC20 tokens from Base back to Ethereum. Withdrawals are initiated as standard transactions on L2 but are then completed using transactions on L1. 
Withdrawals must reference a valid `FaultDisputeGame` contract that proposes the state of the L2 at a given point in time. ```mermaid graph LR subgraph "Ethereum L1" BatchInbox(Batch Inbox Address) DisputeGameFactory(DisputeGameFactory) FaultDisputeGame(FaultDisputeGame) OptimismPortal(OptimismPortal) ExternalContracts(External Contracts) end Sequencer(Sequencer) Proposers(Proposers) Users(Users) %% Interactions Users -->|1. send withdrawal initialization txn| Sequencer Sequencer -->|2. submit transaction batch| BatchInbox Proposers -->|3. submit output proposal| DisputeGameFactory DisputeGameFactory -->|4. generate game| FaultDisputeGame Users -->|5. submit withdrawal proof| OptimismPortal Users -->|6. wait for finalization| FaultDisputeGame Users -->|7. submit withdrawal finalization| OptimismPortal OptimismPortal -->|8. check game validity| FaultDisputeGame OptimismPortal -->|9. execute withdrawal transaction| ExternalContracts %% Styling classDef l1Contracts stroke:#bbf,stroke-width:2px; classDef l2Components stroke:#333,stroke-width:2px; classDef systemUser stroke:#f9a,stroke-width:2px; class BatchInbox,DisputeGameFactory,FaultDisputeGame,OptimismPortal l1Contracts; class Sequencer l2Components; class Users,Proposers systemUser; ``` ## Multithreaded Cannon Fault Proof Virtual Machine ### Overview This is a description of the second iteration of the Cannon Fault Proof Virtual Machine (FPVM). When necessary to distinguish this version from the initial implementation, it can be referred to as Multithreaded Cannon (MTCannon). Similarly, the original Cannon implementation can be referred to as Singlethreaded Cannon (STCannon) where necessary for clarity. The MTCannon FPVM emulates a minimal uniprocessor Linux-based system running on big-endian 64-bit MIPS64 architecture. A lot of its behaviors are copied from Linux/MIPS with a few tweaks made for fault proofs. For the rest of this doc, we refer to the MTCannon FPVM as simply the FPVM. 
Operationally, the FPVM is a state transition function. This state transition is referred to as a *Step*; each Step executes a single instruction. We say the VM is a function $f$ that, given an input state $S\_{pre}$, steps on a single instruction encoded in the state to produce a new state $S\_{post}$. $$f(S\_{pre}) \rightarrow S\_{post}$$ Thus, the trace of a program executed by the FPVM is an ordered set of VM states. #### Definitions ##### Concepts ##### Natural Alignment A memory address is said to be "naturally aligned" in the context of some data type if it is a multiple of that data type's byte size. For example, the address of a 32-bit (4-byte) value is naturally aligned if it is a multiple of 4 (e.g. `0x1000`, `0x1004`). Similarly, the address of a 64-bit (8-byte) value is naturally aligned if it is a multiple of 8 (e.g. `0x1000`, `0x1008`). A non-aligned address can be naturally aligned by dropping the least significant bits of the address: `aligned = unaligned & ^(byteSize - 1)`. For example, to align the address `0x1002` targeting a 32-bit value: `aligned = 0x1002 & ^(0x3) = 0x1000`. ##### Data types * `Boolean` - An 8-bit boolean value equal to 0 (false) or 1 (true). * `Hash` - A 256-bit fixed-size value produced by the Keccak-256 cryptographic hash function. * `UInt8` - An 8-bit unsigned integer value. * `UInt64` - A 64-bit unsigned integer value. * `Word` - A 64-bit value. ##### Constants * `EBADF` - A Linux error number indicating a bad file descriptor: `0x9`. * `MaxWord` - A `Word` with all bits set to 1: `0xFFFFFFFFFFFFFFFF`. When interpreted as a signed value, this is equivalent to -1. * `ProgramBreakAddress` - The fixed memory address for the program break: `Word(0x0000_4000_0000_0000)`. * `WordSize` - The number of bytes in a `Word` (8). #### New Features ##### Multithreading MTCannon adds support for [multithreading](https://en.wikipedia.org/wiki/Thread_\(computing\)).
Thread management and scheduling are typically handled by the [operating system (OS) kernel](https://en.wikipedia.org/wiki/Kernel_%28operating_system%29): programs make thread-related requests to the OS kernel via [syscalls](https://en.wikipedia.org/wiki/System_call). As such, this implementation includes a few new Linux-specific thread-related [syscalls](#syscalls). Additionally, the [FPVM state](#fpvm-state) has been modified in order to track the set of active threads and thread-related global state. ##### 64-bit Architecture MTCannon emulates a MIPS64 machine whereas STCannon emulates a MIPS32 machine. The transition from MIPS32 to MIPS64 means the address space goes from 32-bit to 64-bit, greatly expanding addressable memory. ##### Robustness In the initial implementation of Cannon, unrecognized syscalls were treated as noops (see ["Noop Syscalls"](#noop-syscalls)). To ensure no unexpected behaviors are triggered, MTCannon will now raise an exception if unrecognized syscalls are encountered during program execution. ### Multithreading The MTCannon FPVM rotates between threads to provide [multitasking](https://en.wikipedia.org/wiki/Computer_multitasking) rather than true [parallel processing](https://en.wikipedia.org/wiki/Parallel_computing). The VM state holds an ordered set of thread state objects representing all executing threads. On any given step, there is one active thread that will be processed. #### Thread Management The FPVM state contains two thread stacks that are used to represent the set of all threads: `leftThreadStack` and `rightThreadStack`. An additional boolean value (`traverseRight`) determines which stack contains the currently active thread and how threads are rearranged when the active thread is preempted (see ["Thread Preemption"](#thread-preemption) for details). When traversing right, the thread on the top of the right stack is the active thread, the right stack is referred to as the "active" stack, and the left the "inactive" stack. 
Conversely, when traversing left, the active thread is on top of the left stack, the left stack is "active", and the right is "inactive". Representing the set of threads as two stacks allows for a succinct commitment to the contents of all threads. For details, see [“Thread Stack Hashing”](#thread-stack-hashing). #### Thread Traversal Mechanics Threads are traversed deterministically by moving from the first thread to the last thread, then from the last thread to the first thread repeatedly. For example, given the set of threads: {0,1,2,3}, the FPVM would traverse to each as follows: 0, 1, 2, 3, 3, 2, 1, 0, 0, 1, 2, 3, 3, 2, …. ##### Thread Preemption Threads are traversed via "preemption": the currently active thread is popped from the active stack and pushed to the inactive stack. If the active stack is empty, the FPVM state's `traverseRight` field is flipped ensuring that there is always an active thread. #### Exited Threads When the VM encounters an active thread that has exited, it is popped from the active thread stack, removing it from the VM state. #### Futex Operations The VM supports [futex syscall](https://www.man7.org/linux/man-pages/man2/futex.2.html) operations `FUTEX_WAIT_PRIVATE` and `FUTEX_WAKE_PRIVATE`. Futexes are commonly used to implement locks in user space. In this scenario, a shared 32-bit value (the "futex value") represents the state of a lock. If a thread cannot acquire the lock, it calls a futex wait, which puts the thread to sleep. To release the lock, the owning thread updates the futex value and then calls a futex wake to notify any other waiting threads. Because wake-ups may be spurious or could be triggered by unrelated operations on the same memory, waiting threads must always re-check the futex value after waking up to decide if they can proceed. ##### Wait When a futex wait is successfully executed, the current thread is simply [preempted](#thread-preemption). 
This gives other threads a chance to run and potentially change the shared futex value (for example, by releasing a lock). When the thread is eventually scheduled again, if the futex value has not changed, the wakeup will be considered spurious and the thread will simply call futex wait again.

##### Wake

When a futex wake is executed, the current thread is [preempted](#thread-preemption). This allows the scheduler to move on to other threads, which may potentially be ready to run (for example, because a shared lock was released).

#### Voluntary Preemption

In addition to the [futex syscall](#futex-operations), there are a few other syscalls that will cause a thread to be "voluntarily" preempted: `sched_yield` and `nanosleep`.

#### Forced Preemption

To avoid thread starvation (for example, where a thread hogs resources by never executing a sleep, yield, or wait), the FPVM will force a context switch if the active thread has been executing for too long. For each step executed on a particular thread, the state field `stepsSinceLastContextSwitch` is incremented. When a thread is preempted, `stepsSinceLastContextSwitch` is reset to 0. If `stepsSinceLastContextSwitch` reaches a maximum value (`SchedQuantum` = 100\_000), the FPVM preempts the active thread.

### Stateful Instructions

#### Load Linked / Store Conditional Word

The Load Linked Word (`ll`) and Store Conditional Word (`sc`) instructions provide the low-level primitives used to implement atomic read-modify-write (RMW) operations. A typical RMW sequence might play out as follows:

* `ll` places a "reservation" targeting a 32-bit value in memory and returns the current value at this location.
* Subsequent instructions take this value and perform some operation on it:
  * For example, maybe a counter variable is loaded and then incremented.
* `sc` is called, and the modified value overwrites the original value in memory only if the memory reservation is still intact.
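The reservation bookkeeping behind this sequence can be sketched as follows. This is a minimal illustrative model, not the reference implementation: the field names mirror the `llReservationStatus`/`llAddress`/`llOwnerThread` state fields described below, but the struct and method names are invented here, and `Word` is assumed to be 8 bytes:

```rust
/// Illustrative model of the FPVM's ll/sc reservation bookkeeping.
/// Field names mirror the spec; the struct itself is not normative.
#[derive(Default)]
struct ReservationState {
    ll_reservation_status: u8, // 0 = none, 1 = ll/sc, 2 = lld/scd
    ll_address: u64,
    ll_owner_thread: u64,
}

impl ReservationState {
    /// `ll`: place a 32-bit reservation for the active thread.
    /// A new reservation replaces any previous one.
    fn load_linked(&mut self, addr: u64, thread_id: u64) {
        self.ll_reservation_status = 1;
        self.ll_address = addr;
        self.ll_owner_thread = thread_id;
    }

    /// Any memory write clears a reservation if it touches the
    /// naturally-aligned Word (assumed 8 bytes) containing llAddress.
    fn on_memory_write(&mut self, addr: u64) {
        let word_mask = !7u64;
        if self.ll_reservation_status != 0 && (addr & word_mask) == (self.ll_address & word_mask) {
            *self = ReservationState::default();
        }
    }

    /// `sc`: returns 1 and clears the reservation only if the reservation
    /// is intact for this thread and address; otherwise returns 0.
    fn store_conditional(&mut self, addr: u64, thread_id: u64) -> u64 {
        if self.ll_reservation_status == 1
            && self.ll_owner_thread == thread_id
            && self.ll_address == addr
        {
            *self = ReservationState::default();
            1
        } else {
            0
        }
    }
}
```

A second `sc` without a fresh `ll`, or any intervening write to the reserved word, causes the store to fail, which is exactly the property the RMW sequence relies on.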
This RMW sequence ensures that if another thread or process modifies a reserved value while an atomic update is being performed, the reservation will be invalidated and the atomic update will fail. Prior to MTCannon, we could be assured that no intervening process would modify such a reserved value because STCannon is single-threaded. With the introduction of multithreading, additional fields need to be stored in the FPVM state to track memory reservations initiated by `ll` operations.

When an `ll` instruction is executed:

* `llReservationStatus` is set to `1`.
* `llAddress` is set to the virtual memory address specified by `ll`.
* `llOwnerThread` is set to the `threadID` of the active thread.

Only a single memory reservation can be active at a given time: a new reservation will clear any previous reservation. When the VM writes any data to memory, these `ll`-related fields are checked, and any existing memory reservation is cleared if the write touches the naturally-aligned `Word` that contains `llAddress`.

When an `sc` instruction is executed, the operation will only succeed if:

* The `llReservationStatus` field is equal to `1`.
* The active thread's `threadID` matches `llOwnerThread`.
* The virtual address specified by `sc` matches `llAddress`.

On success, `sc` stores a value to the specified address after it is naturally aligned, clears the memory reservation by zeroing out `llReservationStatus`, `llOwnerThread`, and `llAddress`, and returns `1`. On failure, `sc` returns `0`.

#### Load Linked / Store Conditional Doubleword

With the transition to MIPS64, Load Linked Doubleword (`lld`) and Store Conditional Doubleword (`scd`) instructions are also supported. These instructions are similar to `ll` and `sc`, but they operate on 64-bit rather than 32-bit values. The `lld` instruction functions similarly to `ll`, but the `llReservationStatus` is set to `2`.
The `scd` instruction functions similarly to `sc`, but the `llReservationStatus` must be equal to `2` for the operation to succeed. In other words, an `scd` instruction must be preceded by a matching `lld` instruction, just as an `sc` instruction must be preceded by a matching `ll` instruction, if the store operation is to succeed.

### FPVM State

#### State

The FPVM is a state transition function that operates on a state object consisting of the following fields:

1. `memRoot` - \[`Hash`] A value representing the merkle root of VM memory.
2. `preimageKey` - \[`Hash`] The value of the last requested pre-image key.
3. `preimageOffset` - \[`Word`] The value of the last requested pre-image offset.
4. `heap` - \[`Word`] The base address of the most recent memory allocation via mmap.
5. `llReservationStatus` - \[`UInt8`] The current memory reservation status, where `0` means there is no reservation, `1` means an `ll`/`sc`-compatible reservation is active, and `2` means an `lld`/`scd`-compatible reservation is active. Memory is reserved via Load Linked Word (`ll`) and Load Linked Doubleword (`lld`) instructions.
6. `llAddress` - \[`Word`] If a memory reservation is active, the value of the address specified by the last `ll` or `lld` instruction. Otherwise, set to `0`.
7. `llOwnerThread` - \[`Word`] The id of the thread that initiated the current memory reservation, or `0` if there is no active reservation.
8. `exitCode` - \[`UInt8`] The exit code value.
9. `exited` - \[`Boolean`] Indicates whether the VM has exited.
10. `step` - \[`UInt64`] A step counter.
11. `stepsSinceLastContextSwitch` - \[`UInt64`] A step counter that tracks the number of steps executed on the current thread since the last [preemption](#thread-preemption).
12. `traverseRight` - \[`Boolean`] Indicates whether the currently active thread is on the left or right thread stack and determines the direction of thread traversal. See ["Thread Traversal Mechanics"](#thread-traversal-mechanics) for details.
13. `leftThreadStack` - \[`Hash`] A hash of the contents of the left thread stack. For details, see the [“Thread Stack Hashing” section.](#thread-stack-hashing)
14. `rightThreadStack` - \[`Hash`] A hash of the contents of the right thread stack. For details, see the [“Thread Stack Hashing” section.](#thread-stack-hashing)
15. `nextThreadID` - \[`Word`] The value defining the id to assign to the next thread that is created.

The state is represented by packing the above fields, in order, into a 188-byte buffer.

#### State Hash

The state hash is computed by hashing the 188-byte state buffer with the Keccak256 hash function and then setting the high-order byte to the respective VM status. The VM status can be derived from the state's `exited` and `exitCode` fields.

```rs
#[repr(u8)]
enum VmStatus {
    Valid = 0,
    Invalid = 1,
    Panic = 2,
    Unfinished = 3,
}

fn vm_status(exit_code: u8, exited: bool) -> VmStatus {
    if exited {
        match exit_code {
            0 => VmStatus::Valid,
            1 => VmStatus::Invalid,
            _ => VmStatus::Panic,
        }
    } else {
        VmStatus::Unfinished
    }
}
```

#### Thread State

The state of a single thread is tracked and represented by a thread state object consisting of the following fields:

1. `threadID` - \[`Word`] A unique thread identifier.
2. `exitCode` - \[`UInt8`] The exit code value.
3. `exited` - \[`Boolean`] Indicates whether the thread has exited.
4. `pc` - \[`Word`] The program counter.
5. `nextPC` - \[`Word`] The next program counter. Note that this value may not always be $pc+4$ when executing a branch/jump delay slot.
6. `lo` - \[`Word`] The MIPS LO special register.
7. `hi` - \[`Word`] The MIPS HI special register.
8. `registers` - 32 general-purpose MIPS registers numbered 0 - 31. Each register contains a `Word` value.

A thread is represented by packing the above fields, in order, into a 298-byte buffer.

#### Thread Hash

A thread hash is computed by hashing the 298-byte thread state buffer with the Keccak256 hash function.
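As a sanity check, the 188- and 298-byte figures follow directly from the field widths (`Hash` = 32 bytes, `Word` and `UInt64` = 8 bytes, `UInt8` and `Boolean` = 1 byte); the constants below merely re-derive them:

```rust
// Field widths per the spec: Hash = 32 bytes, Word/UInt64 = 8, UInt8/Boolean = 1.
const HASH: usize = 32;
const WORD: usize = 8;
const BYTE: usize = 1;

// memRoot + preimageKey + preimageOffset + heap + llReservationStatus
// + llAddress + llOwnerThread + exitCode + exited + step
// + stepsSinceLastContextSwitch + traverseRight + leftThreadStack
// + rightThreadStack + nextThreadID
const STATE_SIZE: usize = HASH + HASH + WORD + WORD + BYTE + WORD + WORD
    + BYTE + BYTE + WORD + WORD + BYTE + HASH + HASH + WORD; // = 188

// threadID + exitCode + exited + pc + nextPC + lo + hi + 32 registers
const THREAD_SIZE: usize = WORD + BYTE + BYTE + WORD + WORD + WORD + WORD + 32 * WORD; // = 298
```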
#### Thread Stack Hashing

> **Note:** The `++` operation represents concatenation of two byte-string arguments.

Each thread stack is represented in the FPVM state by a "hash onion" construction using the Keccak256 hash function. This construction provides a succinct commitment to the contents of a thread stack using a single `bytes32` value:

* An empty stack is represented by the value:
  * `c0 = hash(bytes32(0) ++ bytes32(0))`
* To push a thread to the stack, hash the concatenation of the current stack commitment with the thread hash:
  * `push(c0, el0) => c1 = hash(c0 ++ hash(el0))`
* To push another thread:
  * `push(c1, el1) => c2 = hash(c1 ++ hash(el1))`
* To pop an element from the stack, peel back the last hash (push) operation:
  * `pop(c2) => c3 = c1`
* To prove the top value `elTop` on the stack, given some commitment `c`, you just need to reveal the `bytes32` commitment `c'` for the stack without `elTop` and verify:
  * `c = hash(c' ++ hash(elTop))`

### Memory

Memory is represented as a binary merkle tree. The tree has a fixed depth of 59 levels, with leaf values of 32 bytes each. This spans the full 64-bit address space, where each leaf contains the memory at that part of the tree. The state `memRoot` represents the merkle root of the tree, reflecting the effects of memory writes. As a result of this memory representation, all memory operations are `WordSize`-byte aligned. Memory access doesn't require any privileges. An instruction step can access any memory location, as the entire address space is unprotected.

#### Heap

The FPVM state contains a `heap` field that tracks the base address of the most recent memory allocation. Heap pages are bump-allocated at the page boundary, per `mmap` syscall. mmap-ing is purely to satisfy program runtimes that need the memory-pointer result of the syscall to locate free memory. The page size is 4096. The FPVM has a fixed program break at `ProgramBreakAddress`.
However, the FPVM is permitted to extend the heap beyond this limit via mmap syscalls. For simplicity, there are no memory protections against "heap overruns" into other memory segments. Such VM steps are still considered valid state transitions. Specification of memory mappings is outside the scope of this document as it is irrelevant to the VM state. FPVM implementers may refer to the Linux/MIPS kernel for inspiration.

##### mmap hints

When a process issues an mmap(2) syscall with a non-NULL addr parameter, the FPVM honors this hint as a strict requirement rather than a suggestion. The VM unconditionally maps memory at exactly the requested address, creating the mapping without performing address validity checks. The VM does not validate whether the specified address range overlaps with existing mappings. As this is a single-process execution environment, collision detection is delegated to userspace: the calling process must track its own page mappings to avoid conflicts, as the usual kernel protections against overlapping mappings are not implemented.

### Delay Slots

The post-state of a step updates the `nextPC`, indicating the instruction following the `pc`. However, in the case where a branch instruction is being stepped, the `nextPC` post-state is set to the branch target, and the `pc` post-state is set to the branch delay slot as usual.

A VM state transition is invalid whenever the current instruction is a delay slot that is filled with a jump or branch type instruction. That is, where $nextPC \neq pc + 4$ while stepping on a jump/branch instruction. Otherwise, there would be two consecutive delay slots. While this is considered "undefined" behavior in typical MIPS implementations, the FPVM must raise an exception when stepping on such states.

### Syscalls

Syscalls work similarly to [Linux/MIPS](https://www.linux-mips.org/wiki/Syscall), including the syscall calling conventions and general syscall handling behavior.
However, the FPVM supports a subset of Linux/MIPS syscalls with slightly different behaviors. These syscalls have the same syscall numbers and ABIs as Linux/MIPS. For all of the following syscalls, an error is indicated by setting the return register (`$v0`) to `MaxWord` and setting `errno` (`$a3`) accordingly. The VM must not modify any register other than `$v0` and `$a3` during syscall handling. The following tables summarize supported syscalls and their behaviors. If an unsupported syscall is encountered, the VM will raise an exception.

#### Supported Syscalls

| $v0 | system call | $a0 | $a1 | $a2 | $a3 | Effect |
| ---- | -------------- | ---------------- | ----------------- | ------------ | ---------------- | ------ |
| 5009 | mmap | uint64 addr | uint64 len | 🚫 | 🚫 | Allocates a page from the heap. See [heap](#heap) for details. |
| 5012 | brk | 🚫 | 🚫 | 🚫 | 🚫 | Returns a fixed address for the program break at `ProgramBreakAddress`. |
| 5205 | exit\_group | uint8 exit\_code | 🚫 | 🚫 | 🚫 | Sets the exited and exitCode state fields to `true` and `$a0` respectively. |
| 5000 | read | uint64 fd | char \*buf | uint64 count | 🚫 | Similar behavior as Linux/MIPS with support for unaligned reads. See [I/O](#io) for more details. |
| 5001 | write | uint64 fd | char \*buf | uint64 count | 🚫 | Similar behavior as Linux/MIPS with support for unaligned writes. See [I/O](#io) for more details. |
| 5070 | fcntl | uint64 fd | int64 cmd | 🚫 | 🚫 | Similar behavior as Linux/MIPS. Only the `F_GETFD` (1) and `F_GETFL` (3) cmds are supported. Sets errno to `0x16` for all other commands. |
| 5055 | clone | uint64 flags | uint64 stack\_ptr | 🚫 | 🚫 | Creates a new thread based on the currently active thread's state. Supports a `flags` argument equal to `0x00050f00`; other values cause the VM to exit with exit\_code `VmStatus.PANIC`. |
| 5058 | exit | uint8 exit\_code | 🚫 | 🚫 | 🚫 | Sets the active thread's exited and exitCode state fields to `true` and `$a0` respectively. |
| 5023 | sched\_yield | 🚫 | 🚫 | 🚫 | 🚫 | Preempts the active thread and returns 0. |
| 5178 | gettid | 🚫 | 🚫 | 🚫 | 🚫 | Returns the active thread's threadID field. |
| 5194 | futex | uint64 addr | uint64 futex\_op | uint64 val | uint64 \*timeout | Supports `futex_op`'s `FUTEX_WAIT_PRIVATE` (128) and `FUTEX_WAKE_PRIVATE` (129). Other operations set errno to `0x16`. |
| 5002 | open | 🚫 | 🚫 | 🚫 | 🚫 | Sets errno to `EBADF`. |
| 5034 | nanosleep | 🚫 | 🚫 | 🚫 | 🚫 | Preempts the active thread and returns 0. |
| 5222 | clock\_gettime | uint64 clock\_id | uint64 addr | 🚫 | 🚫 | Supports `clock_id`'s `REALTIME` (0) and `MONOTONIC` (1). For other `clock_id`'s, sets errno to `0x16`. Calculates a deterministic time value based on the state's `step` field and a constant `HZ` (10,000,000), where `HZ` represents the approximate clock rate (steps / second) of the FPVM: `seconds = step/HZ`, `nsecs = (step % HZ) * 10^9/HZ`. Seconds are set at memory address `addr` and nsecs are set at `addr + WordSize`. |
| 5038 | getpid | 🚫 | 🚫 | 🚫 | 🚫 | Returns 0. |
| 5313 | getrandom | char \*buf | uint64 buflen | 🚫 | 🚫 | Generates pseudorandom bytes and writes them to the buffer at `buf`. Uses splitmix64 seeded with the current step count. Returns the number of bytes written, which is at most `buflen` and limited by alignment boundaries. |
| 5284 | eventfd2 | uint64 initval | int64 flags | 🚫 | 🚫 | Creates an eventfd file descriptor. Only non-blocking mode is supported: if `flags` does not include `EFD_NONBLOCK` (0x80), sets errno to `0x16`. On success, returns file descriptor 100. |

#### Noop Syscalls

For the following noop syscalls, the VM must do nothing except zero out the syscall return (`$v0`) and errno (`$a3`) registers.

| $v0 | system call |
| ---- | -------------------- |
| 5011 | munmap |
| 5010 | mprotect |
| 5196 | sched\_getaffinity |
| 5027 | madvise |
| 5014 | rt\_sigprocmask |
| 5129 | sigaltstack |
| 5013 | rt\_sigaction |
| 5297 | prlimit64 |
| 5003 | close |
| 5016 | pread64 |
| 5004 | stat |
| 5005 | fstat |
| 5247 | openat |
| 5087 | readlink |
| 5257 | readlinkat |
| 5015 | ioctl |
| 5285 | epoll\_create1 |
| 5287 | pipe2 |
| 5208 | epoll\_ctl |
| 5272 | epoll\_pwait |
| 5061 | uname |
| 5100 | getuid |
| 5102 | getgid |
| 5026 | mincore |
| 5225 | tgkill |
| 5095 | getrlimit |
| 5008 | lseek |
| 5036 | setitimer |
| 5216 | timer\_create |
| 5217 | timer\_settime |
| 5220 | timer\_delete |

### I/O

The VM does not support Linux open(2). However, the VM can read from and write to a predefined set of file descriptors.

| Name | File descriptor | Description |
| ------------------ | --------------- | ----------- |
| stdin | 0 | read-only standard input stream. |
| stdout | 1 | write-only standard output stream. |
| stderr | 2 | write-only standard error stream. |
| hint response | 3 | read-only. Used to read the status of [pre-image hinting](index.md#hinting). |
| hint request | 4 | write-only. Used to provide [pre-image hints](index.md#hinting). |
| pre-image response | 5 | read-only. Used to [read pre-images](index.md#pre-image-communication). |
| pre-image request | 6 | write-only. Used to [request pre-images](index.md#pre-image-communication). |
| eventfd | 100 | read-write. Created by the `eventfd2` syscall. Reads return `EAGAIN`. Writes return `EAGAIN`. |

Syscalls referencing unknown file descriptors fail with an `EBADF` errno, as done on Linux. Writing to and reading from the standard output, input, and error streams have no effect on the FPVM state. FPVM implementations may use them for debugging purposes as long as I/O is stateless.

All I/O operations are restricted to a maximum of `WordSize` bytes per operation. Any read or write syscall request exceeding this limit will be truncated to `WordSize` bytes. Consequently, the return value of read/write syscalls is at most `WordSize` bytes, indicating the actual number of bytes read/written.

#### Standard Streams

Writing to the stderr/stdout standard streams always succeeds, with the input write count returned, effectively continuing execution without performing any actual write. Reading from stdin has no effect other than to return zero with errno set to 0, signalling that there is no input.

#### Hint Communication

Hint requests and responses have no effect on the VM state other than setting the `$v0` return register to the requested read/write count. VM implementations may utilize hints to set up subsequent pre-image requests.

#### Pre-image Communication

The `preimageKey` and `preimageOffset` state are updated via read/write syscalls to the pre-image read and write file descriptors (see [I/O](#io)). The `preimageKey` buffers the stream of bytes written to the pre-image write fd. The `preimageKey` buffer is shifted to accommodate new bytes written to the end of it.
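This key-buffering behavior can be modeled as follows. This is an illustrative sketch rather than the reference implementation; the struct and method names are invented here, and it also models the `preimageOffset` reset that accompanies each write to the pre-image request fd:

```rust
/// Minimal model of pre-image key buffering on writes to the pre-image
/// request fd. Field names mirror the spec; the struct is illustrative.
struct PreimageState {
    preimage_key: [u8; 32],
    preimage_offset: u64,
}

impl PreimageState {
    /// Writing n bytes shifts the 32-byte key buffer left by n and
    /// appends the new bytes at the end; any write resets the offset.
    fn write_key_bytes(&mut self, bytes: &[u8]) {
        assert!(bytes.len() <= 32);
        // Shift existing contents left to make room at the end.
        self.preimage_key.rotate_left(bytes.len());
        let start = 32 - bytes.len();
        self.preimage_key[start..].copy_from_slice(bytes);
        // A write indicates the intent to read a new pre-image.
        self.preimage_offset = 0;
    }
}
```

Writing a full 32-byte key in one syscall thus replaces the buffer entirely, while smaller writes let a client assemble a key across multiple `WordSize`-limited writes.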
A write also resets the `preimageOffset` to 0, indicating the intent to read a new pre-image. When handling pre-image reads, the `preimageKey` is used to look up the pre-image data from an Oracle. A max `WordSize`-byte chunk of the pre-image at the `preimageOffset` is read to the specified address. Each read operation increases the `preimageOffset` by the number of bytes requested (truncated to `WordSize` bytes and subject to alignment constraints).

##### Pre-image I/O Alignment

As mentioned earlier in [memory](#memory), all memory operations are `WordSize`-byte aligned. Since pre-image I/O occurs on memory, all pre-image I/O operations must strictly adhere to alignment boundaries. This means the start and end of a read/write operation must fall within the same alignment boundary. If an operation were to violate this, the input `count` of the read/write syscall must be truncated such that the effective address of the last byte read/written matches the input effective address. The VM must read/write the maximum number of bytes possible without crossing the input address alignment boundary. For example, the effect of a write request for a 3-byte aligned buffer must be exactly 3 bytes. If the buffer is misaligned, the VM may write fewer than 3 bytes, depending on the size of the misalignment.

### Exceptions

The FPVM may raise an exception rather than output a post-state to signal an invalid state transition. Nominally, the FPVM must raise an exception in at least the following cases:

* Invalid instruction (either via an invalid opcode or an instruction referencing registers outside the general purpose registers).
* Unsupported syscall.
* Pre-image read at an offset larger than the size of the pre-image.
* Delay slot contains branch/jump instruction types.
* Invalid thread state: the active thread stack is empty.

VM implementations may raise an exception in other cases that are specific to the implementation.
For example, an on-chain FPVM that relies on pre-supplied merkle proofs for memory access may raise an exception if the supplied merkle proof does not match the pre-state `memRoot`. ### Security Model #### Compiler Correctness MTCannon is designed to prove the correctness of a particular state transition that emulates a MIPS64 machine. MTCannon does not guarantee that the MIPS64 instructions correctly implement the program that the user intends to prove. As a result, MTCannon's use as a Fault Proof system inherently depends to some extent on the correctness of the compiler used to generate the MIPS64 instructions over which MTCannon operates. To illustrate this concept, suppose that a user intends to prove simple program `input + 1 = output`. Suppose then that the user's compiler for this program contains a bug and errantly generates the MIPS instructions for a slightly different program `input + 2 = output`. Although MTCannon would correctly prove the operation of this compiled program, the result proven would differ from the user's intent. MTCannon proves the MIPS state transition but makes no assertion about the correctness of the translation between the user's high-level code and the resulting MIPS program. As a consequence of the above, it is the responsibility of a program developer to develop tests that demonstrate that MTCannon is capable of proving their intended program correctly over a large number of possible inputs. Such tests defend against bugs in the user's compiler as well as ways in which the compiler may inadvertently break one of MTCannon's [Compiler Assumptions](#compiler-assumptions). Users of Fault Proof systems are strongly encouraged to utilize multiple proof systems and/or compilers to mitigate the impact of errant behavior in any one toolchain. #### Compiler Assumptions MTCannon makes the simplifying assumption that users are utilizing compilers that do not rely on MIPS exception states for standard program behavior. 
In other words, MTCannon generally assumes that the user's compiler generates spec-compliant instructions that would not trigger an exception. Refer to [Exceptions](#exceptions) for a list of conditions that are explicitly handled. Certain cases that would typically be asserted by a strict implementation of the MIPS64 specification are not handled by MTCannon as follows: * `add`, `addi`, and `sub` do not trigger an exception on signed integer overflow. * Instruction encoding validation does not trigger an exception for fields that should be zero. * Memory instructions do not trigger an exception when addresses are not naturally aligned. Many compilers, including the Golang compiler, will not generate code that would trigger these conditions under bug-free operation. Given the inherent reliance on [Compiler Correctness](#compiler-correctness) in applications using MTCannon, the tests and defense mechanisms that must necessarily be employed by MTCannon users to protect their particular programs against compiler bugs should also suffice to surface bugs that would break these compiler assumptions. Stated simply, MTCannon can rely on specific compiler behaviors because users inherently must employ safety nets to guard against compiler bugs. ## Fault Proof ### Overview A fault proof, also known as fraud proof or interactive game, consists of 3 components: * [Program]: given a commitment to all rollup inputs (L1 data) and the dispute, verify the dispute statelessly. * [VM]: given a stateless program and its inputs, trace any instruction step, and prove it on L1. * [Interactive Dispute Game]: bisect a dispute down to a single instruction, and resolve the base-case using the VM. Each of these 3 components may have different implementations, which can be combined into different proof stacks, and contribute to proof diversity when resolving a dispute. 
"Stateless execution" of the program, and its individual instructions, refers to reproducing the exact same computation by authenticating the inputs with a [Pre-image Oracle][oracle]. ![Diagram of Program and VM architecture](/static/assets/fault-proof.svg) ### Pre-image Oracle [oracle]: #pre-image-oracle The pre-image oracle is the only form of communication between the [Program] (in the Client role) and the [VM] (in the Server role). The program uses the pre-image oracle to query any input data that is understood to be available to the user: * The initial inputs to bootstrap the program. See [Bootstrapping](#bootstrapping). * External data not already part of the program code. See [Pre-image hinting routes](#pre-image-hinting-routes). The communication happens over a simple request-response wire protocol, see [Pre-image communication](#pre-image-communication). #### Pre-image key types Pre-images are identified by a `bytes32` type-prefixed key: * The first byte identifies the type of request. * The remaining 31 bytes identify the pre-image key. ##### Type `0`: Zero key The zero prefix is illegal. This ensures all pre-image keys are non-zero, enabling storage lookup optimizations and avoiding easy mistakes with invalid zeroed keys in the EVM. ##### Type `1`: Local key Information specific to the dispute: the remainder of the key may be an index, a string, a hash, etc. Only the contract(s) managing this dispute instance may serve the value for this key: it is localized and context-dependent. This type of key is used for program bootstrapping, to identify the initial input arguments by index or name. ##### Type `2`: Global keccak256 key This type of key uses a global pre-image store contract, and is fully context-independent and permissionless. I.e. every key must have a single unique value, regardless of chain history or time. Using a global store reduces duplicate pre-image registration work, and avoids unnecessary contract redeployments per dispute. 
This global store contract should be non-upgradeable. Since `keccak256` is a safe 32-byte hash input, the first byte is overwritten with a `2` to derive the key, while keeping the rest of the key "readable" (matching the original hash). ##### Type `3`: Global generic key Reserved. This scheme allows for unlimited application-layer pre-image types without fault-proof VM redeployments. This is a generic version of a global key store: `key = 0x03 ++ keccak256(x, sender)[1:]`, where: * `x` is a `bytes32`, which can be a hash of an arbitrary-length type of cryptographically secure commitment. * `sender` is a `bytes32` identifying the pre-image inserter address (left-padded to 32 bytes) This global store contract should be non-upgradeable. The global contract is permissionless: users can standardize around external contracts that verify pre-images (i.e. a specific `sender` will always be trusted for a specific kind of pre-image). The external contract verifies the pre-image before inserting it into the global store for usage by all fault proof VMs without requiring the VM or global store contract to be changed. Users may standardize around upgradeable external pre-image contracts, in case the implementation of the verification of the pre-image is expected to change. The store update function is `update(x bytes32, offset uint64, span uint8, value bytes32)`: * `x` is the `bytes32` `x` that the pre-image `key` is computed with. * Only part of the pre-image, starting at `offset`, and up to (incl.) 32 bytes `span` can be inserted at a time. * Pre-images may have an undefined length (e.g. a stream), we only need to know how many bytes of `value` are usable. * The key and offset will be hashed together to uniquely store the `value` and `span`, for later pre-image serving. This enables fault proof programs to adopt any new pre-image schemes without VM update or contract redeployment. 
It is up to the user to index the special pre-image values by this key scheme, as there is no way to revert it to the original commitment without knowing said commitment or value. ##### Type `4`: Global SHA2-256 key A SHA-256 pre-image. Key: the SHA-256 hash, with the first byte overwritten with the type byte: `4 ++ sha256(data)[1:]`. ##### Type `5`: Global EIP-4844 Point-evaluation key An EIP-4844 point-evaluation. In an EIP-4844 blob, 4096 field elements represent the blob data. It verifies `p(z) = y` given `commitment` that corresponds to the polynomial `p(x)` and a KZG proof. The value `y` is the pre-image. The value `z` is part of the key; the index of the point within the blob. The `commitment` is part of the key. Each element is proven with a point-evaluation. Key: `5 ++ keccak256(commitment ++ z)[1:]`, where: * `5` is the type byte * `++` is concatenation * `commitment` is a bytes48, representing the KZG commitment. * `z` is a big-endian `uint256` ##### Type `6`: Global Precompile key A precompile result. It maps directly to precompiles on Ethereum. This preimage key can be used to avoid running expensive precompile operations in the program. Key: `6 ++ keccak256(precompile ++ input)[1:]`, where: * `6` is the type byte * `++` is concatenation * `precompile` is the 20-byte address of the precompile contract * `input` is the input to the precompile contract The result is identical to that of a call to the precompile contract, prefixed with a revert indicator: * `reverted ++ precompile_result`. `reverted` is a 1-byte indicator with a `0` value if the precompile reverts for the given input, otherwise it's `1`. ##### Type `7-128`: reserved range Range start and end both inclusive. This range of key types is reserved for future usage by the core protocol. E.g. version changes, contract migrations, chain-data, additional core features, etc. 
`128` specifically (`1000 0000` in binary) is reserved for key-type length-extension (reducing the content part to `30` or fewer key bytes), if the need arises.

##### Type `129-255`: application usage

This range of key types may be used by forks or customized versions of the fault proof protocol.

#### Bootstrapping

Initial inputs are deterministic, but not necessarily singular or global: there may be multiple different disputes at the same time, each with its own disputed claims and L1 context. To bootstrap, the program requests the initial inputs from the VM, using pre-image key type `1`. The VM is aware of the external context and maps requested pre-image keys based on their type, i.e. a local lookup for type `1` or a global one for type `2`, and optionally supports other key types.

#### Hinting

There is one more form of optional communication between client and server: pre-image hinting. Hinting is optional, and *is a no-op* in an L1 VM implementation. The hint itself comes at very low cost onchain: the hint can be a single `write` sys-call, which is instant, as the memory written as a hint does not actually need to be loaded as part of the onchain proof.

Hinting allows the program, when generating a proof offchain, to instruct the VM what data it is interested in. The VM can choose to execute the requested hint at any time: either locally (for standard requests), or in a modular form by redirecting the hint to tooling that may come with the VM program. Hints do not have to be executed directly: they may first just be logged to show the intents of the program, and the latest hint may be buffered for lazy execution, or dropped entirely when in read-only mode (like onchain).

When the pre-image oracle serves a request, and the request cannot be served from an existing collection of pre-images (e.g. a local pre-image cache), then the VM can execute the hint to retrieve the missing pre-image(s).
It is the responsibility of the program to provide sufficient hinting for every pre-image request. Some hints may have to be repeated: the VM only has to execute the last hint when handling a missing pre-image.

Note that hints may produce multiple pre-images: e.g. a hint for an ethereum block with transaction list may prepare pre-images for the header, each of the transactions, and the intermediate merkle-nodes that form the transactions-list Merkle Patricia Trie.

Hinting is implemented with a request-acknowledgement wire-protocol over a blocking two-way stream:

```text
<request> := <length prefix> <hint bytes>
<response> := <ack>
<length prefix> := big-endian uint32  # length of <hint bytes>
<hint bytes> := byte sequence
<ack> := 1-byte zero value
```

The ack informs the client that the hint has been processed. Servers may respond to hints and pre-image (see below) requests asynchronously, as they are on separate streams. To avoid requesting pre-images that are not yet fetched, clients should request a pre-image only after observing the acknowledgement of the corresponding hint.

#### Pre-image communication

Pre-images are communicated with a minimal wire-protocol over a blocking two-way stream. This protocol can be implemented with blocking read/write syscalls.

```text
<request> := <bytes32>  # the type-prefixed pre-image key
<response> := <length prefix> <pre-image bytes>
<length prefix> := big-endian uint64  # length of <pre-image bytes>, note: uint64
```

The `<length prefix>` here may be arbitrarily high: the client can stop reading at any time if the required part of the pre-image has been read.

After the client writes new `<request>` bytes, the server should be prepared to respond with the pre-image starting from `offset == 0` upon `read` calls.

The server may limit `read` results artificially to only a small amount of bytes at a time, even though the full pre-image is ready: this is expected regular IO protocol, and the client will just have to continue to read the small results at a time, until 0 bytes are read, indicating EOF. This enables the server to serve e.g.
at most 32 bytes at a time, or align reads with VM memory structure, to limit the amount of VM state that changes per syscall instruction, and thus keep the proof size per instruction bounded.

### Fault Proof Program

[Program]: #fault-proof-program

The Fault Proof Program defines the verification of claims of the state-transition outputs of the L2 rollup as a pure function of L1 data. The `op-program` is the reference implementation of the program, based on the `op-node` and `op-geth` implementations.

The program consists of:

* Prologue: load the inputs, given minimal bootstrapping, with possible test-overrides.
* Main content: process the L2 state-transition, i.e. derive the state changes from the L1 inputs.
* Epilogue: inspect the state changes to verify the claim.

#### Prologue

The program is bootstrapped with two primary inputs:

* `l1_head`: the L1 block hash that will be perceived as the tip of the L1 chain, authenticating all prior L1 history.
* `dispute`: identity of the claim to verify.

Bootstrapping happens through special input requests to the host of the program.

Additionally, there are *implied* inputs, which are *derived from the above primary inputs*, but can be overridden for testing purposes:

* `l2_head`: the L2 block hash that will be perceived as the previously agreed upon tip of the L2 chain, authenticating all prior L2 history.
* Chain configurations: chain configuration may be baked into the program, or determined from attributes of the identified `dispute` on L1.
  * `l1_chain_config`: the chain configuration of the L1 chain (also known as `l1_genesis.json`)
  * `l2_chain_config`: the chain configuration of the L2 chain (also known as `l2_genesis.json`)
  * `rollup_config`: the rollup configuration used by the rollup-node (also known as `rollup.json`)

The implied inputs rely on L1 introspection to load attributes of the `dispute` through the [dispute game interface](stage-one/dispute-game-interface.md), in the L1 history up to and including the specified `l1_head`. The `dispute` may be the claim itself, or a pointer to specific prior claimed data in L1, depending on the dispute game interface.

Implied inputs are loaded in a "prologue" before the actual core state-transition function executes. During testing, a simplified prologue that loads the overrides may be used.

> Note: only the test-prologues are currently supported, since the dispute game interface is actively changing.

#### Main content

To verify a claim about L2 state, the program first reproduces the L2 state by applying L1 data to prior agreed L2 history. This process is also known as the [L2 derivation process](../consensus/derivation.md), and matches the processing in the [rollup node](../consensus/index.md) and [execution-engine](../execution/index.md).

The difference is that rather than retrieving inputs from an RPC and applying state changes to disk, the inputs are loaded through the [pre-image oracle][oracle] and the changes accumulate in memory.

The derivation executes with two data-sources:

* Interface to read-only L1 chain, backed by the pre-image oracle:
  * The `l1_head` determines the view over the available L1 data: no later L1 data is available.
  * The implementation of the chain traverses the header-chain from the `l1_head` down to serve by-number queries.
  * The `l1_head` is the L1 unsafe head, safe head, and finalized head.
* Interface to L2 engine API:
  * Prior L2 chain history is backed by the pre-image oracle, similar to the L1 chain:
    * The initial `l2_head` determines the view over the initial available L2 history: no later L2 data is available.
    * The implementation of the chain traverses the header-chain from the `l2_head` down to serve by-number queries.
    * The `l2_head` is the initial L2 unsafe head, safe head, and finalized head.
  * New L2 chain history accumulates in memory:
    * Although the pre-image oracle can be used to retrieve data by hash if memory is limited, the program should prefer to keep the newly created chain data in memory, to minimize pre-image oracle access.
    * The L2 unsafe head, safe head, and finalized head will potentially change as derivation progresses.
  * L2 state consists of the diff of changes in memory, and any unchanged state nodes accessible through the read-only L2 history view.

See [Pre-image routes](#pre-image-hinting-routes) for specifications of the pre-image oracle backing of these data sources.

Using these data-sources, the derivation pipeline is processed until one of two conditions is hit:

* `EOF`: when we run out of L1 data, the L2 chain will not change further, and the epilogue can start.
* Eager epilogue condition: depending on the type of claim to verify, if the L2 result is irreversible (i.e. no later L1 inputs can override it), the processing may end early when the result is ready, e.g. when asserting state at a specific L2 block rather than the very tip of the L2 chain.

#### Epilogue

While the main content already produces the disputed L2 state, the epilogue concludes what this means for the disputed claim.

The program produces a binary output to verify the claim, using a standard single-byte Unix exit-code:

* a `0` for success, i.e. the claim is correct.
* a non-zero code for failure, i.e. the claim is incorrect.
  * `1` should be preferred for identifying an incorrect claim.
  * Other non-zero exit codes may indicate runtime failure, e.g.
a bug in the program code may result in a kind of `panic` or unexpected error. Safety should be preferred over liveness in this case, and the `claim` will fail.

To assert the disputed claim, the epilogue, like the main content, can introspect L1 and L2 chain data and post-process it further, to then make a statement about the claim with the final exit code.

A disputed output-root may be disproven by first producing the output-root, and then comparing it:

1. Retrieve the output attributes from the L2 chain view: the state-root, block-hash, withdrawals storage-root.
2. Compute the output-root, as the [proposer should compute it](proposer.md#l2-output-commitment-construction).
3. If the output-root matches the `claim`, exit with code 0. Otherwise, exit with code 1.

> Note: the dispute game interface is actively changing, and may require additional claim assertions.
> The output-root epilogue may be replaced or extended for general L2 message proving.

#### Pre-image hinting routes

The fault proof program implements hint handling for the VM to use, as well as any program testing outside of the VM environment. This can be exposed via a CLI, or an alternative inter-process API.

Every instance of `<blockhash>` in the routes below is `0x`-prefixed, lowercase, hex-encoded.

##### `l1-block-header <blockhash>`

Requests the host to prepare the L1 block header RLP pre-image of the block `<blockhash>`.

##### `l1-transactions <blockhash>`

Requests the host to prepare the list of transactions of the L1 block with `<blockhash>`: prepare the RLP pre-images of each of them, including transactions-list MPT nodes.

##### `l1-receipts <blockhash>`

Requests the host to prepare the list of receipts of the L1 block with `<blockhash>`: prepare the RLP pre-images of each of them, including receipts-list MPT nodes.

##### `l1-blob <hintdata>`

Requests the host to prepare EIP-4844 blob data for fault proof verification.
The hint data consists of 48 bytes concatenated together:

* Bytes 0-31: blob versioned hash (32 bytes): the SHA-256 hash of the KZG commitment, with the version byte prefix
* Bytes 32-39: blob index within the block (8-byte big-endian uint64)
* Bytes 40-47: L1 block timestamp (8-byte big-endian uint64)

The host will:

1. Fetch the blob from the L1 beacon chain using the timestamp and blob hash
2. Compute the KZG commitment and prepare it as a [SHA-256 preimage](#type-4-global-sha2-256-key)
3. Prepare all 4096 field elements of the blob as [Blob-type preimages](#type-5-global-eip-4844-point-evaluation-key), keyed by `keccak256(commitment || rootOfUnity[i])` for evaluation at the standard roots of unity

This hint is required for verifying transactions that use EIP-4844 blob data (post-Ecotone).

##### `l1-precompile-v2 <precompile ++ requiredGas ++ input>`

Requests the host to prepare the result of an L1 precompile call with gas validation. The hint data format:

* Bytes 0-19: precompile address (20 bytes)
* Bytes 20-27: required gas (8-byte big-endian uint64)
* Bytes 28+: input bytes

The host validates the precompile address against an allowlist of accelerated precompiles and prepares a [precompile-type preimage](#type-6-global-precompile-key) of the execution result. The `requiredGas` parameter allows the preimage oracle to enforce complete precompile execution.

This supersedes the earlier `l1-precompile <precompile ++ input>` format, which did not include gas validation.

##### `l2-block-header <blockhash> <chainID>?`

Requests the host to prepare the L2 block header RLP pre-image of the block `<blockhash>`. The `<chainID>` is optionally concatenated after the `<blockhash>` as a big-endian uint64 value to specify which L2 chain to retrieve data from. `<chainID>` must be specified when the interop hard fork is active.

##### `l2-transactions <blockhash> <chainID>?`

Requests the host to prepare the list of transactions of the L2 block with `<blockhash>`: prepare the RLP pre-images of each of them, including transactions-list MPT nodes. The `<chainID>` is optionally concatenated after the `<blockhash>` as a big-endian uint64 value to specify which L2 chain to retrieve data from. `<chainID>` must be specified when the interop hard fork is active.

##### `l2-receipts <blockhash> <chainID>`

Requests the host to prepare the list of receipts of the L2 block with `<blockhash>` for the specified `<chainID>`: prepare the RLP pre-images of each of them, including receipts-list MPT nodes. This hint is used only when the interop hard fork is active.

##### `l2-code <codehash> <chainID>?`

Requests the host to prepare the L2 smart-contract code with the given `<codehash>`. The `<chainID>` is optionally concatenated after the `<codehash>` as a big-endian uint64 value to specify which L2 chain to retrieve data from. `<chainID>` must be specified when the interop hard fork is active.

##### `l2-state-node <nodehash> <chainID>?`

Requests the host to prepare the L2 MPT node preimage with the given `<nodehash>`.
The `<chainID>` is optionally concatenated after the `<nodehash>` as a big-endian uint64 value to specify which L2 chain to retrieve data from. `<chainID>` must be specified when the interop hard fork is active.

##### `l2-output <outputroot> <chainID>?`

Requests the host to prepare the L2 Output at the L2 output root `<outputroot>`. The L2 Output is the preimage of a [computed output root](proposer.md#l2-output-commitment-construction). The `<chainID>` is optionally concatenated after the `<outputroot>` as a big-endian uint64 value to specify which L2 chain to retrieve data from. `<chainID>` must be specified when the interop hard fork is active.

##### `l2-payload-witness <json>`

Requests the host to prepare all preimages used in the building of the payload specified by `<json>`. `<json>` is a JSON object with the fields `parentBlockHash`, `payloadAttributes`, and optionally `chainID`. The `chainID` must be specified when the interop hard fork is active.

##### `l2-account-proof <hintdata>`

Requests the host to send the account proof for a certain block hash and address. `<hintdata>` is hex-encoded: 32-byte block hash + 20-byte address + 8-byte big-endian chain ID.

`l2-payload-witness` and `l2-account-proof` hints are preferred over the more granular `l2-code` and `l2-state-node` hints, and they should be sent before the more granular hints to ensure proper handling.

##### `l2-block-data <hintdata>`

Requests the host to prepare all preimages used in the building of the block specified by `<hintdata>`. `<hintdata>` is a hex-encoded concatenation of the following:

* 32-byte parent block hash
* 32-byte block hash of the block to be prepared
* 8-byte big-endian chain ID

This hint is used only when the interop hard fork is active.

#### Precompile Accelerators

Precompiles that are too expensive to be executed in a fault-proof VM can be executed more efficiently using the pre-image oracle. This approach ensures that the fault proof program can complete a state transition in a reasonable amount of time.

During program execution, the precompiles are substituted with interactions with the pre-image oracle.
The program hints the host with a precompile input, and then retrieves the result of the precompile operation using the [type 6 global precompile key](#type-6-global-precompile-key). All accelerated precompiles must be functionally equivalent to their EVM counterparts.

### Fault Proof VM

[VM]: #fault-proof-vm

A fault proof VM implements:

* a smart-contract to verify a single execution-trace step, e.g. a single MIPS instruction.
* a CLI command to generate a proof of a single execution-trace step.
* a CLI command to compute a VM state-root at step N.

A fault proof VM relies on a fault proof program to provide an interface for fetching any missing pre-images based on hints.

The VM emulates the program, as prepared for the VM target architecture, and generates the state-root or instruction proof data as requested through the VM CLI. Refer to the documentation of the fault proof VM for further usage information.

Fault Proof VMs:

* [Cannon]: big-endian 64-bit MIPS64 architecture, by OP Labs, in active development.
* [cannon-rs]: Rust implementation of `Cannon`, by `@clabby`, deprecated.
* [Asterisc]: little-endian 64-bit RISC-V architecture, by `@protolambda`, in active development.

[Cannon]: https://github.com/ethereum-optimism/cannon
[cannon-rs]: https://github.com/anton-rs/cannon-rs
[Asterisc]: https://github.com/protolambda/asterisc

### Fault Proof Interactive Dispute Game

[Interactive Dispute Game]: #fault-proof-interactive-dispute-game

The interactive dispute game allows actors to resolve a dispute with an onchain challenge-response game that bisects to a disagreed block $n \rightarrow n + 1$ state transition, and then over the execution trace of the VM which models this state transition, bounded with a base-case that proves a single VM trace step.

The game is multi-player: different non-aligned actors may participate when bonded.

Response time is allocated based on the remaining time in the branch of the tree and alignment with the claim.
The allocated response time is limited by the dispute-game window, and any additional time necessary based on L1 fee changes when bonds are insufficient.

> Note: the timed, bonded, bisection dispute game is in development.
> Also see the [fault dispute-game specs](stage-one/fault-dispute-game.md) for fault dispute game system specifications,
> and the [dispute-game-interface specs](stage-one/dispute-game-interface.md).

## Proposer

[g-rollup-node]: ../../reference/glossary.md#rollup-node
[g-mpt]: ../../reference/glossary.md#merkle-patricia-trie
[header-withdrawals-root]: ../../upgrades/isthmus/exec-engine.md#l2tol1messagepasser-storage-root-in-header

### Overview

After processing one or more blocks, the outputs will need to be synchronized with the settlement layer (L1) for trustless execution of L2-to-L1 messaging, such as withdrawals. These output proposals act as the bridge's view into the L2 state. Actors called "Proposers" submit the output roots to the settlement layer (L1), and these can be contested with a proof, with a bond at stake if the proof is wrong. The [op-proposer][op-proposer] is one such implementation of a proposer.

[op-proposer]: https://github.com/ethereum-optimism/optimism/tree/d48b45954c381f75a13e61312da68d84e9b41418/op-proposer
[cannon]: https://github.com/ethereum-optimism/cannon

### Proposing L2 Output Commitments

The proposer's role is to construct and submit output roots, which are commitments to the L2's state, to the `L2OutputOracle` contract on L1 (the settlement layer). To do this, the proposer periodically queries the [rollup node](../consensus/index.md) for the latest output root derived from the latest [finalized](../consensus/index.md#finalization-guarantees) L1 block. It then takes the output root and submits it to the `L2OutputOracle` contract on the settlement layer (L1).

#### L2OutputOracle v1.0.0

The submission of output proposals is permissioned to a single account.
It is expected that this account will continue to submit output proposals over time to ensure that user withdrawals do not halt.

The [L2 output proposer][op-proposer] is expected to submit output roots on a deterministic interval based on the configured `SUBMISSION_INTERVAL` in the `L2OutputOracle`. The larger the `SUBMISSION_INTERVAL`, the less often L1 transactions need to be sent to the `L2OutputOracle` contract, but the longer L2 users will need to wait for an output root that includes their intention to withdraw from the system to be included on L1 (the settlement layer).

The honest `op-proposer` algorithm assumes a connection to the `L2OutputOracle` contract to know the L2 block number that corresponds to the next output proposal that must be submitted. It also assumes a connection to an `op-node` to be able to query the `optimism_syncStatus` RPC endpoint.

```python
import time

while True:
    next_checkpoint_block = L2OutputOracle.nextBlockNumber()
    rollup_status = op_node_client.sync_status()
    if rollup_status.finalized_l2.number >= next_checkpoint_block:
        output = op_node_client.output_at_block(next_checkpoint_block)
        tx = send_transaction(output)
    time.sleep(poll_interval)
```

A `CHALLENGER` account can delete multiple output roots by calling the `deleteL2Outputs()` function and specifying the index of the first output to delete; this will also delete all subsequent outputs.

### L2 Output Commitment Construction

The `output_root` is a 32-byte value derived with a versioned scheme:

```pseudocode
output_root = keccak256(version_byte || payload)
```

where:

1. `version_byte` (`bytes32`) is a simple version identifier which increments anytime the construction of the output root is changed.
2. `payload` (`bytes`) is a byte string of arbitrary length.
In the initial version of the output commitment construction, the version is `bytes32(0)`, and the payload is defined as:

```pseudocode
payload = state_root || withdrawal_storage_root || latest_block_hash
```

where:

1. The `latest_block_hash` (`bytes32`) is the block hash for the latest L2 block.
2. The `state_root` (`bytes32`) is the Merkle-Patricia-Trie ([MPT][g-mpt]) root of all execution-layer accounts. This value is frequently used and thus elevated closer to the L2 output root, which removes the need to prove its inclusion in the pre-image of the `latest_block_hash`. This reduces the merkle proof depth and the cost of accessing the L2 state root on L1.
3. The `withdrawal_storage_root` (`bytes32`) elevates the Merkle-Patricia-Trie ([MPT][g-mpt]) root of the [Message Passer contract](../bridging/withdrawals.md#the-l2tol1messagepasser-contract) storage. Instead of making an MPT proof for a withdrawal against the state root (proving first the storage root of the L2ToL1MessagePasser against the state root, then the withdrawal against that storage root), we can prove against the L2ToL1MessagePasser's storage root directly, thus reducing the verification cost of withdrawals on L1.

After the Isthmus hard fork, the `withdrawal_storage_root` is present in the [block header as `withdrawalsRoot`][header-withdrawals-root] and can be used directly, instead of computing the storage root of the L2ToL1MessagePasser contract. Similarly, if the Isthmus hard fork is active at the genesis block, the `withdrawal_storage_root` is present in the [block header as `withdrawalsRoot`][header-withdrawals-root].

### L2 Output Oracle Smart Contract

L2 blocks are produced at a constant rate of `L2_BLOCK_TIME` (2 seconds). A new L2 output MUST be appended to the chain once per `SUBMISSION_INTERVAL`, which is based on a number of blocks. The exact number is yet to be determined, and will depend on the design of the fault proving game.
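The output-root construction defined above can be sketched in Python. Note that `hashlib.sha3_256` is only a stand-in here: Ethereum's `keccak256` uses a different padding rule than standardized SHA3-256, so a real implementation must use a keccak library (e.g. `pycryptodome` or `eth-hash`); the field layout is the point of the sketch:

```python
import hashlib

def output_root(state_root: bytes, withdrawal_storage_root: bytes,
                latest_block_hash: bytes, version: bytes = bytes(32)) -> bytes:
    """output_root = keccak256(version_byte || payload), with
    payload = state_root || withdrawal_storage_root || latest_block_hash.
    sha3_256 is a STAND-IN for keccak256 (different padding rule)."""
    for field in (version, state_root, withdrawal_storage_root, latest_block_hash):
        assert len(field) == 32, "all fields are bytes32"
    payload = state_root + withdrawal_storage_root + latest_block_hash
    return hashlib.sha3_256(version + payload).digest()

root = output_root(bytes(32), bytes(32), bytes(32))
assert len(root) == 32
```

Because the three payload fields are fixed 32-byte values, a verifier on L1 can slice a claimed payload at fixed offsets rather than parsing variable-length data.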
The L2 Output Oracle contract implements the following interface:

```solidity
/**
 * @notice The number of the first L2 block recorded in this contract.
 */
uint256 public startingBlockNumber;

/**
 * @notice The timestamp of the first L2 block recorded in this contract.
 */
uint256 public startingTimestamp;

/**
 * @notice Accepts an L2 outputRoot and the timestamp of the corresponding L2 block. The
 *         timestamp must be equal to the current value returned by `nextTimestamp()` in
 *         order to be accepted. This function may only be called by the Proposer.
 *
 * @param _l2Output      The L2 output of the checkpoint block.
 * @param _l2BlockNumber The L2 block number that resulted in _l2Output.
 * @param _l1Blockhash   A block hash which must be included in the current chain.
 * @param _l1BlockNumber The block number with the specified block hash.
 */
function proposeL2Output(
    bytes32 _l2Output,
    uint256 _l2BlockNumber,
    bytes32 _l1Blockhash,
    uint256 _l1BlockNumber
)

/**
 * @notice Deletes all output proposals after and including the proposal that corresponds to
 *         the given output index. Only the challenger address can delete outputs.
 *
 * @param _l2OutputIndex Index of the first L2 output to be deleted. All outputs after this
 *                       output will also be deleted.
 */
function deleteL2Outputs(uint256 _l2OutputIndex) external

/**
 * @notice Computes the block number of the next L2 block that needs to be checkpointed.
 */
function nextBlockNumber() public view returns (uint256)
```

#### Configuration

The `startingBlockNumber` must be at least the number of the first Bedrock block. The `startingTimestamp` MUST be the same as the timestamp of the start block. The first `outputRoot` proposed will thus be at height `startingBlockNumber + SUBMISSION_INTERVAL`.

### Security Considerations

#### L1 Reorgs

If the L1 has a reorg after an output has been generated and submitted, the L2 state and correct output may change, leading to a faulty proposal.
This is mitigated against by allowing the proposer to submit an L1 block number and hash to the Output Oracle when appending a new output; in the event of a reorg, the block hash will not match that of the block with that number, and the call will revert.

## AnchorStateRegistry

### Overview

The `AnchorStateRegistry` was designed as a registry where `DisputeGame` contracts could store and register their results, so that these results could be used as the starting states for new `DisputeGame` instances. These starting states, called "anchor states", allow new `DisputeGame` contracts to use a newer starting state to bound the size of the execution trace for any given game.

We are generally aiming to shift the `AnchorStateRegistry` to act as a unified source of truth for the validity of `DisputeGame` contracts and their corresponding root claims. This specification corresponds to the first iteration of the `AnchorStateRegistry` that will move us in this direction.

### Definitions

#### Dispute Game

> See [Fault Dispute Game](fault-dispute-game.md)

A Dispute Game is a smart contract that makes a determination about the validity of some claim. In the context of Base, the claim is generally assumed to be a claim about the value of an output root at a given L2 block height. We assume that all Dispute Game contracts using the same `AnchorStateRegistry` contract are arguing over the same underlying state/claim structure.

#### Respected Game Type

The `AnchorStateRegistry` contract defines a **Respected Game Type**, which is the Dispute Game type that is considered correct by the `AnchorStateRegistry` and, by extension, other contracts that may rely on the assertions made within the `AnchorStateRegistry`. The Respected Game Type is, in a more general sense, a game type that the system believes will resolve correctly. For now, the `AnchorStateRegistry` only allows a single Respected Game Type.
#### Dispute Game Finality Delay (Airgap)

The **Dispute Game Finality Delay**, or **Airgap**, is the amount of time that must elapse after a game resolves before the game's result is considered "final".

#### Registered Game

A Dispute Game is considered to be a **Registered Game** if the game contract was created by the system's `DisputeGameFactory` contract.

#### Respected Game

A Dispute Game is considered to be a **Respected Game** if the game contract's game type **was** the Respected Game Type defined by the `AnchorStateRegistry` contract at the time of the game's creation. Games that are not Respected Games cannot be used as an Anchor Game. See [Respected Game Type](#respected-game-type) for more information.

#### Blacklisted Game

A Dispute Game is considered to be a **Blacklisted Game** if the game contract's address is marked as blacklisted inside of the `AnchorStateRegistry` contract.

#### Retirement Timestamp

The **Retirement Timestamp** is a timestamp value maintained within the `AnchorStateRegistry` that can be used to invalidate games. Games with a creation timestamp less than or equal to the Retirement Timestamp are automatically considered invalid. The Retirement Timestamp has the effect of retiring all games created before the specific transaction in which the Retirement Timestamp was set. This includes all games created in the same block as the transaction that set the Retirement Timestamp. We acknowledge the edge case that games created in the same block *after* the Retirement Timestamp was set will be considered Retired Games, even though they were technically created "after" the Retirement Timestamp was set.

#### Retired Game

A Dispute Game is considered to be a **Retired Game** if the game contract was created with a timestamp less than or equal to the [Retirement Timestamp](#retirement-timestamp).
#### Proper Game

A Dispute Game is considered to be a **Proper Game** if it has not been invalidated through any of the mechanisms defined by the `AnchorStateRegistry` contract. A Proper Game is, in a sense, a "clean" game that exists in the set of games that are playing out correctly in a bug-free manner. A Dispute Game can be a Proper Game even if it has not yet resolved or resolves in favor of the Challenger.

A Dispute Game that is **NOT** a Proper Game can also be referred to as an **Improper Game** for brevity. A Dispute Game can go from being a Proper Game to later being an **Improper Game** if it is invalidated by being [blacklisted](#blacklisted-game) or [retired](#retired-game).

**ALL** Dispute Games **TEMPORARILY** become Improper Games while the Pause Mechanism is active. However, this is a *temporary* condition, such that Registered Games that are not invalidated by [blacklisting](#blacklisted-game) or [retirement](#retired-game) will become Proper Games again once the pause is lifted. The Pause Mechanism is therefore a way to *temporarily* prevent Dispute Games from being used by consumers like the `OptimismPortal` while relevant parties coordinate the use of some other invalidation mechanism.

A Game is considered to be a Proper Game if all of the following are true:

* The game is a [Registered Game](#registered-game)
* The game is **NOT** a [Blacklisted Game](#blacklisted-game)
* The game is **NOT** a [Retired Game](#retired-game)
* The Pause Mechanism is not active

#### Resolved Game

A Dispute Game is considered to be a **Resolved Game** if the game has resolved a result in favor of either the Challenger or the Defender.
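The Proper Game conditions can be summarized as a predicate. This is a hypothetical model for illustration only; the field and function names are not part of any contract ABI:

```python
from dataclasses import dataclass

@dataclass
class Game:
    registered: bool    # created by the system's DisputeGameFactory
    blacklisted: bool   # marked blacklisted in the AnchorStateRegistry
    created_at: int     # creation timestamp (seconds)

def is_proper_game(game: Game, retirement_timestamp: int, paused: bool) -> bool:
    """A game is Proper iff it is registered, not blacklisted, not retired
    (created at or before the Retirement Timestamp), and the pause is inactive."""
    retired = game.created_at <= retirement_timestamp
    return game.registered and not game.blacklisted and not retired and not paused

game = Game(registered=True, blacklisted=False, created_at=100)
assert is_proper_game(game, retirement_timestamp=50, paused=False)
assert not is_proper_game(game, retirement_timestamp=100, paused=False)  # retired
assert not is_proper_game(game, retirement_timestamp=50, paused=True)    # paused
```

Note the `<=` comparison: a game created at exactly the Retirement Timestamp counts as retired, matching the inclusive bound in the Retirement Timestamp definition.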
#### Finalized Game

A Dispute Game is considered to be a **Finalized Game** if all of the following are true:

* The game is a [Resolved Game](#resolved-game)
* The game resolved its result more than [Dispute Game Finality Delay](#dispute-game-finality-delay-airgap) seconds ago, as defined by the `disputeGameFinalityDelaySeconds` variable in the `AnchorStateRegistry` contract

#### Valid Claim

A Dispute Game is considered to have a **Valid Claim** if all of the following are true:

* The game is a [Proper Game](#proper-game)
* The game is a [Respected Game](#respected-game)
* The game is a [Finalized Game](#finalized-game)
* The game resolved in favor of the root claim (i.e., in favor of the Defender)

#### Truly Valid Claim

A **Truly Valid Claim** is a claim that accurately represents the correct root for its L2 block height, as would be reported by a perfect oracle for the L2 system state.

#### Starting Anchor State

The **Starting Anchor State** is the anchor state (root and L2 block height) that is used as the starting state for new Dispute Game instances when there is no current Anchor Game. The Starting Anchor State is set during the initialization of the `AnchorStateRegistry` contract.

#### Anchor Game

The **Anchor Game** is a game whose claim is used as the starting state for new Dispute Game instances. A Game can become the Anchor Game if it has a Valid Claim and the claim's L2 block height is greater than that of the current Anchor Game's claim. If there is no current Anchor Game, a Game can become the Anchor Game if it has a Valid Claim and the claim's L2 block height is greater than the current Starting Anchor State's L2 block height. After a Game becomes the Anchor Game, it will remain the Anchor Game until it is replaced by some other Game. A Game that is retired after becoming the Anchor Game will remain the Anchor Game.

#### Anchor Root

The **Anchor Root** is the root and L2 block height that is used as the starting state for new Dispute Game instances.
The value of the Anchor Root is the Starting Anchor State if no Anchor Game has been set. Otherwise, the value of the Anchor Root is the root and L2 block height of the current Anchor Game.

### Assumptions

> **NOTE:** Assumptions are utilized by specific invariants and do not apply globally. Invariants
> typically only rely on a subset of the following assumptions. Different invariants may rely on
> different assumptions. Refer to individual invariants for their dependencies.

#### aASR-001: Dispute Game contracts properly report important properties

We assume that the `FaultDisputeGame` and `PermissionedDisputeGame` contracts properly and faithfully report the following properties:

* Game type
* L2 block number
* Root claim value
* Game extra data
* Creation timestamp
* Resolution timestamp
* Resolution result
* Whether the game was the respected game type at creation

We also specifically assume that the game creation timestamp and the resolution timestamp are not set to values in the future.

##### Mitigations

* Existing audit on the `FaultDisputeGame` contract
* Integration testing

#### aASR-002: DisputeGameFactory properly reports its created games

We assume that the `DisputeGameFactory` contract properly and faithfully reports the games it has created.

##### Mitigations

* Existing audit on the `DisputeGameFactory` contract
* Integration testing

#### aASR-003: Incorrectly resolving games will be invalidated before they have Valid Claims

We assume that any games that resolve incorrectly will be invalidated, either by [blacklisting](#blacklisted-game) or by [retirement](#retired-game), BEFORE they are considered to have [Valid Claims](#valid-claim). Proper Games that resolve in favor of the Defender will be considered to have Valid Claims after the [Dispute Game Finality Delay](#dispute-game-finality-delay-airgap) has elapsed, UNLESS the Pause Mechanism is active.
Therefore, in the absence of the Pause Mechanism, parties responsible for game invalidation have exactly the Dispute Game Finality Delay to invalidate a game after it resolves incorrectly. If the Pause Mechanism is active, then any incorrectly resolving games must be invalidated before the pause is deactivated.

##### Mitigations

* Stakeholder incentives / processes
* Incident response plan
* Monitoring

### Invariants

#### iASR-001: Games are represented as Proper Games accurately

When asked if a game is a Proper Game, the `AnchorStateRegistry` must serve a response that is identical to the response that would be given by a perfect oracle for this query.

##### Impact

**Severity: High**

If this invariant is broken, the Anchor Game could be set to an incorrect value, which would cause future Dispute Game instances to use an incorrect starting state. This would lead games to resolve incorrectly. Additionally, this could cause a `FaultDisputeGame` to choose the wrong bond refunding mode.

##### Dependencies

* [aASR-001](#aasr-001-dispute-game-contracts-properly-report-important-properties)
* [aASR-002](#aasr-002-disputegamefactory-properly-reports-its-created-games)
* [aASR-003](#aasr-003-incorrectly-resolving-games-will-be-invalidated-before-they-have-valid-claims)

#### iASR-002: All Valid Claims are Truly Valid Claims

When asked if a game has a Valid Claim, the `AnchorStateRegistry` must serve a response that is identical to the response that would be given by a perfect oracle for this query. However, it is important to note that we do NOT say that all Truly Valid Claims are Valid Claims. It is possible that a game has a Truly Valid Claim but the `AnchorStateRegistry` reports that the claim is not a Valid Claim. This permits the `AnchorStateRegistry` and system-wide safety net actions to err on the side of caution. In a nutshell, the set of Valid Claims is a subset of the set of Truly Valid Claims.
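The one-directional relationship described above (every Valid Claim is Truly Valid, but not vice versa) can be illustrated with a small sketch. The `Game` record and helper names here are hypothetical, not the contract interface; the predicate simply composes the Proper/Respected/Finalized/Defender-wins conditions from the definitions earlier in this document:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Game:
    registered: bool            # created by the system's DisputeGameFactory
    blacklisted: bool
    retired: bool
    respected_at_creation: bool
    resolved_at: Optional[int]  # resolution timestamp, None if unresolved
    defender_wins: bool

def is_valid_claim(game: Game, now: int, finality_delay: int, paused: bool) -> bool:
    """Conservative Valid Claim check: it may report False for a Truly Valid
    Claim (e.g. while the system is paused), but must never report True for
    a claim that is not Truly Valid."""
    proper = (game.registered and not game.blacklisted
              and not game.retired and not paused)
    finalized = (game.resolved_at is not None
                 and now - game.resolved_at > finality_delay)
    return proper and game.respected_at_creation and finalized and game.defender_wins
```

Note how the pause input can only flip the result from `True` to `False`, which is exactly the "err on the side of caution" direction the invariant permits.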
##### Impact **Severity: Critical** If this invariant is broken, then any component that relies on the correctness of this function may allow actions to occur based on invalid dispute games. Some examples of strong negative impact are: * Invalid Dispute Game could be used as the Anchor Game, which would cause future Dispute Game instances to use an incorrect starting state. This would lead these games to resolve incorrectly. **(HIGH)** * Invalid Dispute Game could be used to prove or finalize withdrawals within the `OptimismPortal` contract. This would lead to a critical vulnerability in the bridging system. **(CRITICAL)** ##### Dependencies * [aASR-001](#aasr-001-dispute-game-contracts-properly-report-important-properties) * [aASR-002](#aasr-002-disputegamefactory-properly-reports-its-created-games) * [aASR-003](#aasr-003-incorrectly-resolving-games-will-be-invalidated-before-they-have-valid-claims) #### iASR-003: The Anchor Game is a Truly Valid Claim We require that the Anchor Game is a Truly Valid Claim. This makes it possible to use the Anchor Game as the starting state for new Dispute Game instances. Notably, given the allowance that not all Truly Valid Claims are Valid Claims, this invariant does not imply that the Anchor Game is a Valid Claim. We allow retired games to be used as the Anchor Game because the retirement mechanism is broad in a way that commonly causes Truly Valid Claims to no longer be considered Valid Claims. We allow both blacklisted games and retired games to remain the Anchor Game if they are already the Anchor Game. This is because we assume games that become the Anchor Game would be invalidated *before* becoming the Anchor Game. After the game becomes the Anchor Game, it would be possible to use that game to execute withdrawals from the system, which would already be a critical bug in the system. 
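The Anchor Game replacement rule described above can be sketched as follows. The names (`Anchor`, `update_anchor`) are illustrative, not the contract's API; the validity of the candidate's claim is passed in as a precomputed flag:

```python
from typing import Optional

class Anchor:
    """Minimal anchor record: output root plus L2 block height (illustrative)."""
    def __init__(self, root: str, l2_height: int):
        self.root = root
        self.l2_height = l2_height

def update_anchor(candidate: Anchor, candidate_is_valid_claim: bool,
                  current_anchor: Optional[Anchor], starting_height: int) -> Anchor:
    """Sketch of the anchor update rule: accept the candidate only if it has a
    Valid Claim and strictly advances the anchored L2 block height (falling
    back to the Starting Anchor State's height when no Anchor Game is set)."""
    if not candidate_is_valid_claim:
        raise ValueError("candidate game does not have a Valid Claim")
    floor = current_anchor.l2_height if current_anchor else starting_height
    if candidate.l2_height <= floor:
        raise ValueError("candidate does not advance the anchor height")
    return candidate
```

Because validity is checked only at replacement time, a game that is later blacklisted or retired keeps its anchor position, matching the behavior described above.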
##### Impact

**Severity: High**

If this invariant is broken, an invalid Anchor Game could be used as the starting state for new Dispute Game instances. This would lead games to resolve incorrectly.

##### Dependencies

* [aASR-001](#aasr-001-dispute-game-contracts-properly-report-important-properties)
* [aASR-002](#aasr-002-disputegamefactory-properly-reports-its-created-games)
* [aASR-003](#aasr-003-incorrectly-resolving-games-will-be-invalidated-before-they-have-valid-claims)

#### iASR-004: Invalidation functions operate correctly

We require that the blacklisting and retirement functions operate correctly. Games that are blacklisted must not be used as the Anchor Game, must not be considered to have Valid Claims, and must not be usable to prove or finalize withdrawals. Any game created before a transaction that updates the retirement timestamp must not be set as the Anchor Game, must not be considered to have a Valid Claim, and must not be usable to prove or finalize withdrawals.

##### Impact

**Severity: High/Critical**

If this invariant is broken, the Anchor Game could be set to an incorrect value, which would cause future Dispute Game instances to use an incorrect starting state. This would lead games to resolve incorrectly and would be considered a High Severity issue. Issues that would allow users to finalize withdrawals with invalidated games would be considered Critical Severity.

##### Dependencies

* [aASR-003](#aasr-003-incorrectly-resolving-games-will-be-invalidated-before-they-have-valid-claims)

#### iASR-005: The Anchor Game is recent enough to be fault provable

We require that the Anchor Game corresponds to an L2 block with an L1 origin timestamp that is no older than 6 months from the current timestamp. This time constraint is necessary because the fault proof VM must walk backwards through L1 blocks to verify derivation, and processing 7 months' worth of L1 blocks approaches the maximum time available to challengers in the dispute game process.
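The recency requirement can be expressed as a simple timestamp comparison. This is a sketch, not the contract code; the six-month window is approximated here in seconds, and the contract may encode the bound differently:

```python
SIX_MONTHS = 6 * 30 * 24 * 60 * 60  # ~six months in seconds (approximation)

def anchor_recent_enough(anchor_l1_origin_ts: int, now: int) -> bool:
    """iASR-005 sketch: the Anchor Game's L2 block must have an L1 origin
    timestamp no older than six months, keeping the span of L1 blocks the
    fault proof VM must re-process within challengers' response time."""
    return now - anchor_l1_origin_ts <= SIX_MONTHS
```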
##### Impact

**Severity: High**

If this invariant is broken, challengers will be unable to participate in fault proofs within the allotted response time, and resolution would require intervention from the Proxy Admin Owner.

### Function Specification

#### constructor

* MUST set the value of the [Dispute Game Finality Delay](#dispute-game-finality-delay-airgap).

#### initialize

* MUST only be callable by the ProxyAdmin or its owner.
* MUST only be triggerable once.
* MUST set the value of the `SystemConfig` contract that stores the address of the Guardian.
* MUST set the value of the `DisputeGameFactory` contract that creates Dispute Game instances.
* MUST set the value of the [Starting Anchor State](#starting-anchor-state).
* MUST set the value of the initial [Respected Game Type](#respected-game-type).
* MUST set the value of the [Retirement Timestamp](#retirement-timestamp) to the current block timestamp. NOTE that this is a safety mechanism that invalidates all existing Dispute Game contracts to support the safe transition away from the `OptimismPortal` as the source of truth for game validity. In this way, the `AnchorStateRegistry` does not need to consider the state of the legacy blacklisting/retirement mechanisms within the `OptimismPortal` and starts from a clean slate.

#### paused

Returns the value of `paused()` from the `SystemConfig` contract.

#### respectedGameType

Returns the value of the current [Respected Game Type](#respected-game-type).

#### retirementTimestamp

Returns the value of the current [Retirement Timestamp](#retirement-timestamp).

#### disputeGameFinalityDelaySeconds

Returns the value of the [Dispute Game Finality Delay](#dispute-game-finality-delay-airgap).

#### setRespectedGameType

Permits the Guardian role to set the [Respected Game Type](#respected-game-type).

* MUST revert if called by any address other than the Guardian.
* MUST update the respected game type with the provided type.
* MUST emit an event showing that the game type was updated.

#### updateRetirementTimestamp

Permits the Guardian role to update the [Retirement Timestamp](#retirement-timestamp).

* MUST revert if called by any address other than the Guardian.
* MUST set the retirement timestamp to the current block timestamp.
* MUST emit an event showing that the retirement timestamp was updated.

#### blacklistDisputeGame

Permits the Guardian role to [blacklist](#blacklisted-game) a Dispute Game.

* MUST revert if called by any address other than the Guardian.
* MUST mark the game as blacklisted.
* MUST emit an event showing that the game was blacklisted.

#### isGameRegistered

Determines if a game is a Registered Game.

* MUST return `true` if and only if the game was created by the system's `DisputeGameFactory` contract AND the game's `AnchorStateRegistry` address matches the address of this contract.

#### isGameRespected

Determines if a game is a Respected Game.

* MUST return `true` if and only if the game's game type was the respected game type defined by the `AnchorStateRegistry` contract at the time of the game's creation, as per a call to `AnchorStateRegistry.respectedGameType()`.

#### isGameBlacklisted

Determines if a game is a Blacklisted Game.

* MUST return `true` if and only if the game's address is marked as blacklisted inside of the `AnchorStateRegistry` contract.

#### isGameRetired

Determines if a game is a Retired Game.

* MUST return `true` if and only if the game was created before or at the retirement timestamp defined by the `AnchorStateRegistry` contract, as per a call to `AnchorStateRegistry.retirementTimestamp()`. We check for less than or equal to the retirement timestamp so that games created in the same block as, but before, the transaction that set the retirement timestamp are still invalidated.
Note that this has the side effect of also invalidating any games created in the same block *after* the retirement timestamp was set, but this is an acceptable tradeoff.

#### isGameProper

Determines if a game is a Proper Game.

* MUST return `true` if and only if `isGameRegistered(game)` is `true`, `isGameBlacklisted(game)` and `isGameRetired(game)` are both `false`, and `paused()` is `false`.

#### isGameResolved

Determines if a game is a Resolved Game.

* MUST return `true` if and only if the game has resolved a result in favor of either the Challenger or the Defender as determined by the `FaultDisputeGame.status()` function.

#### isGameFinalized

Determines if a game is a Finalized Game.

* MUST return `true` if and only if `isGameResolved(game)` is `true` and the game resolved a result more than the airgap delay seconds ago, as defined by the `disputeGameFinalityDelaySeconds` variable in the `AnchorStateRegistry` contract.

#### isGameClaimValid

Determines if a game has a Valid Claim.

* MUST return `true` if and only if `isGameProper(game)` is `true`, `isGameRespected(game)` is `true`, `isGameFinalized(game)` is `true`, and the game resolved in favor of the root claim (i.e., in favor of the Defender).

#### getAnchorRoot

Retrieves the current anchor root.

* MUST return the root hash and L2 block height of the current anchor state.

#### anchors

Legacy function. Accepts a game type as a parameter but does not use it.

* MUST return the current value of `getAnchorRoot()`.

#### setAnchorState

Allows any address to attempt to update the Anchor Game with a new Game as input.

* MUST revert if the provided game does not have a Valid Claim for any reason.
* MUST revert if the provided game corresponds to an L2 block height that is less than or equal to the current anchor state's L2 block height.
* MUST otherwise update the anchor state to match the game's result.

## Bond Incentives

### Overview

Bonds are an add-on to the core [Fault Dispute Game](fault-dispute-game.md).
The core game mechanics are designed to ensure that honesty is the best response for winning subgames. By introducing financial incentives, Bonds make it worthwhile for honest challengers to participate. Without the bond reward incentive, the FDG would be too costly for honest players to participate in, given the cost of verifying and making claims.

Implementations may allow the FDG to directly receive bonds, or delegate this responsibility to another entity. Regardless, there must be a way for the FDG to query and distribute bonds linked to a claim.

Bonds are integrated into the FDG in two areas:

* Moves
* Subgame Resolution

### Moves

Moves must be adequately bonded to be added to the FDG. This document does not specify a scheme for determining the minimum bond requirement. FDG implementations should define a function computing the minimum bond requirement with the following signature:

```solidity
function getRequiredBond(Position _movePosition) public pure returns (uint256 requiredBond_)
```

As such, attacking or defending requires a check for the `getRequiredBond()` amount against the bond attached to the move. To incentivize participation, the minimum bond should cover the cost of a possible counter to the move being added. Thus, the minimum bond depends only on the position of the move that's added.

### Subgame Resolution

If a subgame root resolves incorrectly, then its bond is distributed to the **leftmost claimant** that countered it. This creates an incentive to identify the earliest point of disagreement in an execution trace. The subgame root claimant gets back its bond iff it resolves correctly.

At maximum game depths, where a claimant counters a bonded claim via `step`, the bond is instead distributed to the account that successfully called `step`.

#### Leftmost Claim Incentives

There exist defensive positions that cannot be countered, even if they hold invalid claims.
These positions are located on the same level as honest claims, but are situated to their right (i.e., their gindex is greater than the honest claim's). An honest challenger can always successfully dispute any sibling claims not positioned to the right of an honest claim. The leftmost payoff rule encourages such disputes, ensuring only one claim is leftmost at correct depths. This claim will be the honest one, and thus bond rewards will be directed exclusively to honest claims.

### Fault Proof Mainnet Incentives

This section describes the specific bond incentives to be used for the Fault Proof Mainnet launch of the Base fault proof system.

#### Authenticated Roles

| Name | Description |
| ------------ | ----------------------------------------------------------------------------------------------------- |
| Guardian | Role responsible for blacklisting dispute game contracts and changing the respected dispute game type |
| System Owner | Role that owns the `ProxyAdmin` contract that in turn owns most `Proxy` contracts within Base |

#### Base Fee Assumption

FPM bonds assume a fixed 200 Gwei base fee. Future iterations of the fault proof may include a dynamic base fee calculation. For the moment, we suppose that the `Guardian` address may account for increased average base fees by updating the `OptimismPortal` contract to a new respected game type with a higher assumed base fee.

#### Bond Scaling

FPM bonds are priced in the amount of gas that they are intended to cover. Bonds start at the very first depth of the game at a baseline of `400_000` gas. The `400_000` value is chosen as a deterrence amount that is approximately double the cost to respond at the top level. Bonds scale up to a value of `300_000_000` gas, a value chosen to cover approximately double the cost of a max-size Large Preimage Proposal. We use a multiplicative scaling mechanism to guarantee that the ratio between bonds remains constant. We determine the multiplier based on the proposed `MAX_DEPTH` of 73.
We can use the formula `x = (300_000_000 / 400_000) ** (1 / 73)` to determine that `x = 1.09493`. At each depth `N`, the amount of gas charged is therefore `400_000 * (1.09493 ** N)`.

Below is a diagram demonstrating this curve for a max depth of 73.

![bond scaling curve](https://github.com/ethereum-optimism/specs/assets/14298799/b381037b-193d-42c5-9a9c-9cc5f43b255f)

#### Required Bond Formula

Applying the [Base Fee Assumption](#base-fee-assumption) and [Bond Scaling](#bond-scaling) specifications, we have a `getRequiredBond` function:

```python
def get_required_bond(position):
    assumed_gas_price = 200 * 10**9  # 200 gwei, in wei
    base_gas_charged = 400_000
    gas_charged = base_gas_charged * (1.09493 ** position.depth)
    return gas_charged * assumed_gas_price
```

#### Other Incentives

There are other costs associated with participating in the game, including operating a challenger agent and the opportunity cost of locking up capital in the dispute game. While we do not explicitly create incentives to cover these costs, we assume that the current bond rewards, based on this specification, are enough as a whole to cover all other costs of participation.

### Game Finalization

After the game is resolved, claimants must wait for the [AnchorStateRegistry's `isGameFinalized()`](anchor-state-registry.md#isgamefinalized) to return `true` before they can claim their bonds. This implies a wait period of at least the `disputeGameFinalityDelaySeconds` variable from the `AnchorStateRegistry` contract. After the game is finalized, bonds can be distributed.

#### Bond Distribution Mode

The FDG will in most cases distribute bonds to the winners of the game after it is resolved and finalized, but in special cases will refund the bonds to the original depositor.

##### Normal Mode

In normal mode, the FDG will distribute bonds to the winners of the game after it is resolved and finalized.

##### Refund Mode

In refund mode, the FDG will refund the bonds to the original depositor.
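The multiplier derived in the Bond Scaling section above can be sanity-checked numerically. This is a standalone sketch, not part of any contract; it just reproduces the stated derivation from its endpoints:

```python
BASE_GAS = 400_000     # top-of-game deterrence amount
MAX_GAS = 300_000_000  # ~2x the cost of a max-size Large Preimage Proposal
MAX_DEPTH = 73

# Multiplier chosen so that BASE_GAS * x**MAX_DEPTH lands on MAX_GAS.
x = (MAX_GAS / BASE_GAS) ** (1 / MAX_DEPTH)

def gas_charged(depth: int) -> float:
    """Gas amount the bond at `depth` is priced to cover."""
    return BASE_GAS * (x ** depth)
```

Evaluating `x` confirms the `1.09493` figure quoted in the specification, and the curve meets both stated endpoints exactly.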
#### Game Closure The `FaultDisputeGame` contract can be closed after finalization via the `closeGame()` function. `closeGame` must do the following: 1. Verify the game is resolved and finalized according to the Anchor State Registry 2. Attempt to set this game as the new anchor game. 3. Determine the bond distribution mode based on whether the [AnchorStateRegistry's `isGameProper()`](anchor-state-registry.md#isgameproper) returns `true`. 4. Emit a `GameClosed` event with the chosen distribution mode. #### Claiming Credit There is a 2-step process to claim credit. First, `claimCredit(address claimant)` should be called to unlock the credit from the [DelayedWETH](#delayedweth) contract. After DelayedWETH's [delay period](#delay-period) has passed, `claimCredit` should be called again to withdraw the credit. The `claimCredit(address claimant)` function must do the following: * Call `closeGame()` to determine the distribution mode if not already closed. * In NORMAL mode: Distribute credit from the standard `normalModeCredit` mapping. * In REFUND mode: Distribute credit from the `refundModeCredit` mapping. * If the claimant has not yet unlocked their credit, unlock it by calling `DelayedWETH.unlock(claimant, credit)`. * Claimant must not be able to unlock this credit again. * If the claimant has already unlocked their credit, call `DelayedWETH.withdraw(claimant, credit)` (implying a [delay period](#delay-period)) to withdraw the credit, and set claimant's `credit` balances to 0. #### DelayedWETH `DelayedWETH` is designed to hold the bonded ETH for each [Fault Dispute Game](fault-dispute-game.md). `DelayedWETH` is an extended version of the standard `WETH` contract that introduces a delayed unwrap mechanism that allows an owner address to function as a backstop in the case that a Fault Dispute Game would incorrectly distribute bonds. `DelayedWETH` is modified from `WETH` as follows: * `DelayedWETH` is an upgradeable proxy contract. 
* `DelayedWETH` has an `owner()` address. We typically expect this to be set to the `System Owner` address.
* `DelayedWETH` has a `delay()` function that returns a period of time that withdrawals will be delayed.
* `DelayedWETH` has an `unlock(guy,wad)` function that modifies a mapping called `withdrawals` keyed as `withdrawals[msg.sender][guy] => WithdrawalRequest` where `WithdrawalRequest` is `struct WithdrawalRequest { uint256 amount; uint256 timestamp; }`. When `unlock` is called, the timestamp for `withdrawals[msg.sender][guy]` is set to the current timestamp and the amount is increased by the given amount.
* `DelayedWETH` modifies the `WETH.withdraw` function such that an address *must* provide a "sub-account" to withdraw from. The function signature becomes `withdraw(guy,wad)`. The function retrieves `withdrawals[msg.sender][guy]` and checks that the current `block.timestamp` is greater than the timestamp on the withdrawal request plus the `delay()` seconds, reverting if not. It also confirms that the amount being withdrawn is less than or equal to the amount in the withdrawal request. Before completing the withdrawal, it reduces the amount contained within the withdrawal request. The original `withdraw(wad)` function becomes an alias for `withdraw(msg.sender, wad)`. `withdraw(guy,wad)` will not be callable when `SuperchainConfig.paused()` is `true`.
* `DelayedWETH` has a `hold(guy,wad)` function that allows the `owner()` address to, for any holder, give itself an allowance and immediately `transferFrom` that allowance amount to itself.
* `DelayedWETH` has a `hold(guy)` function that allows the `owner()` address to, for any holder, give itself a full allowance of the holder's balance and immediately `transferFrom` that amount to itself.
* `DelayedWETH` has a `recover()` function that allows the `owner()` address to recover any amount of ETH from the contract.
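The unlock/withdraw bookkeeping above can be modeled in a small sketch. This is a hypothetical Python model, not the Solidity contract; it omits the WETH balance accounting, the pause check, and the `hold`/`recover` owner functions:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class WithdrawalRequest:
    amount: int = 0
    timestamp: int = 0

class DelayedWETHModel:
    """Toy model of DelayedWETH's delayed-unwrap bookkeeping."""

    def __init__(self, delay_seconds: int):
        self.delay = delay_seconds
        # withdrawals[caller][guy] -> WithdrawalRequest
        self.withdrawals = defaultdict(lambda: defaultdict(WithdrawalRequest))

    def unlock(self, caller: str, guy: str, wad: int, now: int) -> None:
        req = self.withdrawals[caller][guy]
        req.timestamp = now  # reset the delay timer
        req.amount += wad    # accumulate the unlocked amount

    def withdraw(self, caller: str, guy: str, wad: int, now: int) -> None:
        req = self.withdrawals[caller][guy]
        if now <= req.timestamp + self.delay:
            raise RuntimeError("delay period has not elapsed")
        if wad > req.amount:
            raise RuntimeError("amount exceeds the withdrawal request")
        req.amount -= wad    # reduce the request before paying out
```

Note that calling `unlock` again resets the timer for the whole accumulated amount, which is why the FDG must not let a claimant unlock the same credit twice.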
##### Sub-Account Model This specification requires that withdrawal requests specify "sub-accounts" that these requests correspond to. This takes the form of requiring that `unlock` and `withdraw` both take an `address guy` parameter as input. By requiring this extra input, withdrawals are separated between accounts and it is always possible to see how much WETH a specific end-user of the `FaultDisputeGame` can withdraw at any given time. It is therefore possible for the `DelayedWETH` contract to account for all bug cases within the `FaultDisputeGame` as long as the `FaultDisputeGame` always passes the correct address into `withdraw`. ##### Delay Period We propose a delay period of 7 days for Base. 7 days provides sufficient time for the `owner()` of the `DelayedWETH` contract to act even if that owner is a large multisig that requires action from many different members over multiple timezones. ##### Integration `DelayedWETH` is expected to be integrated into the Fault Dispute Game as follows: * When `FaultDisputeGame.initialize` is triggered, `DelayedWETH.deposit{value: msg.value}()` is called. * When `FaultDisputeGame.move` is triggered, `DelayedWETH.deposit{value: msg.value}()` is called. * When `FaultDisputeGame.resolveClaim` is triggered, the game will add to the claimant's internal credit balance. * When `FaultDisputeGame.claimCredit` is triggered, `DelayedWETH.withdraw(recipient, credit)` is called. 
```mermaid sequenceDiagram participant U as User participant FDG as FaultDisputeGame participant DW as DelayedWETH U->>FDG: initialize() FDG->>DW: deposit{value: msg.value}() Note over DW: FDG gains balance in DW loop move by Users U->>FDG: move() FDG->>DW: deposit{value: msg.value}() Note over DW: Increases FDG balance in DW end loop resolveClaim by Users U->>FDG: resolveClaim() FDG->>FDG: Add to claimant credit end loop Initial claimCredit call by Users U->>FDG: claimCredit() FDG->>DW: unlock(recipient, bond) end loop Subsequent claimCredit call by Users U->>FDG: claimCredit() FDG->>DW: withdraw(recipient, credit) Note over DW: Checks timer/amount for recipient DW->>FDG: Transfer claim to FDG FDG->>U: Transfer claim to User end ``` ## Bridge Integration [g-l2-proposal]: ../../../reference/glossary.md#l2-output-root-proposals [fdg]: fault-dispute-game.md ### Overview With fault proofs, the withdrawal path changes such that withdrawals submitted to the `OptimismPortal` are proven against [output proposals][g-l2-proposal] submitted as a [`FaultDisputeGame`][fdg] prior to being finalized. Output proposals are now finalized whenever a dispute game resolves in their favor. ### Legacy Semantics The `OptimismPortal` uses the `L2OutputOracle` in the withdrawal path of the rollup to allow users to prove the presence of their withdrawal inside of the `L2ToL1MessagePasser` account storage root, which can be retrieved by providing a preimage to an output root in the oracle. The oracle currently holds a list of all L2 outputs proposed to L1 by a permissioned PROPOSER key. The list in the contract has the following properties: * It must always be sorted by the L2 Block Number that the output proposal is claiming it corresponds to. * All outputs in the list that are > `FINALIZATION_PERIOD_SECONDS` old are considered "finalized." The separator between unfinalized/finalized outputs moves forwards implicitly as time passes. 
![legacy-l2oo-list](/static/assets/legacy-l2oo-list.png) Currently, if there is a faulty output proposed by the permissioned `PROPOSER` key, a separate permissioned `CHALLENGER` key may intervene. Note that the `CHALLENGER` role effectively has god-mode privileges, and can currently act without proving that the outputs they're deleting are indeed incorrect. By deleting an output proposal, the challenger also deletes all output proposals in front of it. With the upgrade to the Fault Proof Alpha Chad system, output proposals are no longer sent to the `L2OutputOracle`, but to the `DisputeGameFactory` in order to be fault proven. In contrast to the L2OO, an incorrect output proposal is not deleted, but proven to be incorrect. The semantics of finalization timelines and the definition of a "finalized" output proposal also change. Since the DisputeGameFactory fulfills the same role as the L2OutputOracle in a post fault proofs world by tracking proposed outputs, and the L2OO's semantics are incompatible with the new system, the L2OO is no longer required. ### FPAC `OptimismPortal` Mods Specification #### Roles - `OptimismPortal` * `Guardian`: Permissioned actor able to pause the portal, blacklist dispute games, and change the `RESPECTED_GAME_TYPE`. #### New `DeployConfig` Variables | Name | Description | | ------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `DISPUTE_GAME_FINALITY_DELAY_SECONDS` | The amount of time given to the `Guardian` role to blacklist a resolved dispute game before any withdrawals proven against it can be finalized, in case of system failure. | | `PROOF_MATURITY_DELAY_SECONDS` | Formerly `FINALIZATION_PERIOD_SECONDS` in the `L2OutputOracle`, defines the duration that must pass between proving and finalizing a withdrawal. 
| | `RESPECTED_GAME_TYPE` | The dispute game type that the portal uses for the withdrawal path. | #### Data Structures Withdrawals are now proven against dispute games, which have immutable "root claims" representing the output root being proposed. The `ProvenWithdrawal` struct is now defined as: ```solidity /// @notice Represents a proven withdrawal. /// @custom:field disputeGameProxy The address of the dispute game proxy that the withdrawal was proven against. /// @custom:field timestamp Timestamp at which the withdrawal was proven. struct ProvenWithdrawal { IDisputeGame disputeGameProxy; uint64 timestamp; } ``` #### State Layout ##### Legacy Spacers Spacers should be added at the following storage slots in the `OptimismPortal` so that they may not be reused: | Slot | Description | | ---- | ---------------------------------------------------------------------------------------------------------------------------------------- | | `52` | Legacy `provenWithdrawals` mapping. Withdrawals proven against the `L2OutputOracle`'s output proposals will be deleted upon the upgrade. | | `54` | Legacy `L2OutputOracle` address. | ##### New State Variables **`DisputeGameFactory` address** ```solidity /// @notice Address of the DisputeGameFactory. /// @custom:network-specific DisputeGameFactory public disputeGameFactory; ``` **Respected Game Type** ```solidity /// @notice The respected game type of the `OptimismPortal`. /// Can be changed by Guardian. GameType public respectedGameType; ``` **Respected Game Type Updated Timestamp** ```solidity /// @notice The timestamp at which the respected game type was last updated. uint64 public respectedGameTypeUpdatedAt; ``` **New `ProvenWithdrawals` mapping** ```solidity /// @notice A mapping of withdrawal hashes to `ProvenWithdrawal` data. 
mapping(bytes32 => ProvenWithdrawal) public provenWithdrawals; ``` **Blacklisted `DisputeGame` mapping** ```solidity /// @notice A mapping of dispute game addresses to whether or not they are blacklisted. mapping(IDisputeGame => bool) public disputeGameBlacklist; ``` #### `proveWithdrawalTransaction` modifications Proving a withdrawal transaction now proves against an output root in a dispute game, rather than one in the `L2OutputOracle`. ##### Interface The type signature of the function does not change, but the purpose of the second argument transitions from providing an index within the `L2OutputOracle`'s `l2Outputs` array to an index within the `DisputeGameFactory`'s list of created games. ```solidity /// @notice Proves a withdrawal transaction. /// @param _tx Withdrawal transaction to finalize. /// @param _disputeGameIndex Index of the dispute game to prove the withdrawal against. /// @param _outputRootProof Inclusion proof of the L2ToL1MessagePasser contract's storage root. /// @param _withdrawalProof Inclusion proof of the withdrawal in L2ToL1MessagePasser contract. function proveWithdrawalTransaction( Types.WithdrawalTransaction memory _tx, uint256 _disputeGameIndex, Types.OutputRootProof calldata _outputRootProof, bytes[] calldata _withdrawalProof ) external whenNotPaused; ``` ##### New Invariants - `proveWithdrawalTransaction` **Trusted `GameType`** The `DisputeGameFactory` can create many different types of dispute games, delineated by their `GameType`. The game type of the dispute game fetched from the factory's list at `_disputeGameIndex` must be of type `RESPECTED_GAME_TYPE`. The call should revert on all other game types it encounters. ##### Changed Invariants - `proveWithdrawalTransaction` **Re-proving withdrawals** Users being able to re-prove withdrawals, in special cases, is still necessary to prevent user withdrawals from being bricked. It is kept to protect honest users when they prove their withdrawal inside of a malicious proposal. 
The timestamp of re-proven withdrawals is still reset.

1. **Old:** Re-proving is allowed if the output root at the proven withdrawal's `l2OutputIndex` changed in the `L2OutputOracle`.
2. **New:** Re-proving is allowed at any time by the user. When a withdrawal is re-proven, its proof maturity delay is reset.

#### `finalizeWithdrawalTransaction` modifications

Finalizing a withdrawal transaction now references a `DisputeGame` to determine the status of the output proposal that the withdrawal was proven against.

##### New Invariants - `finalizeWithdrawalTransaction`

**Trusted `GameType`**

The `DisputeGameFactory` can create many different types of dispute games, delineated by their `GameType`. The game type of the dispute game fetched from the factory's list at `_disputeGameIndex` must be of type `RESPECTED_GAME_TYPE`. The call should revert on all other game types it encounters.

**Respected Game Type Updated**

A withdrawal may never be finalized if the dispute game was created before the respected game type was last updated.

**Dispute Game Blacklist**

The `Guardian` role can blacklist certain `DisputeGame` addresses in the event of a system failure. If the address of the dispute game that the withdrawal was proven against is present in the `disputeGameBlacklist` mapping, the call should always revert.

**Dispute Game Maturity**

See ["Air-gap"](#air-gap).

##### Changed Invariants - `finalizeWithdrawalTransaction`

**Output Proposal Validity**

Instead of checking if the proven withdrawal's output proposal has existed for longer than the legacy finalization period, we check if the dispute game has resolved in the root claim's favor. A `FaultDisputeGame` must never be considered to have resolved in the `rootClaim`'s favor unless its `status()` is equal to `DEFENDER_WINS`.
#### Air-gap Given its own section due to its importance, the air gap is an enforced period of time between a dispute game's resolution and users being able to finalize withdrawals that were proven against its root claim. When the `DisputeGame` resolves globally, it stores the timestamp. The portal's `finalizeWithdrawalTransaction` function asserts that `DISPUTE_GAME_FINALITY_DELAY_SECONDS` have passed since the resolution timestamp before allowing any withdrawals proven against the dispute game to be finalized. Because the `FaultDisputeGame` is a trusted implementation set by the owner of the `DisputeGameFactory`, it is safe to trust that this value is honestly set. ##### Blacklisting `DisputeGame`s A new method is added for placing `DisputeGame`s in the `disputeGameBlacklist` mapping mentioned in ["State Layout"](#state-layout), in the event that a dispute game is detected to have resolved incorrectly. The only actor who may call this function is the `Guardian` role. Blacklisting a dispute game means that no withdrawals proven against it will be allowed to finalize (per the "Dispute Game Blacklist" invariant), and they must re-prove against a new dispute game that resolves correctly. The Portal's guardian role is obligated to blacklist any dispute games that it deems to have resolved incorrectly. Withdrawals proven against a blacklisted dispute game are not prevented from re-proving or being finalized in the future. ##### Blacklisting a full `GameType` In the event of a catastrophic failure, we can upgrade the `OptimismPortal` proxy to an implementation with a different `RESPECTED_GAME_TYPE`. All pending withdrawals that reference a different game type will not be allowed to finalize and must re-prove, due to the "Trusted `GameType`" invariant. This should generally be avoided, but allows for a blanket blacklist of pending withdrawals corresponding to the current `RESPECTED_GAME_TYPE`.
Depending on whether we're okay with the tradeoffs, this may also be the most efficient way to upgrade the dispute game in the future. #### Proxy Upgrade Upgrading the `OptimismPortal` proxy to an implementation that follows this specification will invalidate all pending withdrawals. This means that all users with pending withdrawals will need to re-prove their withdrawals against an output proposal submitted in the form of a `DisputeGame`. ### Permissioned `FaultDisputeGame` As a fallback to permissioned proposals, a child contract of the `FaultDisputeGame` will be created that has 2 new roles: the `PROPOSER` and a `CHALLENGER` (or set of challengers). Each interaction (`move` \[`attack` / `defend`], `step`, `resolve` / `resolveClaim`, `addLocalData`, etc.) will be permissioned to the `CHALLENGER` key, and the `initialize` function will be permissioned to the `PROPOSER` key. In the event that we'd like to switch back to permissioned proposals, we can change the `RESPECTED_GAME_TYPE` in the `OptimismPortal` to a deployment of the `PermissionedFaultDisputeGame`. #### Roles - `PermissionedDisputeGame` * `PROPOSER` - Actor that can create a `PermissionedFaultDisputeGame` and participate in the games they've created. * `CHALLENGER` - Actor(s) that can participate in a `PermissionedFaultDisputeGame`. #### Modifications **State Layout** 2 new immutables: ```solidity /// @notice The `PROPOSER` role. address public immutable PROPOSER; /// @notice The `CHALLENGER` role. address public immutable CHALLENGER; ``` **Functions** Every function that can mutate state should be overridden to add a check that either: 1. The `msg.sender` has the `CHALLENGER` role. 2. The `msg.sender` has the `PROPOSER` role. If the `msg.sender` does not have either role, the function must revert. The exception is the `initialize` function, which may only be called if the `tx.origin` is the `PROPOSER` role.
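The role checks described above can be modeled as a single gate. This is an illustrative sketch, not the Solidity modifier; the addresses and `check_permissioned` helper are hypothetical:

```python
PROPOSER = "0xproposer"      # illustrative role addresses
CHALLENGER = "0xchallenger"

def check_permissioned(sender: str, *, is_initialize: bool = False) -> None:
    """Model of the PermissionedFaultDisputeGame access checks (illustrative)."""
    if is_initialize:
        # `initialize` may only be called by the PROPOSER
        # (checked against tx.origin in the actual contract).
        if sender != PROPOSER:
            raise PermissionError("initialize: caller is not the PROPOSER")
        return
    # Every other state-mutating function requires one of the two roles.
    if sender not in (PROPOSER, CHALLENGER):
        raise PermissionError("caller is neither PROPOSER nor CHALLENGER")
```

Raising maps to a revert on-chain; a silent return models the call proceeding.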
## Dispute Game Interface ### Overview A dispute game is played between multiple parties when contesting the truthiness of a claim. In the context of an optimistic rollup, claims are made about the state of the layer two network to enable withdrawals to the layer one. A proposer makes a claim about the layer two state such that they can withdraw, and a challenger can dispute the validity of the claim. The security of the layer two comes from the ability to dispute fraudulent withdrawals. A dispute game interface is defined to allow for multiple implementations of dispute games to exist. If multiple dispute games run in production, this gives a security model similar to having multiple protocol clients, as a bug in a single dispute game will not result in the bug becoming consensus. ### Types For added context, we define a few types that are used in the following snippets. ```solidity /// @notice A `Claim` type represents a 32 byte hash or other unique identifier for a claim about /// a certain piece of information. type Claim is bytes32; /// @notice A custom type for a generic hash. type Hash is bytes32; /// @notice A dedicated timestamp type. type Timestamp is uint64; /// @notice A `GameType` represents the type of game being played. type GameType is uint32; /// @notice A `GameId` represents a packed 4 byte game type, an 8 byte timestamp, and a 20 byte address. /// @dev The packed layout of this type is as follows: /// ┌───────────┬───────────┐ /// │ Bits │ Value │ /// ├───────────┼───────────┤ /// │ [0, 32) │ Game Type │ /// │ [32, 96) │ Timestamp │ /// │ [96, 256) │ Address │ /// └───────────┴───────────┘ type GameId is bytes32; /// @title GameTypes /// @notice A library that defines the IDs of games that can be played. library GameTypes { /// @dev A dispute game type that uses the cannon vm. GameType internal constant CANNON = GameType.wrap(0); /// @dev A dispute game type that performs output bisection and then uses the cannon vm.
GameType internal constant OUTPUT_CANNON = GameType.wrap(1); /// @notice A dispute game type that performs output bisection and then uses an alphabet vm. /// Not intended for production use. GameType internal constant OUTPUT_ALPHABET = GameType.wrap(254); /// @notice A dispute game type that uses an alphabet vm. /// Not intended for production use. GameType internal constant ALPHABET = GameType.wrap(255); } /// @notice The current status of the dispute game. enum GameStatus { /// @dev The game is currently in progress, and has not been resolved. IN_PROGRESS, /// @dev The game has concluded, and the `rootClaim` was challenged successfully. CHALLENGER_WINS, /// @dev The game has concluded, and the `rootClaim` could not be contested. DEFENDER_WINS } ``` ### `DisputeGameFactory` Interface The dispute game factory is responsible for creating new `DisputeGame` contracts given a `GameType` and a root `Claim`. Challenger agents listen to the `DisputeGameCreated` events in order to keep up with ongoing disputes in the protocol and participate accordingly. A [`clones-with-immutable-args`](https://github.com/Vectorized/solady/blob/main/src/utils/LibClone.sol) factory (originally by @wighawag, but forked and improved by @Vectorized) is used to create clones. Each `GameType` has a corresponding implementation within the factory, and when a new game is created, the factory creates a clone of the `GameType`'s pre-deployed implementation contract. The `rootClaim` of created dispute games can either be a claim that the creator agrees or disagrees with. This is an implementation detail that is left up to the `IDisputeGame` to handle within its `resolve` function. When the `DisputeGameFactory` creates a new `DisputeGame` contract, it calls `initialize()` on the clone to set up the game. The factory passes immutable arguments to the clone using the CWIA (Clone With Immutable Args) pattern.
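The `GameId` bit layout shown in the Types section (32-bit game type, 64-bit timestamp, 160-bit address) can be sketched as plain bit arithmetic. The helper names are illustrative, not part of the contracts:

```python
def pack_game_id(game_type: int, timestamp: int, address: int) -> bytes:
    """Pack (gameType, timestamp, address) into a 32-byte GameId.
    Layout: bits [0, 32) game type, [32, 96) timestamp, [96, 256) address."""
    assert game_type < 2**32 and timestamp < 2**64 and address < 2**160
    packed = (game_type << 224) | (timestamp << 160) | address
    return packed.to_bytes(32, "big")

def unpack_game_id(game_id: bytes):
    """Split a 32-byte GameId back into its three components."""
    packed = int.from_bytes(game_id, "big")
    return packed >> 224, (packed >> 160) & (2**64 - 1), packed & (2**160 - 1)
```

Packing three fields into one `bytes32` lets the factory store a game's type, creation time, and address in a single storage slot.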
There are two CWIA layouts depending on whether the game type has implementation args configured: **Standard CWIA Layout** (when `gameArgs[_gameType]` is empty): | Bytes | Description | | ------------- | ---------------------------------- | | \[0, 20) | Game creator address | | \[20, 52) | Root claim | | \[52, 84) | Parent block hash at creation time | | \[84, 84 + n) | Extra data (opaque) | **Extended CWIA Layout** (when `gameArgs[_gameType]` is non-empty): | Bytes | Description | | --------------------- | ---------------------------------- | | \[0, 20) | Game creator address | | \[20, 52) | Root claim | | \[52, 84) | Parent block hash at creation time | | \[84, 88) | Game type | | \[88, 88 + n) | Extra data (opaque) | | \[88 + n, 88 + n + m) | Implementation args (opaque) | The implementation args allow chain-specific configuration to be passed to the game implementation at clone creation time, enabling a single implementation contract to be reused across different chain configurations. ```solidity /// @title IDisputeGameFactory /// @notice The interface for a DisputeGameFactory contract. interface IDisputeGameFactory { /// @notice Emitted when a new dispute game is created /// @param disputeProxy The address of the dispute game proxy /// @param gameType The type of the dispute game proxy's implementation /// @param rootClaim The root claim of the dispute game event DisputeGameCreated(address indexed disputeProxy, GameType indexed gameType, Claim indexed rootClaim); /// @notice Emitted when a new game implementation is added to the factory /// @param impl The implementation contract for the given `GameType`. /// @param gameType The type of the DisputeGame. event ImplementationSet(address indexed impl, GameType indexed gameType); /// @notice Emitted when a game type's implementation args are set /// @param gameType The type of the DisputeGame. /// @param args The constructor args for the game type.
event ImplementationArgsSet(GameType indexed gameType, bytes args); /// @notice Emitted when a game type's initialization bond is updated /// @param gameType The type of the DisputeGame. /// @param newBond The new bond (in wei) for initializing the game type. event InitBondUpdated(GameType indexed gameType, uint256 indexed newBond); /// @notice Information about a dispute game found in a `findLatestGames` search. struct GameSearchResult { uint256 index; GameId metadata; Timestamp timestamp; Claim rootClaim; bytes extraData; } /// @notice The total number of dispute games created by this factory. /// @return gameCount_ The total number of dispute games created by this factory. function gameCount() external view returns (uint256 gameCount_); /// @notice `games` queries an internal mapping that maps the hash of /// `gameType ++ rootClaim ++ extraData` to the deployed `DisputeGame` clone. /// @dev `++` equates to concatenation. /// @param _gameType The type of the DisputeGame - used to decide the proxy implementation /// @param _rootClaim The root claim of the DisputeGame. /// @param _extraData Any extra data that should be provided to the created dispute game. /// @return proxy_ The clone of the `DisputeGame` created with the given parameters. /// Returns `address(0)` if nonexistent. /// @return timestamp_ The timestamp of the creation of the dispute game. function games( GameType _gameType, Claim _rootClaim, bytes calldata _extraData ) external view returns (IDisputeGame proxy_, Timestamp timestamp_); /// @notice `gameAtIndex` returns the dispute game contract address and its creation timestamp /// at the given index. Each created dispute game increments the underlying index. /// @param _index The index of the dispute game. /// @return gameType_ The type of the DisputeGame - used to decide the proxy implementation. /// @return timestamp_ The timestamp of the creation of the dispute game. 
/// @return proxy_ The clone of the `DisputeGame` created with the given parameters. /// Returns `address(0)` if nonexistent. function gameAtIndex(uint256 _index) external view returns (GameType gameType_, Timestamp timestamp_, IDisputeGame proxy_); /// @notice `gameImpls` is a mapping that maps `GameType`s to their respective /// `IDisputeGame` implementations. /// @param _gameType The type of the dispute game. /// @return impl_ The address of the implementation of the game type. /// Will be cloned on creation of a new dispute game with the given `gameType`. function gameImpls(GameType _gameType) external view returns (IDisputeGame impl_); /// @notice Returns the required bonds for initializing a dispute game of the given type. /// @param _gameType The type of the dispute game. /// @return bond_ The required bond for initializing a dispute game of the given type. function initBonds(GameType _gameType) external view returns (uint256 bond_); /// @notice Returns the chain-specific configuration arguments for a given game type's implementation. /// @dev These arguments are typically passed to the game implementation during proxy creation using CWIA. /// @param _gameType The type of the dispute game. /// @return args_ The chain-specific configuration arguments. function gameArgs(GameType _gameType) external view returns (bytes memory args_); /// @notice Creates a new DisputeGame proxy contract. /// @param _gameType The type of the DisputeGame - used to decide the proxy implementation. /// @param _rootClaim The root claim of the DisputeGame. /// @param _extraData Any extra data that should be provided to the created dispute game. /// @return proxy_ The address of the created DisputeGame proxy. function create( GameType _gameType, Claim _rootClaim, bytes calldata _extraData ) external payable returns (IDisputeGame proxy_); /// @notice Sets the implementation contract for a specific `GameType`. /// @dev May only be called by the `owner`. 
/// @param _gameType The type of the DisputeGame. /// @param _impl The implementation contract for the given `GameType`. /// @param _args The chain-specific configuration arguments for this game type's implementation. function setImplementation(GameType _gameType, IDisputeGame _impl, bytes calldata _args) external; /// @notice Sets the bond (in wei) for initializing a game type. /// @dev May only be called by the `owner`. /// @param _gameType The type of the DisputeGame. /// @param _initBond The bond (in wei) for initializing a game type. function setInitBond(GameType _gameType, uint256 _initBond) external; /// @notice Returns a unique identifier for the given dispute game parameters. /// @dev Hashes the concatenation of `gameType . rootClaim . extraData` /// without expanding memory. /// @param _gameType The type of the DisputeGame. /// @param _rootClaim The root claim of the DisputeGame. /// @param _extraData Any extra data that should be provided to the created dispute game. /// @return uuid_ The unique identifier for the given dispute game parameters. function getGameUUID( GameType _gameType, Claim _rootClaim, bytes memory _extraData ) external pure returns (Hash uuid_); /// @notice Finds the `_n` most recent `GameId`s of type `_gameType` starting at `_start`. If there are fewer than /// `_n` games of type `_gameType` starting at `_start`, then the returned array will be shorter than `_n`. /// @param _gameType The type of game to find. /// @param _start The index to start the reverse search from. /// @param _n The number of games to find. function findLatestGames( GameType _gameType, uint256 _start, uint256 _n ) external view returns (GameSearchResult[] memory games_); } ``` ### `DisputeGame` Interface The dispute game interface defines a generic, black-box dispute. It exposes stateful information such as the status of the dispute, when it was created, as well as the bootstrap data and dispute type.
This interface exposes one state mutating function, `resolve`, which when implemented should deterministically yield an opinion about the `rootClaim` and reflect the opinion by updating the `status` to `CHALLENGER_WINS` or `DEFENDER_WINS`. The `initialize` function on clones of `IDisputeGame` implementations will be called by the `DisputeGameFactory` atomically upon creation. ```solidity /// @title IDisputeGame /// @notice The generic interface for a DisputeGame contract. interface IDisputeGame is IInitializable { /// @notice Emitted when the game is resolved. /// @param status The status of the game after resolution. event Resolved(GameStatus indexed status); /// @notice Returns the timestamp that the DisputeGame contract was created at. /// @return createdAt_ The timestamp that the DisputeGame contract was created at. function createdAt() external view returns (Timestamp createdAt_); /// @notice Returns the timestamp that the DisputeGame contract was resolved at. /// @return resolvedAt_ The timestamp that the DisputeGame contract was resolved at. function resolvedAt() external view returns (Timestamp resolvedAt_); /// @notice Returns the current status of the game. /// @return status_ The current status of the game. function status() external view returns (GameStatus status_); /// @notice Getter for the game type. /// @dev The reference impl should be entirely different depending on the type (fault, validity) /// i.e. The game type should indicate the security model. /// @return gameType_ The type of proof system being used. function gameType() external view returns (GameType gameType_); /// @notice Getter for the creator of the dispute game. /// @dev `clones-with-immutable-args` argument #1 /// @return creator_ The creator of the dispute game. function gameCreator() external pure returns (address creator_); /// @notice Getter for the root claim. /// @dev `clones-with-immutable-args` argument #2 /// @return rootClaim_ The root claim of the DisputeGame.
function rootClaim() external pure returns (Claim rootClaim_); /// @notice Getter for the parent hash of the L1 block when the dispute game was created. /// @dev `clones-with-immutable-args` argument #3 /// @return l1Head_ The parent hash of the L1 block when the dispute game was created. function l1Head() external pure returns (Hash l1Head_); /// @notice Getter for the L2 sequence number (typically the L2 block number). /// @dev Extracted from the extra data supplied to the dispute game contract by the creator. /// @return l2SequenceNumber_ The L2 sequence number for this dispute game. function l2SequenceNumber() external pure returns (uint256 l2SequenceNumber_); /// @notice Getter for the extra data. /// @dev `clones-with-immutable-args` argument #4 /// @return extraData_ Any extra data supplied to the dispute game contract by the creator. function extraData() external pure returns (bytes memory extraData_); /// @notice If all necessary information has been gathered, this function should mark the game /// status as either `CHALLENGER_WINS` or `DEFENDER_WINS` and return the status of /// the resolved game. It is at this stage that the bonds should be awarded to the /// necessary parties. /// @dev May only be called if the `status` is `IN_PROGRESS`. /// @return status_ The status of the game after resolution. function resolve() external returns (GameStatus status_); /// @notice A compliant implementation of this interface should return the components of the /// game UUID's preimage provided in the cwia payload. The preimage of the UUID is /// constructed as `keccak256(gameType . rootClaim . extraData)` where `.` denotes /// concatenation. /// @return gameType_ The type of proof system being used. /// @return rootClaim_ The root claim of the DisputeGame. /// @return extraData_ Any extra data supplied to the dispute game contract by the creator. 
function gameData() external view returns (GameType gameType_, Claim rootClaim_, bytes memory extraData_); /// @notice Returns whether the game type was respected when this game was created. /// @dev Used as a withdrawal finality condition - games created when their type wasn't /// respected cannot be used to finalize withdrawals. /// @return wasRespected_ True if the game type was the respected game type when created. function wasRespectedGameTypeWhenCreated() external view returns (bool wasRespected_); } ``` ## Fault Dispute Game [g-output-root]: ../../../reference/glossary.md#l2-output-root ### Overview The Fault Dispute Game (FDG) is a specific type of [dispute game](dispute-game-interface.md) that verifies the validity of a root claim by iteratively bisecting over [output roots][g-output-root] and execution traces of single block state transitions down to a single instruction step. It relies on a Virtual Machine (VM) to falsify invalid claims made at a single instruction step. Actors, i.e. Players, interact with the game by making claims that dispute other claims in the FDG. Each claim made narrows the range over the entire historical state of L2, until the source of dispute is a single state transition. Once a time limit is reached, the dispute game is *resolved* based on which claims are disputed and which aren't, to determine the winners of the game. ### Definitions #### Virtual Machine (VM) This is a state transition function (STF) that takes a *pre-state* and computes the post-state. The VM may access data referenced during the STF and as such, it also accepts a *proof* of this data. Typically, the pre-state contains a commitment to the *proof* to verify the integrity of the data referenced. Mathematically, we define the STF as $VM(S\_i,P\_i)$ where * $S\_i$ is the pre-state * $P\_i$ is an optional proof needed for the transition from $S\_i$ to $S\_{i+1}$. #### PreimageOracle This is a pre-image data store.
It is often used by VMs to read external data during their STFs. Before successfully executing a VM STF, it may be necessary to preload the PreimageOracle with pertinent data. The method for key-based retrieval of these pre-images varies according to the specific VM. #### Execution Trace An execution trace $T$ is a sequence $(S\_0,S\_1,S\_2,...,S\_n)$ where each $S\_i$ is a VM state and for each $i$, $0 \le i \lt n$, $S\_{i+1} = VM(S\_i, P\_i)$. Every execution trace has a unique starting state, $S\_0$, that's preset to an FDG implementation. We refer to this state as the **ABSOLUTE\_PRESTATE**. #### Claims Claims assert an [output root][g-output-root] or the state of the FPVM at a given instruction. This is represented as a `Hash` type, a `bytes32` representing either an [output root][g-output-root] or a commitment to the last VM state in a trace. An FDG is initialized with an output root that corresponds to the state of L2 at a given L2 block number, and execution trace subgames at `SPLIT_DEPTH + 1` are initialized with a claim that commits to the entire execution trace between two consecutive output roots (a block `n -> n+1` state transition). As we'll see later, there can be multiple claims, committing to different output roots and FPVM states in the FDG. #### Anchor State An anchor state, or anchor output root, is a previous output root that is assumed to be valid. An FDG is always initialized with an anchor state and execution is carried out between this anchor state and the [claimed output root](#claims). FDG contracts pull their anchor state from the [Anchor State Registry](#anchor-state-registry) contract. The initial anchor state for an FDG is the genesis state of the L2. Clients must currently gather L1 data for the window between the anchor state and the claimed state. In order to reduce this L1 data requirement, [claims](#claims) about the state of the L2 become new anchor states when dispute games resolve in their favor.
FDG contracts set their anchor states at initialization time so that these updates do not impact active games. #### Anchor State Registry The Anchor State Registry is a registry that the FDG uses to determine its [anchor state](#anchor-state). It also determines if the game is [finalized](anchor-state-registry.md#finalized-game) and ["proper"](anchor-state-registry.md#proper-game) for purposes of [Bond Distribution](bond-incentives.md#game-finalization). See [Anchor State Registry](anchor-state-registry.md) for more details. #### Respected Game Type A Fault Dispute Game must record whether its game type is respected at the time of its creation. See [Respected Game Type](anchor-state-registry.md#respected-game-type) for more details. #### DAG A Directed Acyclic Graph $G = (V,E)$ representing the relationship between claims, where: * $V$ is the set of nodes, each representing a claim. Formally, $V = \{C\_1,C\_2,...,C\_n\}$, where $C\_i$ is a claim. * $E$ is the set of *directed* edges. An edge $(C\_i,C\_j)$ exists if $C\_j$ is a direct dispute against $C\_i$ through either an "Attack" or "Defend" [move](#moves). #### Subgame A sub-game is a DAG of depth 1, where the root of the DAG is a `Claim` and the children are `Claim`s that counter the root. A good mental model around this structure is that it is a fundamental dispute between two parties over a single piece of information. These subgames are chained together such that a child within a subgame is the root of its own subgame, which is visualized in the [resolution](#resolution) section. There are two types of sub-games in the fault dispute game: 1. Output Roots 2. Execution Trace Commitments At and above the split depth, all subgame roots correspond to [output roots][g-output-root], or commitments to the full state of L2 at a given L2 block number. Below the split depth, subgame roots correspond to commitments to the fault proof VM's state at a given instruction step.
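The split-depth rule for the two subgame types reduces to a depth comparison. A minimal sketch, assuming an illustrative `SPLIT_DEPTH` value (the actual value is preset per FDG deployment):

```python
SPLIT_DEPTH = 30  # illustrative; preset per FDG deployment

def subgame_kind(depth: int) -> str:
    """Classify a subgame root by its depth in the game tree.
    At and above the split depth (i.e. depth <= SPLIT_DEPTH), roots are
    output-root commitments; below it, they are FPVM state commitments."""
    if depth <= SPLIT_DEPTH:
        return "output-root"
    return "execution-trace"
```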
#### Game Tree The Game Tree is a binary tree of positions. Every claim in the DAG references a position in the Game Tree. The Game Tree has a split depth and maximum depth, `SPLIT_DEPTH` and `MAX_GAME_DEPTH` respectively, that are both preset to an FDG implementation. The split depth defines the maximum depth at which claims about [output roots][g-output-root] can occur, and below it, execution trace bisection occurs. Thus, the Game Tree contains $2^{d+1}-1$ positions, where $d$ is the `MAX_GAME_DEPTH` (unless $d=0$, in which case there's only 1 position). The full game tree, with a layer of the tree allocated to output bisection, and sub-trees after an arbitrary split depth, looks like: ![ob-tree](/static/assets/ob-tree.png) #### Position A position represents the location of a claim in the Game Tree. This is represented by a "generalized index" (or **gindex**) where the high-order bit is the level in the tree and the remaining bits are a unique bit pattern, allowing a unique identifier for each node in the tree. The **gindex** of a position $n$ can be calculated as $2^{d(n)} + idx(n)$, where: * $d(n)$ is a function returning the depth of the position in the Game Tree * $idx(n)$ is a function returning the index of the position at its depth (starting from the left). Positions at the deepest level of the game tree correspond to indices in the execution trace, whereas claims at the split depth represent single L2 blocks' [output roots][g-output-root]. Positions higher up the game tree also cover the deepest, right-most positions relative to the current position. We refer to this coverage as the **trace index** of a Position. > This means claims commit to an execution trace that terminates at the same index as their Position's trace index. > That is, for a given trace index $n$, its state witness hash corresponds to the $S\_n$'th state in the trace. Note that there can be multiple positions covering the same *trace index*.
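The position arithmetic above can be written out directly. A minimal sketch (the helper names are illustrative; the trace index of a position is the execution-trace index of its deepest, right-most descendant):

```python
def gindex(depth: int, idx: int) -> int:
    """Generalized index: 2^depth + index-at-depth."""
    return (1 << depth) + idx

def depth(g: int) -> int:
    """Depth of a position: position of the gindex's high-order bit."""
    return g.bit_length() - 1

def index_at_depth(g: int) -> int:
    """Index of the position at its depth, counting from the left."""
    return g - (1 << depth(g))

def trace_index(g: int, max_depth: int) -> int:
    """Execution-trace index covered by position g: the index at MAX_GAME_DEPTH
    of its right-most descendant."""
    d = depth(g)
    return ((index_at_depth(g) + 1) << (max_depth - d)) - 1
```

For example, with a maximum depth of 2, positions 5 (depth 2) and 2 (depth 1) share trace index 1, illustrating that multiple positions can cover the same trace index.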
#### MAX\_CLOCK\_DURATION This is an immutable, preset to an FDG implementation, representing the maximum amount of time that may accumulate on a team's [chess clock](#game-clock). #### CLOCK\_EXTENSION This is an immutable, preset to an FDG implementation, representing the flat credit that is given to a team's clock if their clock has less than `CLOCK_EXTENSION` seconds remaining. #### Freeloader Claims Due to the subgame resolution logic, there are certain moves which result in the correct final resolution of the game, but do not pay out bonds to the correct parties. An example of this is as follows: 1. Alice creates a dispute game with an honest root claim. 2. Bob counters the honest root with a correct claim at the implied L2 block number. 3. Alice performs a defense move against Bob's counter, as the divergence exists later in Bob's view of the chain state. 4. Bob attacks his own claim. Bob's attack against his own claim *is* a counter to a bad claim, but with the incorrect pivot direction. If left untouched, because it exists at a position further left than Alice's, he will reclaim his own bond upon resolution. Because of this, the honest challenger must always counter freeloader claims for incentive compatibility to be preserved. Critically, freeloader claims, if left untouched, do not cause incorrect resolution of the game globally. ### Core Game Mechanics This section specifies the core game mechanics of the FDG. The full FDG mechanics includes a [specification for Bonds](bond-incentives.md). Readers should understand basic game mechanics before reading up on the Bond specification. #### Actors The game involves two types of participants (or Players): **Challengers** and **Defenders**. These players are grouped into separate teams, each employing distinct strategies to interact with the game. Team members share a common goal regarding the game's outcome. Players interact with the game primarily through *moves*.
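The `CLOCK_EXTENSION` credit can be sketched as a top-up rule. This is a simplified model of the wording above, with illustrative constant values; the actual contract applies the extension at move time and may differ in detail:

```python
MAX_CLOCK_DURATION = 302_400  # illustrative value, in seconds
CLOCK_EXTENSION = 10_800      # illustrative value, in seconds

def clock_after_extension(remaining: int) -> int:
    """If a team's clock has less than CLOCK_EXTENSION seconds remaining,
    grant the flat credit by topping it up to CLOCK_EXTENSION (model)."""
    if remaining < CLOCK_EXTENSION:
        return CLOCK_EXTENSION
    return remaining
```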
#### Moves A Move is a challenge against an existing claim and must include an alternate claim asserting a different trace. Moves can either be attacks or defenses and serve to update the DAG by adding nodes and edges targeting the disputed claim. Moves within the fault dispute game can claim two separate values: [output roots][g-output-root] and execution trace commitments. At and above the `SPLIT_DEPTH`, claims correspond to output roots, while below the split depth, they correspond to execution trace commitments. Initially, claims added to the DAG are *uncontested* (i.e. not **countered**). Once a move targets a claim, that claim is considered countered. The status of a claim — whether it's countered or not — helps determine its validity and, ultimately, the game's winner. ##### Attack A logical move made when a claim is disagreed with. A claim at the relative attack position to a node, `n`, in the Game Tree commits to half of the trace of `n`’s claim. The attack position relative to a node can be calculated by multiplying its gindex by 2. To illustrate this, here's a Game Tree highlighting an attack on a Claim positioned at 6. ![Attacking node 6](/static/assets/attack.png) Attacking the node at 6 creates a new claim positioned at 12. ##### Defend The logical move against a claim when you agree with both it and its parent. A defense at the relative position to a node, `n`, in the Game Tree commits to the first half of n + 1’s trace range. ![Defend at 4](/static/assets/defend.png) Note that because of this, some nodes may never exist within the Game Tree. However, they're not necessary as these nodes have complementary, valid positions with the same trace index within the tree. For example, a Position with gindex 5 has the same trace index as another Position with gindex 2.
We can verify that all trace indices have valid moves within the game: ![Game Tree Showing All Valid Move Positions](/static/assets/valid-moves.png) There may be multiple claims at the same position, so long as their state witness hashes are unique. Each move adds new claims to the Game Tree at strictly increasing depth. Once a claim is at `MAX_GAME_DEPTH`, the only way to dispute such claims is to **step**. #### L2 Block Number Challenge This is a special type of action, made by the Challenger, to counter a root claim. Given an output root preimage and its corresponding RLP-encoded L2 block header, the L2 block number can be verified. This process ensures the integrity and authenticity of an L2 block number. The procedure for this verification involves three steps: checking the output root preimage, validating the block hash preimage, and extracting the block number from the RLP-encoded header. By comparing the challenger-supplied preimages and the extracted block number against their claimed values, the consistency of the L2 block number with the one in the provided header can be confirmed, detecting any discrepancies. Root claims made with an invalid L2 block number can be disputed through a special challenge. This challenge is validated in the FDG contract using the aforementioned procedure. However, it is crucial to note that this challenge can only be issued against the root claim, as it's the only entity making explicit claims on the L2 block number. A successful challenge effectively disputes the root claim once its subgame is resolved. #### Step At `MAX_GAME_DEPTH`, the positions of claims correspond to indices of an execution trace. It's at this point that the FDG is able to query the VM to determine the validity of claims, by checking the states they're committing to. This is done by applying the VM's STF to the state a claim commits to. If the STF post-state does not match the claimed state, the challenge succeeds.
```solidity
/// @notice Perform an instruction step via an on-chain fault proof processor.
/// @dev This function should point to a fault proof processor in order to execute
/// a step in the fault proof program on-chain. The interface of the fault proof
/// processor contract should adhere to the `IBigStepper` interface.
/// @param _claimIndex The index of the challenged claim within `claimData`.
/// @param _isAttack Whether the step is an attack or a defense.
/// @param _stateData The stateData of the step is the preimage of the claim at the given
/// prestate, which is at `_stateIndex` if the move is an attack and `_claimIndex` if
/// the move is a defense. If the step is an attack on the first instruction, it is
/// the absolute prestate of the fault proof VM.
/// @param _proof Proof to access memory nodes in the VM's merkle state tree.
function step(uint256 _claimIndex, bool _isAttack, bytes calldata _stateData, bytes calldata _proof) external;
```

#### Step Types

Similar to moves, there are two ways to step on a claim: attack or defend. These determine the pre-state input to the VM STF and the expected output.

* **Attack Step** - Challenges a claim by providing a pre-state, proving an invalid state transition. It uses the previous state in the execution trace as input and expects the disputed claim's state as output. There must exist a claim in the DAG that commits to the input.
* **Defense Step** - Challenges a claim by proving it was an invalid attack, thereby defending the disputed ancestor's claim. It uses the disputed claim's state as input and expects the next state in the execution trace as output. There must exist a claim in the DAG that commits to the expected output.

The FDG step handles the inputs to the VM and asserts the expected output. A step that successfully proves an invalid post-state (when attacking) or pre-state (when defending) is a successful counter against the disputed claim.
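The two step types can be sketched with a toy state transition function; the integer "VM" below is purely illustrative, since the real FDG delegates execution to an on-chain FPVM via `IBigStepper`.

```python
# Illustrative sketch of attack vs. defense step semantics against a toy
# execution trace; the real STF is a single fault proof VM instruction.

def stf(state: int) -> int:
    """Toy single-instruction state transition function."""
    return state + 1

def attack_step_succeeds(trace: list[int], i: int, claimed: int) -> bool:
    """Attack: input is the previous state trace[i - 1]; the step counters
    the claim if the STF post-state does not match the claimed state."""
    return stf(trace[i - 1]) != claimed

def defend_step_succeeds(trace: list[int], i: int, claimed_next: int) -> bool:
    """Defend: input is the disputed claim's state trace[i]; the step
    counters if the STF post-state does not match the next claimed state."""
    return stf(trace[i]) != claimed_next

honest = [0, 1, 2, 3]  # honest execution trace
print(attack_step_succeeds(honest, 2, 7))       # True: 7 is a bad claim at index 2
print(defend_step_succeeds(honest, 2, 3))       # False: 3 is the honest next state
```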
Players interface with `step` by providing an indicator of attack and state data (including any proofs) that corresponds to the expected pre/post state (depending on whether it's an attack or defend). The FDG will assert that an existing claim commits to the state data provided by players.

#### PreimageOracle Interaction

Certain steps (VM state transitions) require external data to be available in the `PreimageOracle`. To ensure a successful state transition, players should provide this data in advance. The FDG provides the following interface to manage data loaded to the `PreimageOracle`:

```solidity
/// @notice Posts the requested local data to the VM's `PreimageOracle`.
/// @param _ident The local identifier of the data to post.
/// @param _execLeafIdx The index of the leaf claim in an execution subgame that requires the local data for a step.
/// @param _partOffset The offset of the data to post.
function addLocalData(uint256 _ident, uint256 _execLeafIdx, uint256 _partOffset) external;
```

The `addLocalData` function loads local data into the VM's `PreimageOracle`. This data consists of bootstrap data for the program. There are multiple sets of local preimage keys that belong to the `FaultDisputeGame` contract due to the ability for players to bisect to any block $n \rightarrow n + 1$ state transition since the configured anchor state. The `_execLeafIdx` parameter enables a search for the starting / disputed outputs to be performed such that the contract can write to and reference unique local keys in the `PreimageOracle` for each of these $n \rightarrow n + 1$ transitions.
| Identifier | Description |
| ---------- | ------------------------------------------------------ |
| `1` | Parent L1 head hash at the time of the proposal |
| `2` | Starting output root hash (commits to block # `n`) |
| `3` | Disputed output root hash (commits to block # `n + 1`) |
| `4` | Disputed L2 block number (block # `n + 1`) |
| `5` | L2 Chain ID |

For global `keccak256` preimages, there are two routes for players to submit:

1. Small preimages atomically.
2. Large preimages via streaming.

Global `keccak256` preimages are non-context specific and can be submitted directly to the `PreimageOracle` via the `loadKeccak256PreimagePart` function, which takes the part offset as well as the full preimage. In the event that the preimage is too large to be submitted through calldata in a single block, challengers must resort to the streaming option.

**Large Preimage Proposals**

Large preimage proposals allow submitters to stream in a large preimage over multiple transactions, alongside commitments to the intermediate state of the `keccak256` function after absorbing/permuting each $1088$-bit block. This data is progressively merkleized on-chain as it is streamed in, with each leaf constructed as follows:

```solidity
/// @notice Returns a leaf hash to add to a preimage proposal merkle tree.
/// @param input A single 136 byte chunk of the input.
/// @param blockIndex The index of the block that `input` corresponds to in the full preimage's absorption.
/// @param stateCommitment The hash of the full 5x5 state matrix *after* absorbing and permuting `input`.
function hashLeaf(
    bytes memory input,
    uint256 blockIndex,
    bytes32 stateCommitment
) internal view returns (bytes32 leaf) {
    require(input.length == 136, "input must be exactly the size of the keccak256 rate");
    leaf = keccak256(abi.encodePacked(input, blockIndex, stateCommitment));
}
```

Once the full preimage and all intermediate state commitments have been posted, the large preimage proposal enters a challenge period. During this time, a challenger can locally reconstruct the merkle tree that was progressively built on-chain by scanning the block bodies that contain the proposer's leaf preimages. If they detect that a commitment to the intermediate state of the hash function is incorrect at any step, they may perform a single-step dispute for the proposal in the `PreimageOracle`. This involves:

1. Creating a merkle proof for the agreed-upon prestate leaf within the proposal's merkle root (not necessary if the invalid leaf is the first one, since the setup state of the matrix is constant).
2. Creating a merkle proof for the disputed post state leaf within the proposal's merkle root.
3. Computing the state matrix at the agreed-upon prestate (not necessary if the invalid leaf is the first one, since the setup state of the matrix is constant).

The challenger then submits this data to the `PreimageOracle`, where the post state leaf's claimed input is absorbed into the pre state leaf's state matrix and the SHA3 permutation is executed on-chain. After that, the resulting state matrix is hashed and compared with the proposer's claim in the post state leaf. If the hash does not match, the proposal is marked as challenged and may not be finalized.

If, after the challenge period has concluded, a proposal has no challenges, it may be finalized and the preimage part may be placed into the authorized mappings for the FPVM to read.

#### Team Dynamics

Challengers seek to dispute the root claim, while Defenders aim to support it.
Both types of actors will move accordingly to support their team. For Challengers, this means attacking the root claim and disputing claims positioned at even depths in the Game Tree. Defenders do the opposite by disputing claims positioned at odd depths.

Players on either team are motivated to support the actions of their teammates. This involves countering disputes against claims made by their team (assuming these claims are honest).

Uncontested claims are likely to result in a loss, as explained later under [Resolution](#resolution).

#### Game Clock

Every claim in the game has a Clock. A claim inherits the clock of its grandparent claim in the DAG (and so on). Akin to a chess clock, it keeps track of the total time each team takes to make moves, preventing delays. Making a move resumes the clock for the disputed claim and pauses it for the newly added one.

If a move is performed where the potential grandchild's clock has less than `CLOCK_EXTENSION` seconds remaining, the potential grandchild's clock is granted exactly `CLOCK_EXTENSION` seconds. This combats the situation where a challenger must inherit a malicious party's clock when countering a [freeloader claim](#freeloader-claims), in order to preserve incentive compatibility for the honest party. As the extension only applies to the potential grandchild's clock, the maximum possible extension for the game is bounded and scales with the `MAX_GAME_DEPTH`.

If the potential grandchild is an execution trace bisection root claim and its clock has less than `CLOCK_EXTENSION` seconds remaining, exactly `CLOCK_EXTENSION * 2` seconds are allocated for the potential grandchild. This extra time is allotted to allow for completion of the off-chain FPVM run to generate the initial instruction trace.

A move against a particular claim is no longer possible once the parent of the disputed claim's Clock has accumulated `MAX_CLOCK_DURATION` seconds, at which point the claim's clock has *expired*.
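The clock-extension rules above can be sketched as follows; the constants are illustrative placeholders, not the values deployed on Base.

```python
# Illustrative sketch of the Game Clock extension rules; CLOCK_EXTENSION
# and MAX_CLOCK_DURATION below are example values, not deployed parameters.

CLOCK_EXTENSION = 10_800       # e.g. 3 hours
MAX_CLOCK_DURATION = 302_400   # e.g. 3.5 days

def remaining_after_move(inherited_remaining: int, is_exec_bisection_root: bool) -> int:
    """Time left on a potential grandchild's clock once a move is made."""
    if inherited_remaining < CLOCK_EXTENSION:
        # Execution trace bisection roots get double the extension to allow
        # the off-chain FPVM run that generates the instruction trace.
        return CLOCK_EXTENSION * 2 if is_exec_bisection_root else CLOCK_EXTENSION
    return inherited_remaining

def clock_expired(accumulated_seconds: int) -> bool:
    """No further moves are possible once the parent clock has accumulated
    MAX_CLOCK_DURATION seconds."""
    return accumulated_seconds >= MAX_CLOCK_DURATION

print(remaining_after_move(100, False))  # 10800
print(remaining_after_move(100, True))   # 21600
```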
#### Resolution

Resolving the FDG determines which team won the game. To do this, we use the internal subgame structure. Each claim within the game is the root of its own subgame. These subgames are modeled as nested DAGs, each with a max depth of 1. In order for a claim to be considered countered, at least one of its children must be uncountered. Subgames also cannot be resolved until all of their children, which are subgames themselves, have been resolved and the potential opponent's chess clock has run out. To determine whether the potential opponent's chess clock has run out, and therefore no more moves against the subgame are possible, the duration elapsed on the subgame root's parent clock is added to the difference between the current time and the timestamp of the subgame root's creation.

Because each claim is the root of its own subgame, truth percolates upwards towards the root claim by resolving each individual subgame bottom-up. In a game like the one below, we can resolve up from the deepest subgames. Here, we'd resolve `b0` to uncountered and `a0` to countered by walking up from their deepest children, and once all children of the root game are recursively resolved, we can resolve the root to countered due to `b0` remaining uncountered.

![Subgame resolution example](https://github.com/ethereum-optimism/optimism/assets/8406232/d2b708a0-539e-439d-96bd-c2f66f3a45f8)

Another example is this game, which has a slightly different structure. Here, the root claim will also be countered due to `b0` remaining uncountered.

![Subgame resolution variant](https://github.com/ethereum-optimism/optimism/assets/8406232/9b20ba8d-0b64-47b3-9962-5533f7eb4ef7)

Given these rules, players are motivated to move quickly to challenge all dishonest claims. Each move bisects the historical state of L2 and eventually, `MAX_GAME_DEPTH` is reached where disputes can be settled conclusively.
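The countered/uncountered recursion described above can be sketched as follows; the claim names follow the first diagram, and the dictionary encoding of the claim DAG is illustrative.

```python
# Illustrative sketch of subgame resolution: a claim is countered iff at
# least one of its (recursively resolved) children is uncountered.

def is_countered(children: dict, claim: str) -> bool:
    """Resolve the subgame rooted at `claim` bottom-up."""
    return any(not is_countered(children, child)
               for child in children.get(claim, []))

# First example game: b0 is uncontested, so it resolves uncountered and
# counters the root; a0 is countered by its uncontested child a1.
children = {
    "root": ["a0", "b0"],
    "a0": ["a1"],
    "b0": [],
}
print(is_countered(children, "root"))  # True: root is countered by b0
```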
Dishonest players are disincentivized to participate, via backwards induction, as an invalid claim won't remain uncontested. Further incentives can be added to the game by requiring claims to be bonded, while rewarding game winners using the bonds of dishonest claims.

##### Resolving the L2 Block Number Challenge

The resolution of an L2 block number challenge occurs in the same manner as subgame resolution, with one caveat: the L2 block number challenger, if it exists, must be the winner of a root subgame. Thus, no moves against the root, including uncontested ones, can win a root subgame that has an L2 block number challenge.

#### Finalization

Once the game is resolved, it must wait for the `disputeGameFinalityDelaySeconds` on the `OptimismPortal` to pass before it can be finalized, after which bonds can be distributed via the process outlined in [Bond Incentives: Game Finalization](bond-incentives.md#game-finalization).

## Honest Challenger (Fault Dispute Game)

### Overview

The honest challenger is an agent interacting in the [Fault Dispute Game](fault-dispute-game.md) that supports honest claims and disputes false claims. An honest challenger strives to ensure a correct, truthful game resolution. The honest challenger is also *rational*, as any deviation from its behavior will result in negative outcomes. This document specifies the expected behavior of an honest challenger.

The Honest Challenger has two primary duties:

1. Support valid root claims in Fault Dispute Games.
2. Dispute invalid root claims in Fault Dispute Games.

The honest challenger polls the `DisputeGameFactory` contract for new and ongoing Fault Dispute Games. For verifying the legitimacy of claims, it relies on a synced, trusted rollup node as well as a trace provider (e.g. [Cannon](../cannon-fault-proof-vm.md)).
The trace provider must be configured with the [ABSOLUTE\_PRESTATE](fault-dispute-game.md#execution-trace) of the game being interacted with to generate the traces needed to make truthful claims.

### Invariants

To ensure an accurate and incentive-compatible fault dispute system, the honest challenger behavior must preserve three invariants for any game:

1. The game resolves as `DefenderWins` if the root claim is correct and `ChallengerWins` if the root claim is incorrect
2. The honest challenger is refunded the bond for every claim it posts and paid the bond of the parent of that claim
3. The honest challenger never counters its own claim

### Fault Dispute Game Responses

The honest challenger determines which claims to counter by iterating through the claims in the order they are stored in the contract. This ordering ensures that a claim's ancestors are processed prior to the claim itself. For each claim, the honest challenger determines and tracks the set of honest responses to all claims, regardless of whether that response already exists in the full game state.

The root claim is considered to be an honest claim if and only if it has a [state witness hash](fault-dispute-game.md#claims) that agrees with the honest challenger's state witness hash for the root claim.

The honest challenger should counter a claim if and only if:

1. The claim is a child of a claim in the set of honest responses
2. The set of honest responses contains a sibling to the claim with a trace index greater than or equal to the claim's trace index

Note that this implies the honest challenger never counters its own claim, since there is at most one honest counter to each claim, so an honest claim never has an honest sibling.

#### Moves

To respond to a claim with a depth in the range of `[1, MAX_DEPTH]`, the honest challenger determines if the claim has a valid commitment.
If the state witness hash matches the honest challenger's at the same trace index, then the honest challenger disagrees with the claim's stance and moves to [defend](fault-dispute-game.md#defend) it. Otherwise, the claim is [attacked](fault-dispute-game.md#attack).

The claim that would be added as a result of the move is added to the set of honest moves being tracked. If the resulting claim does not already exist in the full game state, the challenger issues the move by calling the `FaultDisputeGame` contract.

#### Steps

At the max depth of the game, claims represent commitments to the state of the fault proof VM at single instruction step intervals. Because the game can no longer bisect further, the only option for an honest challenger countering these claims is to execute a VM step on-chain to disprove the claim at `MAX_GAME_DEPTH`.

If the `counteredBy` of the claim being countered is non-zero, the claim has already been countered and the honest challenger does not perform any action. Otherwise, similar to the above section, the honest challenger issues an [attack step](fault-dispute-game.md#step-types) in response to claims with invalid state witness commitments, and a *defense step* otherwise.

#### Timeliness

The honest challenger responds to claims as soon as possible to prevent the clock of its counter-claim from expiring.

### Resolution

When the [chess clock](fault-dispute-game.md#game-clock) of a [subgame root](fault-dispute-game.md#resolution) has run out, the subgame can be resolved. The honest challenger should resolve all subgames in bottom-up order, until the subgame rooted at the game root is resolved.

The honest challenger accomplishes this by calling the `resolveClaim` function on the `FaultDisputeGame` contract. Once the root claim's subgame is resolved, the challenger then finally calls the `resolve` function to resolve the entire game.
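The bottom-up resolution order above amounts to a post-order traversal of the claim DAG. A minimal sketch, with illustrative claim names and call-string encoding:

```python
# Illustrative sketch of the honest challenger's resolution sequence:
# resolveClaim for every subgame, children before parents, then resolve.

def resolution_calls(children: dict, root: str) -> list[str]:
    """Return the ordered list of contract calls for resolution."""
    calls = []

    def visit(claim: str) -> None:
        for child in children.get(claim, []):
            visit(child)                      # resolve deepest subgames first
        calls.append(f"resolveClaim({claim})")

    visit(root)
    calls.append("resolve()")                 # finally settle the whole game
    return calls

game = {"root": ["a0"], "a0": ["a1"]}
print(resolution_calls(game, "root"))
# ['resolveClaim(a1)', 'resolveClaim(a0)', 'resolveClaim(root)', 'resolve()']
```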
The `FaultDisputeGame` does not put a time cap on resolution - because of the liveness assumption on honest challengers and the bonds attached to the claims they’ve countered, challengers are economically incentivized to resolve the game promptly to capture the bonds. ## Stage One Decentralization [g-l2-proposal]: ../../../reference/glossary.md#l2-output-root-proposals This section of the specification contains the system design for stage one decentralization, with a fault-proof system for [output proposals][g-l2-proposal] and the integration with the `OptimismPortal` contract, which is the arbiter of withdrawals on L1. ## OptimismPortal ### Overview The `OptimismPortal` contract is the primary interface for deposits and withdrawals between the L1 and L2 chains within Base. The `OptimismPortal` contract allows users to create "deposit transactions" on the L1 chain that are automatically executed on the L2 chain within a bounded amount of time. Additionally, the `OptimismPortal` contract allows users to execute withdrawal transactions by proving that such a withdrawal was initiated on the L2 chain. The `OptimismPortal` verifies the correctness of these withdrawal transactions against Output Roots that have been declared valid by the L1 Fault Proof system. ### Definitions #### Proof Maturity Delay The **Proof Maturity Delay** is the minimum amount of time that a withdrawal must be a [Proven Withdrawal](#proven-withdrawal) before it can be finalized. #### Proven Withdrawal A **Proven Withdrawal** is a withdrawal transaction that has been proven against some Output Root by a user. 
Users can prove withdrawals against any Dispute Game contract that meets the following conditions: * The game is a [Registered Game](anchor-state-registry.md#registered-game) * The game is not a [Retired Game](anchor-state-registry.md#retired-game) * The game has a game type that matches the current [Respected Game Type](anchor-state-registry.md#respected-game-type) * The game has not resolved in favor of the Challenger Notably, the `OptimismPortal` allows users to prove withdrawals against games that are currently in progress (games that are not [Resolved Games](anchor-state-registry.md#resolved-game)). Users may re-prove a withdrawal at any time. User withdrawals are stored on a per-user basis such that re-proving a withdrawal cannot cause the timer for [finalizing a withdrawal](#finalized-withdrawal) to be reset for another user. #### Finalized Withdrawal A **Finalized Withdrawal** is a withdrawal transaction that was previously a Proven Withdrawal and meets a number of additional conditions that allow the withdrawal to be executed. Users can finalize a withdrawal if they have previously proven the withdrawal and their withdrawal meets the following conditions: * Withdrawal is a [Proven Withdrawal](#proven-withdrawal) * Withdrawal was proven at least [Proof Maturity Delay](#proof-maturity-delay) seconds ago * Withdrawal was proven against a game with a [Valid Claim](anchor-state-registry.md#valid-claim) * Withdrawal was not previously finalized #### Valid Withdrawal A **Valid Withdrawal** is a withdrawal transaction that was correctly executed on the L2 system as would be reported by a perfect oracle for the query. #### Invalid Withdrawal An **Invalid Withdrawal** is any withdrawal that is not a [Valid Withdrawal](#valid-withdrawal). #### L2 Withdrawal Sender The **L2 Withdrawal Sender** is the address of the account that triggered a given withdrawal transaction on L2. 
The `OptimismPortal` is expected to expose a variable that includes this value when [finalizing](#finalized-withdrawal) a withdrawal.

#### Receive Default Gas Limit

The receive default gas limit is the gas limit provided for simple ETH deposits that are triggered when a user sends ETH to the `OptimismPortal` via the `receive` function. This gas limit is currently set to a value of 100,000 gas.

#### Minimum Gas Limit

The minimum gas limit is the minimum amount of L2 gas that must be purchased when creating a deposit transaction. This limit increases linearly based on the size of the calldata to prevent users from creating L2 resource usage without paying for it. The minimum gas limit is calculated as: calldata\_byte\_count \* 40 + 21000.

#### Unsafe Target

An **Unsafe Target** is a target address that is considered unsafe for withdrawal or deposit transactions. Unsafe targets include the OptimismPortal contract itself and the ETHLockbox contract. Targeting these addresses could potentially create attack vectors.

#### Block Output

A **Block Output**, commonly called an **Output**, is a data structure that wraps the key hash elements of a given L2 block. The structure of the Block Output is versioned via a 32-byte version identifier. The current Block Output version is `0x0000000000000000000000000000000000000000000000000000000000000000` (V0).

A V0 Block Output has the following structure:

```solidity
struct BlockOutput {
    bytes32 version;
    bytes32 stateRoot;
    bytes32 messagePasserStorageRoot;
    bytes32 blockHash;
}
```

Where:

* `version` is a version identifier that describes the structure of the Output Root
* `stateRoot` is the state root of the L2 block this Output Root corresponds to
* `messagePasserStorageRoot` is the storage root of the `L2ToL1MessagePasser` contract at the L2 block this Output Root corresponds to
* `blockHash` is the block hash of the L2 block this Output Root corresponds to

#### Output Root

An **Output Root** is a commitment to a [Block Output](#block-output).
A detailed description of this commitment can be found [on this page](../proposer.md#l2-output-commitment-construction).

#### Super Output

A **Super Output** is a data structure that commits to all of the [Block Outputs](#block-output) for all chains within the Superchain Interop Set at a given timestamp. A Super Output can also commit to a single Block Output to maintain compatibility with chains outside of the Interop Set. The structure of the Super Output is versioned (1 byte). The current version is `0x01` (V1).

A V1 Super Output has the following structure:

```solidity
struct OutputRootWithChainId {
    uint256 chainId;
    bytes32 root;
}

struct SuperOutput {
    uint64 timestamp;
    OutputRootWithChainId[] outputRoots;
}
```

The output root for each chain in the super root MUST be for the block with a timestamp where `Time_B` is strictly greater than `Time_S - BlockTime` and less than or equal to `Time_S`, where `Time_S` is the super root timestamp, `BlockTime` is the chain block time, and `Time_B` is the block timestamp. That is, the output root must be from the last possible block at or before the super root timestamp. The output roots in the super root MUST be sorted by chain ID ascending.

#### Super Root

A **Super Root** is a commitment to a [Super Output](#super-output), computed as:

```solidity
keccak256(encodeSuperRoot(SuperRoot))
```

Where `encodeSuperRoot` for the V1 Super Output is:

```solidity
function encodeSuperRoot(SuperRoot memory root) returns (bytes) {
    require(root.outputRoots.length > 0); // Super Root must have at least one Output Root.
    return concat(
        0x01, // Super Root version byte
        root.timestamp,
        [concat(outputRoot.chainId, outputRoot.root) for outputRoot in root.outputRoots]
    );
}
```

### Assumptions

#### aOP-001: Dispute Game contracts properly report important properties

We assume that the `FaultDisputeGame` and `PermissionedDisputeGame` contracts properly and faithfully report the following properties:

* Game type
* L2 block number
* Root claim value
* Game extra data
* Creation timestamp
* Resolution timestamp
* Resolution result
* Whether the game was the respected game type at creation

We also specifically assume that the game creation timestamp and the resolution timestamp are not set to values in the future.

##### Mitigations

* Existing audit on the `FaultDisputeGame` contract
* Integration testing

#### aOP-002: DisputeGameFactory properly reports its created games

We assume that the `DisputeGameFactory` contract properly and faithfully reports the games it has created.

##### Mitigations

* Existing audit on the `DisputeGameFactory` contract
* Integration testing

#### aOP-003: Incorrectly resolving games will be invalidated before they have Valid Claims

We assume that any games that are resolved incorrectly will be invalidated either by [blacklisting](anchor-state-registry.md#blacklisted-game) or by [retirement](anchor-state-registry.md#retired-game) BEFORE they are considered to have [Valid Claims](anchor-state-registry.md#valid-claim). Proper Games that resolve in favor of the Defender will be considered to have Valid Claims after the [Dispute Game Finality Delay](anchor-state-registry.md#dispute-game-finality-delay-airgap) has elapsed UNLESS the Pause Mechanism is active. Therefore, in the absence of the Pause Mechanism, parties responsible for game invalidation have exactly the Dispute Game Finality Delay to invalidate a game after it resolves incorrectly. If the Pause Mechanism is active, then any incorrectly resolving games must be invalidated before the pause is deactivated.
##### Mitigations * Stakeholder incentives / processes * Incident response plan * Monitoring ### Dependencies * [iASR-001](anchor-state-registry.md#iasr-001-games-are-represented-as-proper-games-accurately) * [iASR-002](anchor-state-registry.md#iasr-002-all-valid-claims-are-truly-valid-claims) ### Invariants #### iOP-001: Invalid Withdrawals can never be finalized We require that [Invalid Withdrawals](#invalid-withdrawal) can never be [finalized](#finalized-withdrawal) for any reason. ##### Impact **Severity: Critical** If this invariant is broken, any number of arbitrarily bad outcomes could happen. Most obviously, we would expect all bridge systems relying on the `OptimismPortal` to be immediately compromised. #### iOP-002: Valid Withdrawals can always be finalized in bounded time We require that [Valid Withdrawals](#valid-withdrawal) can always be [finalized](#finalized-withdrawal) within some reasonable, bounded amount of time. ##### Impact **Severity: Critical** If this invariant is broken, we would expect that users are unable to withdraw bridged assets. We see this as a critical system risk. ### Function Specification #### constructor * MUST set the value of the [Proof Maturity Delay](#proof-maturity-delay). #### initialize * MUST only be callable by the ProxyAdmin or its owner. * MUST set the value of the `SystemConfig` contract. * MUST set the value of the `AnchorStateRegistry` contract. * MUST assert that the ETHLockbox state is valid based on the feature flag. * MUST set the value of the [L2 Withdrawal Sender](#l2-withdrawal-sender) variable to the default value if the value is not set already. * MUST initialize the resource metering configuration. #### paused Returns the current state of the `SystemConfig.paused()` function. #### guardian Returns the address of the Guardian as per `SystemConfig.guardian()`. #### ethLockbox Returns the address of the ETHLockbox configured for this contract. 
If the contract has not been configured for this OptimismPortal, this function will return `address(0)`.

#### proofMaturityDelaySeconds

Returns the value of the [Proof Maturity Delay](#proof-maturity-delay).

#### disputeGameFactory

Returns the DisputeGameFactory contract from the AnchorStateRegistry contract.

#### disputeGameFinalityDelaySeconds

**Legacy Function**

Returns the value of the [Dispute Game Finality Delay](anchor-state-registry.md#dispute-game-finality-delay-airgap) as per a call to `AnchorStateRegistry.disputeGameFinalityDelaySeconds()`.

#### respectedGameType

**Legacy Function**

Returns the value of the current [Respected Game Type](anchor-state-registry.md#respected-game-type) as per a call to `AnchorStateRegistry.respectedGameType()`.

#### respectedGameTypeUpdatedAt

**Legacy Function**

Returns the value of the current [Retirement Timestamp](anchor-state-registry.md#retirement-timestamp) as per a call to `AnchorStateRegistry.retirementTimestamp()`.

#### l2Sender

Returns the address of the [L2 Withdrawal Sender](#l2-withdrawal-sender). If the `OptimismPortal` has not been initialized then this value will be `address(0)` and should not be used. If the `OptimismPortal` is not currently executing a withdrawal transaction then this value will be `0x000000000000000000000000000000000000dEaD` and should not be used.

#### proveWithdrawalTransaction

Allows a user to [prove](#proven-withdrawal) a withdrawal transaction.

* MUST revert if the system is paused.
* MUST revert if the withdrawal target is an [Unsafe Target](#unsafe-target).
* MUST revert if the withdrawal is being proven against a game that is not a [Proper Game](anchor-state-registry.md#proper-game).
* MUST revert if the withdrawal is being proven against a game that is not a [Respected Game](anchor-state-registry.md#respected-game).
* MUST revert if the withdrawal is being proven against a game that has resolved in favor of the Challenger.
* MUST revert if the current timestamp is less than or equal to the dispute game's creation timestamp.
* MUST revert if the proof provided by the user of the preimage of the Output Root that the dispute game argues about is invalid. This proof is verified by hashing the user-provided preimage and comparing the result to the root claim of the referenced dispute game.
* MUST revert if the provided merkle trie proof that the withdrawal was included within the root claim of the provided dispute game is invalid.
* MUST otherwise store a record of the withdrawal proof that includes the hash of the proven withdrawal, the address of the game against which it was proven, and the block timestamp at which the proof transaction was submitted.
* MUST add the proof submitter to the list of submitters for this withdrawal hash.
* MUST emit a `WithdrawalProven` event with the withdrawal hash, sender, and target.
* MUST emit a `WithdrawalProvenExtension1` event with the withdrawal hash and proof submitter address.

#### checkWithdrawal

Checks that a withdrawal transaction can be [finalized](#finalized-withdrawal).

* MUST revert if the withdrawal being finalized has already been finalized.
* MUST revert if the withdrawal being finalized has not been proven.
* MUST revert if the withdrawal was proven at a timestamp less than or equal to the creation timestamp of the dispute game it was proven against, which would signal an unexpected proving bug. Note that this prevents withdrawals from being proven in the same block that a dispute game is created.
* MUST revert if the withdrawal being finalized has been proven less than [Proof Maturity Delay](#proof-maturity-delay) seconds ago.
* MUST revert if the withdrawal being finalized was proven against a game that does not have a [Valid Claim](anchor-state-registry.md#valid-claim).

#### finalizeWithdrawalTransaction

Allows a user to [finalize](#finalized-withdrawal) a withdrawal transaction.
* MUST delegate to `finalizeWithdrawalTransactionExternalProof` with `msg.sender` as the proof submitter. #### donateETH Allows any address to donate ETH to the contract without triggering a deposit to L2. * MUST accept ETH payments via the payable modifier. * MUST not perform any state-changing operations. * MUST not trigger a deposit transaction to L2. #### finalizeWithdrawalTransactionExternalProof Allows a user to [finalize](#finalized-withdrawal) a withdrawal transaction using a proof submitted by another address. * MUST revert if the system is paused. * MUST revert if the function is called while a previous withdrawal is being executed. * MUST revert if the withdrawal target is an [Unsafe Target](#unsafe-target). * MUST revert if the withdrawal being finalized does not pass `checkWithdrawal`. * MUST mark the withdrawal as finalized. * MUST unlock ETH from the ETHLockbox if the withdrawal includes an ETH value AND the OptimismPortal has an ETHLockbox configured AND the ETHLockbox system feature is active. * MUST set the L2 Withdrawal Sender variable correctly. * MUST execute the withdrawal transaction by executing a contract call to the target address with the data and ETH value specified within the withdrawal using AT LEAST the minimum amount of gas specified by the withdrawal. * MUST unset the L2 Withdrawal Sender after the withdrawal call. * MUST emit a `WithdrawalFinalized` event with the withdrawal hash and success status. * MUST lock any unused ETH back into the ETHLockbox if the call to the target address fails AND the OptimismPortal has an ETHLockbox configured AND the ETHLockbox system feature is active. * MUST revert if the withdrawal call fails and the transaction origin is the estimation address, to help determine exact gas costs. #### numProofSubmitters Returns the number of proof submitters for a given withdrawal hash. * MUST return the length of the proofSubmitters array for the specified withdrawal hash. * MUST NOT change state. 
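The [Minimum Gas Limit](#minimum-gas-limit) formula (`calldata_byte_count * 40 + 21000`) can be sketched as follows; the Python helper is illustrative, not part of the contract interface.

```python
# Illustrative sketch of the minimum deposit gas limit calculation:
# 40 gas per calldata byte on top of the 21000 base transaction cost.

def minimum_gas_limit(calldata: bytes) -> int:
    return len(calldata) * 40 + 21_000

print(minimum_gas_limit(b""))            # 21000: empty calldata
print(minimum_gas_limit(b"\x00" * 100))  # 25000: 100 bytes of calldata
```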
#### receive Accepts ETH value and creates a deposit transaction to the sender's address on L2. * MUST be payable and accept ETH. * MUST create a deposit transaction where the sender and target are the same address, refer to [depositTransaction](#deposittransaction) for full specification of expected behavior. * MUST use the [receive default gas limit](#receive-default-gas-limit) as the gas limit. * MUST set contract creation flag to false. * MUST use empty data for the deposit. * MUST transform the sender address to its alias if the caller is a contract. * MUST emit a TransactionDeposited event with the appropriate parameters. #### minimumGasLimit Computes the minimum gas limit for a deposit transaction based on calldata size. * MUST calculate the minimum gas limit using the formula: calldata\_byte\_count \* 40 + 21000. #### superchainConfig Returns the `SuperchainConfig` contract address. * MUST return the address of the `SuperchainConfig` contract stored in the `SystemConfig` contract that was set during initialization. #### disputeGameBlacklist **Legacy Function** Checks if a dispute game is blacklisted. * MUST delegate to the blacklist of the `AnchorStateRegistry` contract that was set during initialization. * MUST return whether the given dispute game is blacklisted. #### depositTransaction Accepts deposits of ETH and data, and emits a TransactionDeposited event for use in deriving deposit transactions. Note that if a deposit is made by a contract, its address will be aliased when retrieved using `tx.origin` or `msg.sender`. Consider using the CrossDomainMessenger contracts for a simpler developer experience. * MUST lock any ETH value (msg.value) in the ETHLockbox contract if the OptimismPortal has an ETHLockbox configured AND the ETHLockbox system feature is active. * MUST revert if the target address is not address(0) for contract creations. * MUST revert if the gas limit provided is below the [minimum gas limit](#minimum-gas-limit). 
* MUST revert if the calldata is too large (> 120,000 bytes).
* MUST transform the sender address to its alias if the caller is a contract.
* MUST apply resource metering to the gas limit parameter.
* MUST emit a TransactionDeposited event with the from address, to address, deposit version, and opaque data.

## L2 Execution Engine

This document outlines the modifications, configuration, and usage of an L1 execution engine for L2.

### 1559 Parameters

The execution engine must be able to take a per-chain configuration which specifies the EIP-1559 denominator and EIP-1559 elasticity. After Canyon it should also take a new value, `EIP1559DenominatorCanyon`, and use that as the denominator in the 1559 formula rather than the prior denominator.

The formula for EIP-1559 is otherwise not modified.

Starting with Holocene, the EIP-1559 parameters become [dynamically configurable](../../upgrades/holocene/exec-engine.md#dynamic-eip-1559-parameters). Starting with Jovian, a [configurable minimum base fee](../../upgrades/jovian/exec-engine.md#minimum-base-fee) is introduced.

### Extra Data

Before Holocene, the genesis block may contain an arbitrary `extraData` value, whereas all normal blocks must have an **empty** `extraData` field. With Holocene, the `extraData` field [encodes the EIP-1559 parameters](../../upgrades/holocene/exec-engine.md#dynamic-eip-1559-parameters). With Jovian, the `extraData` encoding is extended to [include `minBaseFee`](../../upgrades/jovian/exec-engine.md#minimum-base-fee).

### Deposited transaction processing

The Engine interfaces abstract away transaction types with [EIP-2718][eip-2718]. To support rollup functionality, processing of a new Deposit [`TransactionType`][eip-2718-transactions] is implemented by the engine; see the [deposits specification][deposit-spec]. This type of transaction can mint L2 ETH, run EVM, and introduce L1 information to enshrined contracts in the execution state.
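The deposit `minimumGasLimit` formula given earlier (`calldata_byte_count * 40 + 21000`) is simple enough to sanity-check directly. An illustrative Python one-liner:

```python
def minimum_gas_limit(calldata_byte_count: int) -> int:
    # Each calldata byte adds 40 gas on top of the 21000 base transaction cost.
    return calldata_byte_count * 40 + 21_000
```

For example, a deposit at the 120,000-byte calldata ceiling would need a gas limit of at least 4,821,000.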
[deposit-spec]: ../bridging/deposits.md

#### Deposited transaction boundaries

Transactions cannot be blindly trusted; trust is established through authentication. Unlike other transaction types, deposits are not authenticated by a signature: the rollup node authenticates them, outside of the engine.

To process deposited transactions safely, the deposits MUST be authenticated first:

* Ingested directly through the trusted Engine API.
* Part of sync towards a trusted block hash (trusted through previous Engine API instruction).

Deposited transactions MUST never be consumed from the transaction pool. *The transaction pool can be disabled in a deposits-only rollup.*

### Fees

Sequenced transactions (i.e. not applicable to deposits) are charged with three types of fees: priority fees, base fees, and L1-cost fees.

#### Fee Vaults

The three types of fees are collected in three distinct L2 fee-vault deployments for accounting purposes: fee payments are not registered as internal EVM calls and are thus easier to distinguish this way.

These are hardcoded addresses, pointing at pre-deployed proxy contracts. The proxies are backed by vault contract deployments, based on `FeeVault`, to route vault funds to L1 securely.

| Vault Name | Predeploy |
| ------------------- | ---------------------------------------------------------- |
| Sequencer Fee Vault | [`SequencerFeeVault`](evm/predeploys.md#sequencerfeevault) |
| Base Fee Vault | [`BaseFeeVault`](evm/predeploys.md#basefeevault) |
| L1 Fee Vault | [`L1FeeVault`](evm/predeploys.md#l1feevault) |

#### Priority fees (Sequencer Fee Vault)

Priority fees follow the [eip-1559] specification and are collected by the fee recipient of the L2 block. The block fee recipient (a.k.a. coinbase address) is set to the Sequencer Fee Vault address.

#### Base fees (Base Fee Vault)

Base fees largely follow the [eip-1559] specification, with the exception that base fees are not burned but accumulate in the Base Fee Vault ETH account balance.
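A rough model of how the three fee components of a sequenced transaction are routed to the vault addresses from the table above (illustrative Python; in reality the accounting happens natively in the client, not as EVM calls, and the function name is an assumption):

```python
# Predeploy proxy addresses from the fee vault table.
SEQUENCER_FEE_VAULT = "0x4200000000000000000000000000000000000011"
BASE_FEE_VAULT = "0x4200000000000000000000000000000000000019"
L1_FEE_VAULT = "0x420000000000000000000000000000000000001a"

def route_fees(gas_used: int, base_fee: int, priority_fee: int, l1_cost: int) -> dict:
    """Split a transaction's fees (in Wei) across the three vaults."""
    return {
        SEQUENCER_FEE_VAULT: gas_used * priority_fee,  # coinbase / priority fees
        BASE_FEE_VAULT: gas_used * base_fee,           # base fees are not burned
        L1_FEE_VAULT: l1_cost,                         # L1 data-availability cost
    }
```

The key departure from L1 is the second line: base fees credit the Base Fee Vault rather than being burned.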
#### L1-Cost fees (L1 Fee Vault) The protocol funds batch-submission of sequenced L2 transactions by charging L2 users an additional fee based on the estimated batch-submission costs. This fee is charged from the L2 transaction-sender ETH balance, and collected into the L1 Fee Vault. The exact L1 cost function to determine the L1-cost fee component of a L2 transaction depends on the upgrades that are active. ##### Pre-Ecotone Before Ecotone activation, L1 cost is calculated as: `(rollupDataGas + l1FeeOverhead) * l1BaseFee * l1FeeScalar / 1e6` (big-int computation, result in Wei and `uint256` range) Where: * `rollupDataGas` is determined from the *full* encoded transaction (standard EIP-2718 transaction encoding, including signature fields): * `rollupDataGas = zeroes * 4 + ones * 16` * `l1FeeOverhead` is the Gas Price Oracle `overhead` value. * `l1FeeScalar` is the Gas Price Oracle `scalar` value. * `l1BaseFee` is the L1 base fee of the latest L1 origin registered in the L2 chain. Note that the `rollupDataGas` uses the same byte cost accounting as defined in [eip-2028], except the full L2 transaction now counts towards the bytes charged in the L1 calldata. This behavior matches pre-Bedrock L1-cost estimation of L2 transactions. Compression, batching, and intrinsic gas costs of the batch transactions are accounted for by the protocol with the Gas Price Oracle `overhead` and `scalar` parameters. 
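The pre-Ecotone cost function above can be expressed directly in integer arithmetic. An illustrative Python sketch (the byte counts come from the full signed EIP-2718 encoding; parameter values in the usage note are made up):

```python
def pre_ecotone_l1_cost(tx_bytes: bytes, l1_base_fee: int,
                        l1_fee_overhead: int, l1_fee_scalar: int) -> int:
    """L1-cost fee in Wei: (rollupDataGas + overhead) * l1BaseFee * scalar / 1e6."""
    zeroes = tx_bytes.count(0)
    ones = len(tx_bytes) - zeroes
    rollup_data_gas = zeroes * 4 + ones * 16  # eip-2028 calldata byte costs
    return (rollup_data_gas + l1_fee_overhead) * l1_base_fee * l1_fee_scalar // 1_000_000
```

For instance, a 4-byte transaction with two zero bytes yields `rollup_data_gas = 40`, so with `overhead = 60`, a 1 gwei L1 base fee, and `scalar = 684000`, the fee is 68.4 gwei.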
The Gas Price Oracle `l1FeeOverhead` and `l1FeeScalar`, as well as the `l1BaseFee` of the L1 origin, can be accessed in two interchangeable ways:

* read from the deposited L1 attributes (`l1FeeOverhead`, `l1FeeScalar`, `basefee`) of the current L2 block
* read from the L1 Block Info contract (`0x4200000000000000000000000000000000000015`)
  * using the respective solidity `uint256`-getter functions (`l1FeeOverhead`, `l1FeeScalar`, `basefee`)
  * using direct storage-reads:
    * L1 basefee as big-endian `uint256` in slot `1`
    * Overhead as big-endian `uint256` in slot `5`
    * Scalar as big-endian `uint256` in slot `6`

##### Ecotone L1-Cost fee changes (EIP-4844 DA)

Ecotone allows posting batches via blobs, which are subject to a new fee market. To account for this feature, L1 cost is computed as:

`(zeroes*4 + ones*16) * (16*l1BaseFee*l1BaseFeeScalar + l1BlobBaseFee*l1BlobBaseFeeScalar) / 16e6`

Where:

* the computation is an unlimited-precision integer computation, with the result in Wei and having `uint256` range.
* `zeroes` and `ones` are the count of zero and non-zero bytes respectively in the *full* encoded signed transaction.
* `l1BaseFee` is the L1 base fee of the latest L1 origin registered in the L2 chain.
* `l1BlobBaseFee` is the blob gas price, computed as described in [EIP-4844][4844-gas] from the header of the latest registered L1 origin block.

Conceptually, what the above function captures is the formula below, where `compressedTxSize = (zeroes*4 + ones*16) / 16` can be thought of as a rough approximation of how many bytes the transaction occupies in a compressed batch.

`(compressedTxSize) * (16*l1BaseFee*l1BaseFeeScalar + l1BlobBaseFee*l1BlobBaseFeeScalar) / 1e6`

The precise cost function used by Ecotone at the top of this section preserves precision under integer arithmetic by postponing the inner division by 16 until the very end.
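The Ecotone formula, sketched the same way (illustrative Python; note the single division by `16e6` at the very end, which is what preserves precision under integer arithmetic):

```python
def ecotone_l1_cost(tx_bytes: bytes, l1_base_fee: int, l1_base_fee_scalar: int,
                    l1_blob_base_fee: int, l1_blob_base_fee_scalar: int) -> int:
    """L1-cost fee in Wei per the Ecotone formula."""
    zeroes = tx_bytes.count(0)
    ones = len(tx_bytes) - zeroes
    compressed_weight = zeroes * 4 + ones * 16
    # Calldata and blob fee markets are blended, each weighted by its scalar.
    fee_scaled = (16 * l1_base_fee * l1_base_fee_scalar
                  + l1_blob_base_fee * l1_blob_base_fee_scalar)
    return compressed_weight * fee_scaled // 16_000_000
```

Setting either scalar to zero recovers a pure-calldata or pure-blob pricing mode.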
[4844-gas]: https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4844.md#gas-accounting The two base fee values and their respective scalars can be accessed in two interchangeable ways: * read from the deposited L1 attributes (`l1BaseFeeScalar`, `l1BlobBaseFeeScalar`, `basefee`, `blobBaseFee`) of the current L2 block * read from the L1 Block Info contract (`0x4200000000000000000000000000000000000015`) * using the respective solidity getter functions * using direct storage-reads: * basefee `uint256` in slot `1` * blobBaseFee `uint256` in slot `7` * l1BaseFeeScalar big-endian `uint32` slot `3` at offset `12` * l1BlobBaseFeeScalar big-endian `uint32` in slot `3` at offset `8` ### Engine API #### `engine_forkchoiceUpdatedV2` This updates which L2 blocks the engine considers to be canonical (`forkchoiceState` argument), and optionally initiates block production (`payloadAttributes` argument). Within the rollup, the types of forkchoice updates translate as: * `headBlockHash`: block hash of the head of the canonical chain. Labeled `"unsafe"` in user JSON-RPC. Nodes may apply L2 blocks out of band ahead of time, and then reorg when L1 data conflicts. * `safeBlockHash`: block hash of the canonical chain, derived from L1 data, unlikely to reorg. * `finalizedBlockHash`: irreversible block hash, matches lower boundary of the dispute period. 
To support rollup functionality, one backwards-compatible change is introduced to [`engine_forkchoiceUpdatedV2`][engine_forkchoiceUpdatedV2]: the extended `PayloadAttributesV2`.

##### Extended PayloadAttributesV2

[`PayloadAttributesV2`][PayloadAttributesV2] is extended to:

```js
PayloadAttributesV2: {
    timestamp: QUANTITY
    prevRandao: DATA (32 bytes)
    suggestedFeeRecipient: DATA (20 bytes)
    withdrawals: array of WithdrawalV1
    transactions: array of DATA
    noTxPool: bool
    gasLimit: QUANTITY or null
}
```

The type notation used here refers to the [HEX value encoding] used by the [Ethereum JSON-RPC API specification][JSON-RPC-API], as this structure will need to be sent over JSON-RPC. `array` refers to a JSON array.

Each item of the `transactions` array is a byte list encoding a transaction: `TransactionType || TransactionPayload` or `LegacyTransaction`, as defined in [EIP-2718][eip-2718]. This is equivalent to the `transactions` field in [`ExecutionPayloadV2`][ExecutionPayloadV2].

The `transactions` field is optional:

* If empty or missing: no changes to engine behavior. The sequencers will (if enabled) build a block by consuming transactions from the transaction pool.
* If present and non-empty: the payload MUST be produced starting with this exact list of transactions. The [rollup driver][rollup-driver] determines the transaction list based on deterministic L1 inputs.

The `noTxPool` field is optional as well, and extends the `transactions` meaning:

* If `false`, the execution engine is free to pack additional transactions from external sources like the tx pool into the payload, after any of the `transactions`. This is the default behavior an L1 node implements.
* If `true`, the execution engine must not change anything about the given list of `transactions`.

If the `transactions` field is present, the engine must execute the transactions in order and return `STATUS_INVALID` if there is an error processing the transactions.
It must return `STATUS_VALID` if all of the transactions could be executed without error.

**Note**: The state transition rules have been modified such that deposits will never fail, so if `engine_forkchoiceUpdatedV2` returns `STATUS_INVALID` it is because a batched transaction is invalid.

The `gasLimit` is optional with respect to compatibility with L1, but required when used as a rollup. This field overrides the gas limit used during block-building. If the engine is used as a rollup and the value is not specified, `STATUS_INVALID` is returned.

[rollup-driver]: ../consensus/index.md

#### `engine_forkchoiceUpdatedV3`

See [`engine_forkchoiceUpdatedV2`](#engine_forkchoiceupdatedv2) for a description of the forkchoice updated method. `engine_forkchoiceUpdatedV3` **must only be called with Ecotone payload.**

To support rollup functionality, one backwards-compatible change is introduced to [`engine_forkchoiceUpdatedV3`][engine_forkchoiceUpdatedV3]: the extended `PayloadAttributesV3`.

##### Extended PayloadAttributesV3

[`PayloadAttributesV3`][PayloadAttributesV3] is extended to:

```js
PayloadAttributesV3: {
    timestamp: QUANTITY
    prevRandao: DATA (32 bytes)
    suggestedFeeRecipient: DATA (20 bytes)
    withdrawals: array of WithdrawalV1
    parentBeaconBlockRoot: DATA (32 bytes)
    transactions: array of DATA
    noTxPool: bool
    gasLimit: QUANTITY or null
    eip1559Params: DATA (8 bytes) or null
    minBaseFee: QUANTITY or null
}
```

The requirements of this object are the same as extended [`PayloadAttributesV2`](#extended-payloadattributesv2), with the addition of `parentBeaconBlockRoot`, which is the parent beacon block root from the L1 origin block of the L2 block.

Starting at Ecotone, the `parentBeaconBlockRoot` must be set to the L1 origin `parentBeaconBlockRoot`, or a zero `bytes32` if the Dencun functionality with `parentBeaconBlockRoot` is not active on L1.

Starting with Holocene, the `eip1559Params` field must encode the EIP-1559 parameters. It must be `null` before.
See [Dynamic EIP-1559 Parameters](../../upgrades/holocene/exec-engine.md#dynamic-eip-1559-parameters) for details.

Starting with Jovian, the `minBaseFee` field is added. It must be `null` before Jovian. See [Jovian Minimum Base Fee](../../upgrades/jovian/exec-engine.md#minimum-base-fee) for details.

#### `engine_newPayloadV2`

No modifications to [`engine_newPayloadV2`][engine_newPayloadV2]. Applies an L2 block to the engine state.

#### `engine_newPayloadV3`

[`engine_newPayloadV3`][engine_newPayloadV3] applies an Ecotone L2 block to the engine state. There are no modifications to this API. `engine_newPayloadV3` **must only be called with Ecotone payload.**

The additional parameters should be set as follows:

* `expectedBlobVersionedHashes` MUST be an empty array.
* `parentBeaconBlockRoot` MUST be the parent beacon block root from the L1 origin block of the L2 block.

#### `engine_newPayloadV4`

[`engine_newPayloadV4`][engine_newPayloadV4] applies an Isthmus L2 block to the engine state. The `ExecutionPayload` parameter will contain an extra field, `withdrawalsRoot`, after the Isthmus hardfork. `engine_newPayloadV4` **must only be called with Isthmus payload.**

The additional parameters should be set as follows:

* `executionRequests` MUST be an empty array.

#### `engine_getPayloadV2`

No modifications to [`engine_getPayloadV2`][engine_getPayloadV2]. Retrieves a payload by ID, prepared by `engine_forkchoiceUpdatedV2` when called with `payloadAttributes`.

#### `engine_getPayloadV3`

[`engine_getPayloadV3`][engine_getPayloadV3] retrieves a payload by ID, prepared by `engine_forkchoiceUpdatedV3` when called with `payloadAttributes`.
`engine_getPayloadV3` **must only be called with Ecotone payload.**

##### Extended Response

The [response][GetPayloadV3Response] is extended to:

```js
{
    executionPayload: ExecutionPayload
    blockValue: QUANTITY
    blobsBundle: BlobsBundle
    shouldOverrideBuilder: BOOLEAN
    parentBeaconBlockRoot: DATA (32 bytes)
}
```

[GetPayloadV3Response]: https://github.com/ethereum/execution-apis/blob/main/src/engine/cancun.md#response-2

In Ecotone, `parentBeaconBlockRoot` MUST be set to the `parentBeaconBlockRoot` from the L1 origin block of the L2 block.

#### `engine_getPayloadV4`

[`engine_getPayloadV4`][engine_getPayloadV4] retrieves a payload by ID, prepared by `engine_forkchoiceUpdatedV3` when called with `payloadAttributes`. `engine_getPayloadV4` **must only be called with Isthmus payload.**

#### `engine_signalSuperchainV1`

Optional extension to the Engine API. Signals superchain information to the Engine: V1 signals which protocol version is recommended and required.

Types:

```javascript
SuperchainSignal: {
    recommended: ProtocolVersion;
    required: ProtocolVersion;
}
```

`ProtocolVersion`: encoded for RPC as defined in the protocol version format specification.

Parameters:

* `signal`: `SuperchainSignal`, the signaled superchain information.

Returns:

* `ProtocolVersion`: the latest supported Base protocol version of the execution engine.

The execution engine SHOULD warn the user when the recommended version is newer than the current version supported by the execution engine. The execution engine SHOULD take safety precautions if it does not meet the required protocol version. This may include halting the engine, with consent of the execution engine operator.
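For node operator tooling, the shape of an `engine_signalSuperchainV1` request body over JSON-RPC can be sketched as follows (illustrative Python; the version strings in the usage note are placeholders, and the exact `ProtocolVersion` wire encoding is defined in the protocol version format specification, not here):

```python
import json

def superchain_signal_request(recommended: str, required: str, request_id: int = 1) -> str:
    """Build a JSON-RPC request body for engine_signalSuperchainV1."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        # The single parameter is a SuperchainSignal object.
        "method": "engine_signalSuperchainV1",
        "params": [{"recommended": recommended, "required": required}],
    })
```

The response carries the engine's own latest supported protocol version, which the caller can compare against `required` to decide whether safety precautions apply.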
### Networking The execution engine can acquire all data through the rollup node, as derived from L1: *P2P networking is strictly optional.* However, to not bottleneck on L1 data retrieval speed, the P2P network functionality SHOULD be enabled, serving: * Peer discovery ([Disc v5][discv5]) * [`eth/66`][eth66]: * Transaction pool (consumed by sequencer nodes) * State sync (happy-path for fast trustless db replication) * Historical block header and body retrieval * *New blocks are acquired through the consensus layer instead (rollup node)* No modifications to L1 network functionality are required, except configuration: * [`networkID`][network-id]: Distinguishes the L2 network from L1 and testnets. Equal to the [`chainID`][chain-id] of the rollup network. * Activate Merge fork: Enables Engine API and disables propagation of blocks, as block headers cannot be authenticated without consensus layer. * Bootnode list: DiscV5 is a shared network, [bootstrap][discv5-rationale] is faster through connecting with L2 nodes first. [discv5]: https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md [eth66]: https://github.com/ethereum/devp2p/blob/master/caps/eth.md [network-id]: https://github.com/ethereum/devp2p/blob/master/caps/eth.md#status-0x00 [chain-id]: https://github.com/ethereum/EIPs/blob/master/EIPS/eip-155.md [discv5-rationale]: https://github.com/ethereum/devp2p/blob/master/discv5/discv5-rationale.md ### Sync The execution engine can operate sync in different ways: * Happy-path: rollup node informs engine of the desired chain head as determined by L1, completes through engine P2P. * Worst-case: rollup node detects stalled engine, completes sync purely from L1 data, no peers required. The happy-path is more suitable to bring new nodes online quickly, as the engine implementation can sync state faster through methods like [snap-sync][snap-sync]. [snap-sync]: https://github.com/ethereum/devp2p/blob/master/caps/snap.md #### Happy-path sync 1. 
The rollup node informs the engine of the L2 chain head, unconditionally (part of regular node operation):

* Bedrock / Canyon / Delta Payloads
  * [`engine_newPayloadV2`][engine_newPayloadV2] is called with the latest L2 block received from P2P.
  * [`engine_forkchoiceUpdatedV2`][engine_forkchoiceUpdatedV2] is called with the current `unsafe`/`safe`/`finalized` L2 block hashes.
* Ecotone Payloads
  * [`engine_newPayloadV3`][engine_newPayloadV3] is called with the latest L2 block received from P2P.
  * [`engine_forkchoiceUpdatedV3`][engine_forkchoiceUpdatedV3] is called with the current `unsafe`/`safe`/`finalized` L2 block hashes.

2. The engine requests headers from peers, in reverse until the parent hash matches the local chain.
3. The engine catches up:
   a) A form of state sync is activated towards the finalized or head block hash.
   b) A form of block sync pulls block bodies and processes towards the head block hash.

The exact P2P-based sync is out of scope for the L2 specification: the operation within the engine is exactly the same as with L1 (although with an EVM that supports deposits).

#### Worst-case sync

1. The engine is out of sync, not peered and/or stalled due to other reasons.
2. The rollup node maintains the latest head from the engine (poll `eth_getBlockByNumber` and/or maintain a header subscription).
3. The rollup node activates sync if the engine is out of sync but not syncing through P2P (`eth_syncing`).
4. The rollup node inserts blocks, derived from L1, one by one, potentially adapting to L1 reorg(s), as outlined in the [rollup node spec].

[rollup node spec]: ../consensus/index.md

### Ecotone: disable Blob-transactions

[EIP-4844] introduces Blob transactions: featuring all the functionality of an [EIP-1559] transaction, plus a list of "blobs" ("Binary Large Objects"), i.e. a dedicated data type for serving Data-Availability as a base layer.
With the Ecotone upgrade, all Cancun L1 execution features are enabled, with the exception of [EIP-4844]: as an L2, Base does not serve blobs, and thus disables this new transaction type.

EIP-4844 is disabled as follows:

* Transaction network-layer announcements, announcing blob-type transactions, are ignored.
* Transactions of the blob type, through the RPC or otherwise, are not allowed into the transaction pool.
* Block-building code does not select EIP-4844 transactions.
* An L2 block state-transition with EIP-4844 transactions is invalid.

The [BLOBBASEFEE opcode](https://eips.ethereum.org/EIPS/eip-7516) is present, but its semantics are altered because no blobs are processed by L2. The opcode will always push a value of 1 onto the stack.

### Ecotone: Beacon Block Root

[EIP-4788] introduces a "beacon block root" into the execution-layer block-header and EVM. This block root is an [SSZ hash-tree-root] of the consensus-layer contents of the previous consensus block. With the adoption of [EIP-4399] in the Bedrock upgrade, Base already includes the `PREVRANDAO` of L1. And thus with [EIP-4788], the L1 beacon block root is made available as well.

For the Ecotone upgrade, this entails that:

* The `parent_beacon_block_root` of the L1 origin is now embedded in the L2 block header.
* The "Beacon roots contract" is deployed at Ecotone upgrade-time, or embedded at genesis if activated at genesis.
* The block state-transition process now includes the same special beacon-block-root EVM processing as L1 Ethereum.
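Per EIP-4788, the beacon roots contract returns a parent beacon block root when called with a 32-byte big-endian timestamp as its calldata. A sketch of assembling such an `eth_call` against the predeploy listed above (illustrative Python; it only builds the request parameters and does not send them):

```python
# Beacon roots contract predeploy address (EIP-4788).
BEACON_ROOTS_ADDRESS = "0x000F3df6D732807Ef1319fB7B8bB8522d0Beac02"

def beacon_root_call(timestamp: int) -> dict:
    """eth_call params for reading a beacon root by timestamp (EIP-4788 'get')."""
    calldata = timestamp.to_bytes(32, "big")  # 32-byte big-endian timestamp
    return {"to": BEACON_ROOTS_ADDRESS, "data": "0x" + calldata.hex()}
```

The contract reverts if no root is stored for the given timestamp, so callers should use the timestamp of a block within the contract's ring-buffer history.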
[SSZ hash-tree-root]: https://github.com/ethereum/consensus-specs/blob/master/ssz/simple-serialize.md#merkleization [EIP-4399]: https://eips.ethereum.org/EIPS/eip-4399 [EIP-4788]: https://eips.ethereum.org/EIPS/eip-4788 [EIP-4844]: https://eips.ethereum.org/EIPS/eip-4844 [eip-1559]: https://eips.ethereum.org/EIPS/eip-1559 [eip-2028]: https://eips.ethereum.org/EIPS/eip-2028 [eip-2718]: https://eips.ethereum.org/EIPS/eip-2718 [eip-2718-transactions]: https://eips.ethereum.org/EIPS/eip-2718#transactions [PayloadAttributesV3]: https://github.com/ethereum/execution-apis/blob/cea7eeb642052f4c2e03449dc48296def4aafc24/src/engine/cancun.md#payloadattributesv3 [PayloadAttributesV2]: https://github.com/ethereum/execution-apis/blob/584905270d8ad665718058060267061ecfd79ca5/src/engine/shanghai.md#PayloadAttributesV2 [ExecutionPayloadV2]: https://github.com/ethereum/execution-apis/blob/main/src/engine/shanghai.md#executionpayloadv2 [engine_forkchoiceUpdatedV3]: https://github.com/ethereum/execution-apis/blob/cea7eeb642052f4c2e03449dc48296def4aafc24/src/engine/cancun.md#engine_forkchoiceupdatedv3 [engine_forkchoiceUpdatedV2]: https://github.com/ethereum/execution-apis/blob/584905270d8ad665718058060267061ecfd79ca5/src/engine/shanghai.md#engine_forkchoiceupdatedv2 [engine_newPayloadV2]: https://github.com/ethereum/execution-apis/blob/584905270d8ad665718058060267061ecfd79ca5/src/engine/shanghai.md#engine_newpayloadv2 [engine_newPayloadV3]: https://github.com/ethereum/execution-apis/blob/cea7eeb642052f4c2e03449dc48296def4aafc24/src/engine/cancun.md#engine_newpayloadv3 [engine_newPayloadV4]: https://github.com/ethereum/execution-apis/blob/869b7f062830ba51a7fd8a51dfa4678c6d36b6ec/src/engine/prague.md#engine_newpayloadv4 [engine_getPayloadV2]: https://github.com/ethereum/execution-apis/blob/584905270d8ad665718058060267061ecfd79ca5/src/engine/shanghai.md#engine_getpayloadv2 [engine_getPayloadV3]: 
https://github.com/ethereum/execution-apis/blob/a0d03086564ab1838b462befbc083f873dcf0c0f/src/engine/cancun.md#engine_getpayloadv3

[engine_getPayloadV4]: https://github.com/ethereum/execution-apis/blob/869b7f062830ba51a7fd8a51dfa4678c6d36b6ec/src/engine/prague.md#engine_getpayloadv4

[HEX value encoding]: https://ethereum.org/en/developers/docs/apis/json-rpc/#hex-encoding

[JSON-RPC-API]: https://github.com/ethereum/execution-apis

### P2P Modifications

The Ethereum Node Record (ENR) for an Optimism execution node must contain an `opel` key-value pair where the key is `opel` and the value is an [EIP-2124](https://eips.ethereum.org/EIPS/eip-2124) fork id. The EL uses a different key from the CL in order to stop EL and CL nodes from connecting to each other.

## Precompiles

### Overview

[Precompiled contracts](../../../reference/glossary.md#precompiled-contract-precompile) exist on Base at predefined addresses. They are similar to predeploys but are implemented as native code in the EVM as opposed to bytecode. Precompiles are used for computationally expensive operations that would be cost-prohibitive to implement in Solidity. Where possible, predeploys are preferred, as precompiles must be implemented in every execution client.

Base contains the [standard Ethereum precompiles](https://www.evm.codes/precompiled) as well as a small number of additional precompiles. The following table lists each of the additional precompiles. The system version indicates when the precompile was introduced.

| Name | Address | Introduced |
| ---------- | ------------------------------------------ | ---------- |
| P256VERIFY | 0x0000000000000000000000000000000000000100 | Fjord |

### P256VERIFY

The `P256VERIFY` precompile performs signature verification for the secp256r1 elliptic curve. This curve has widespread adoption: it is used by Passkeys, Apple Secure Enclave, and many other systems.
It is specified as part of [RIP-7212](https://github.com/ethereum/RIPs/blob/master/RIPS/rip-7212.md) and was added to the Base protocol in the Fjord release. The op-geth implementation is [here](https://github.com/ethereum-optimism/op-geth/blob/optimism/core/vm/contracts.go#L1161-L1193).

Address: `0x0000000000000000000000000000000000000100`

## Predeploys

### Overview

[Predeployed smart contracts](../../../reference/glossary.md#predeployed-contract-predeploy) exist on Optimism at predetermined addresses in the genesis state. They are similar to precompiles but run directly in the EVM rather than as native code outside of the EVM. Predeploys are used instead of precompiles to make it easier for multiclient implementations as well as allowing for more integration with hardhat/foundry network forking.

Predeploy addresses exist in a prefixed namespace `0x4200000000000000000000000000000000000xxx`. Proxies are set at the first 2048 addresses in the namespace, except for the address reserved for the `WETH` predeploy.

The `LegacyERC20ETH` predeploy lives at a special address `0xDeadDeAddeAddEAddeadDEaDDEAdDeaDDeAD0000` and there is no proxy deployed at that account.

The following table includes each of the predeploys. The system version indicates when the predeploy was introduced. The possible values are `Legacy`, `Bedrock`, `Canyon`, `Ecotone`, or `Isthmus`. Deprecated contracts should not be used.
| Name | Address | Introduced | Deprecated | Proxied | | ----------------------------- | ------------------------------------------ | ---------- | ---------- | ------- | | LegacyMessagePasser | 0x4200000000000000000000000000000000000000 | Legacy | Yes | Yes | | DeployerWhitelist | 0x4200000000000000000000000000000000000002 | Legacy | Yes | Yes | | LegacyERC20ETH | 0xDeadDeAddeAddEAddeadDEaDDEAdDeaDDeAD0000 | Legacy | Yes | No | | WETH9 | 0x4200000000000000000000000000000000000006 | Legacy | No | No | | L2CrossDomainMessenger | 0x4200000000000000000000000000000000000007 | Legacy | No | Yes | | L2StandardBridge | 0x4200000000000000000000000000000000000010 | Legacy | No | Yes | | SequencerFeeVault | 0x4200000000000000000000000000000000000011 | Legacy | No | Yes | | OptimismMintableERC20Factory | 0x4200000000000000000000000000000000000012 | Legacy | No | Yes | | L1BlockNumber | 0x4200000000000000000000000000000000000013 | Legacy | Yes | Yes | | GasPriceOracle | 0x420000000000000000000000000000000000000F | Legacy | No | Yes | | L1Block | 0x4200000000000000000000000000000000000015 | Bedrock | No | Yes | | L2ToL1MessagePasser | 0x4200000000000000000000000000000000000016 | Bedrock | No | Yes | | L2ERC721Bridge | 0x4200000000000000000000000000000000000014 | Legacy | No | Yes | | OptimismMintableERC721Factory | 0x4200000000000000000000000000000000000017 | Bedrock | No | Yes | | ProxyAdmin | 0x4200000000000000000000000000000000000018 | Bedrock | No | Yes | | BaseFeeVault | 0x4200000000000000000000000000000000000019 | Bedrock | No | Yes | | L1FeeVault | 0x420000000000000000000000000000000000001a | Bedrock | No | Yes | | SchemaRegistry | 0x4200000000000000000000000000000000000020 | Bedrock | No | Yes | | EAS | 0x4200000000000000000000000000000000000021 | Bedrock | No | Yes | | BeaconBlockRoot | 0x000F3df6D732807Ef1319fB7B8bB8522d0Beac02 | Ecotone | No | No | | OperatorFeeVault | 0x420000000000000000000000000000000000001B | Isthmus | No | Yes | ### LegacyMessagePasser 
[Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/legacy/LegacyMessagePasser.sol)

Address: `0x4200000000000000000000000000000000000000`

The `LegacyMessagePasser` contract stores commitments to withdrawal transactions before the Bedrock upgrade. A merkle proof to a particular storage slot that commits to the withdrawal transaction is used as part of the withdrawing transaction on L1. The expected account that includes the storage slot is hardcoded into the L1 logic. After the Bedrock upgrade, the `L2ToL1MessagePasser` is used instead. Finalizing withdrawals from this contract is no longer supported after Bedrock; the contract is only left in place to allow for alternative bridges that may depend on it. This contract does not forward calls to the `L2ToL1MessagePasser`, and calling it is considered a no-op in the context of doing withdrawals through the `CrossDomainMessenger` system.

Any pending withdrawals that have not been finalized are migrated to the `L2ToL1MessagePasser` as part of the upgrade so that they can still be finalized.

### L2ToL1MessagePasser

[Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/L2/L2ToL1MessagePasser.sol)

Address: `0x4200000000000000000000000000000000000016`

The `L2ToL1MessagePasser` stores commitments to withdrawal transactions. When a user submits the withdrawal transaction on L1, they provide a proof that the transaction that they withdrew on L2 is in the `sentMessages` mapping of this contract.

Any withdrawn ETH accumulates into this contract on L2 and can be permissionlessly removed from the L2 supply by calling the `burn()` function.
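The commit-then-prove pattern of the `L2ToL1MessagePasser` can be modeled as a mapping from a withdrawal hash to a flag. This toy Python model is purely illustrative: the real contract hashes the ABI-encoded withdrawal fields with `keccak256`, while this sketch uses `sha256` over a simplified serialization as a stand-in, and all names are assumed:

```python
import hashlib

class MessagePasserModel:
    """Toy model of sentMessages commitments (NOT the real hashing scheme)."""

    def __init__(self) -> None:
        self.sent_messages: dict = {}  # withdrawal hash -> committed flag
        self.nonce = 0

    def initiate_withdrawal(self, sender: str, target: str,
                            value: int, gas_limit: int, data: bytes) -> bytes:
        # On-chain: keccak256(abi.encode(nonce, sender, target, value, gasLimit, data)).
        # Here: sha256 over a simplified serialization, for illustration only.
        preimage = repr((self.nonce, sender, target, value, gas_limit, data)).encode()
        withdrawal_hash = hashlib.sha256(preimage).digest()
        self.sent_messages[withdrawal_hash] = True
        self.nonce += 1
        return withdrawal_hash
```

The L1 side then proves membership of that hash in this mapping via a merkle proof against the L2 state root, rather than trusting any message content directly.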
### DeployerWhitelist

[Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/legacy/DeployerWhitelist.sol)

Address: `0x4200000000000000000000000000000000000002`

The `DeployerWhitelist` is a predeploy that was used to provide additional safety during the initial phases of Optimism. It previously defined the accounts that were allowed to deploy contracts to the network. Arbitrary contract deployment was subsequently enabled and cannot be turned off.

In the legacy system, this contract was hooked into `CREATE` and `CREATE2` to ensure that the deployer was allowlisted. In the Bedrock system, this contract is no longer used as part of the `CREATE` codepath.

This contract is deprecated and its usage should be avoided.

### LegacyERC20ETH

[Implementation](https://github.com/ethereum-optimism/optimism/blob/a4524ac152b4c9e8eb80beadc9cd772b96243aa2/packages/contracts-bedrock/src/legacy/LegacyERC20ETH.sol)

Address: `0xDeadDeAddeAddEAddeadDEaDDEAdDeaDDeAD0000`

The `LegacyERC20ETH` predeploy represents all ether in the system before the Bedrock upgrade. All ETH was represented as an ERC20 token, and users could opt into the ERC20 interface or the native ETH interface.

The upgrade to Bedrock migrates all ether out of this contract and moves it to its native representation. All of the stateful methods in this contract revert after the Bedrock upgrade.

This contract is deprecated and its usage should be avoided.

### WETH9

[Implementation](https://github.com/ethereum-optimism/optimism/blob/2b1c99b39744579cc226077d356ae9e5f162db4a/packages/contracts-bedrock/src/vendor/WETH9.sol)

Address: `0x4200000000000000000000000000000000000006`

`WETH9` is the standard implementation of Wrapped Ether on Optimism. It is a commonly used contract and is placed as a predeploy so that it is at a deterministic address across Optimism-based networks.
### L2CrossDomainMessenger [Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/L2/L2CrossDomainMessenger.sol) Address: `0x4200000000000000000000000000000000000007` The `L2CrossDomainMessenger` provides a higher-level API for sending cross-domain messages compared to directly calling the `L2ToL1MessagePasser`. It maintains a mapping of L1 messages that have been relayed to L2 to prevent replay attacks, and also allows for replay if the L1 to L2 transaction reverts on L2. Any calls to the `L1CrossDomainMessenger` on L1 are serialized such that they go through the `L2CrossDomainMessenger` on L2. The `relayMessage` function executes a transaction from the remote domain, while the `sendMessage` function sends a transaction to be executed on the remote domain through the remote domain's `relayMessage` function. ### L2StandardBridge [Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/L2/L2StandardBridge.sol) Address: `0x4200000000000000000000000000000000000010` The `L2StandardBridge` is a higher-level API built on top of the `L2CrossDomainMessenger` that provides a standard interface for sending ETH or ERC20 tokens across domains. To deposit a token from L1 to L2, the `L1StandardBridge` locks the token and sends a cross-domain message to the `L2StandardBridge`, which then mints the token to the specified account. To withdraw a token from L2 to L1, the user burns the token on L2 and the `L2StandardBridge` sends a message to the `L1StandardBridge`, which unlocks the underlying token and transfers it to the specified account. The `OptimismMintableERC20Factory` can be used to create an ERC20 token contract on a remote domain that maps to an ERC20 token contract on the local domain, allowing tokens to be deposited to the remote domain. It deploys an `OptimismMintableERC20`, which implements the interface required by the `StandardBridge`.
This contract can also be deployed on L1 to allow for L2 native tokens to be withdrawn to L1. ### L1BlockNumber [Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/legacy/L1BlockNumber.sol) Address: `0x4200000000000000000000000000000000000013` The `L1BlockNumber` returns the last known L1 block number. This contract was introduced in the legacy system and remains backwards compatible by calling out to the `L1Block` contract under the hood. It is recommended to use the `L1Block` contract for getting information about L1 on L2. ### GasPriceOracle [Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/L2/GasPriceOracle.sol) Address: `0x420000000000000000000000000000000000000F` In the legacy system, the `GasPriceOracle` was a permissioned contract to which an offchain actor pushed the L1 base fee and the L2 gas price. The offchain actor observed L1 block headers to obtain the L1 base fee, and observed gas usage on L2 to compute the L2 gas price based on a congestion control algorithm. After Bedrock, the `GasPriceOracle` is no longer a permissioned contract and exists only to preserve the API for offchain gas estimation. The function `getL1Fee(bytes)` accepts an unsigned RLP-encoded transaction and returns the L1 portion of the fee. This fee pays for using L1 as a data availability layer and should be added to the L2 portion of the fee, which pays for execution, to compute the total transaction fee. The values used to compute the L1 portion of the fee prior to the Ecotone upgrade are: * scalar * overhead * decimals After the Bedrock upgrade, these values are managed by the `SystemConfig` contract on L1. The `scalar` and `overhead` values are sent to the `L1Block` contract each block, and the `decimals` value is hardcoded to 6.
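As a rough sketch of how these pre-Ecotone values combine (this is the commonly cited Bedrock-era L1 data fee calculation, shown for illustration rather than as the authoritative definition; consult the fee specification for exact rounding and encoding rules):

```python
def bedrock_l1_fee(tx_data: bytes, l1_base_fee: int, overhead: int,
                   scalar: int, decimals: int = 6) -> int:
    """Sketch of the pre-Ecotone L1 data fee: charge calldata gas
    (4 gas per zero byte, 16 per non-zero byte), add the fixed
    `overhead`, then apply `scalar` with `decimals` fixed at 6."""
    zeroes = tx_data.count(0)
    nonzeroes = len(tx_data) - zeroes
    l1_gas = zeroes * 4 + nonzeroes * 16
    return l1_base_fee * (l1_gas + overhead) * scalar // 10 ** decimals
```

For example, a two-byte payload `b"\x00\x01"` costs 20 units of calldata gas; with an L1 base fee of 100 wei, zero overhead, and a scalar of `10**6` (i.e. 1.0 after the 6 decimals), the L1 portion of the fee is 2000 wei.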
Following the Ecotone upgrade, the values used for L1 fee computation are: * baseFeeScalar * blobBaseFeeScalar * decimals [ecotone-scalars]: ../../../reference/glossary.md#post-ecotone-parameters These new scalar values are managed by the `SystemConfig` contract on L1 via a backwards-compatible [versioned encoding scheme][ecotone-scalars] of its `scalars` storage slot. The `decimals` value remains hardcoded to 6, and the `overhead` value is ignored. ### L1Block [Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/L2/L1Block.sol) Address: `0x4200000000000000000000000000000000000015` [l1-block-predeploy]: ../../../reference/glossary.md#l1-attributes-predeployed-contract The [L1Block][l1-block-predeploy] contract was introduced in Bedrock and is responsible for maintaining L1 context on L2, allowing L1 state to be accessed from L2. ### ProxyAdmin [ProxyAdmin](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/universal/ProxyAdmin.sol) Address: `0x4200000000000000000000000000000000000018` The `ProxyAdmin` is the owner of all of the proxy contracts set at the predeploys. It is itself behind a proxy. The owner of the `ProxyAdmin` has the ability to upgrade any of the other predeploy contracts. ### SequencerFeeVault [Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/L2/SequencerFeeVault.sol) Address: `0x4200000000000000000000000000000000000011` The `SequencerFeeVault` accumulates any transaction priority fees and is set as the value of `block.coinbase`. When enough fees accumulate in this account, they can be withdrawn to an immutable L1 address. To change the L1 address that fees are withdrawn to, the contract must be upgraded by changing its proxy's implementation key.
### OptimismMintableERC20Factory [Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/universal/OptimismMintableERC20Factory.sol) Address: `0x4200000000000000000000000000000000000012` The `OptimismMintableERC20Factory` is responsible for creating ERC20 contracts on L2 into which native L1 tokens can be deposited. These ERC20 contracts can be created permissionlessly and implement the interface required by the `StandardBridge`, so deposits and withdrawals work without further configuration. Each ERC20 contract created by the `OptimismMintableERC20Factory` allows the `L2StandardBridge` to mint and burn tokens, depending on whether the user is depositing from L1 to L2 or withdrawing from L2 to L1. ### OptimismMintableERC721Factory [Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/L2/OptimismMintableERC721Factory.sol) Address: `0x4200000000000000000000000000000000000017` The `OptimismMintableERC721Factory` is responsible for creating ERC721 contracts on L2 into which native L1 NFTs can be deposited. ### BaseFeeVault [Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/L2/BaseFeeVault.sol) Address: `0x4200000000000000000000000000000000000019` The `BaseFeeVault` predeploy receives the base fees on L2. The base fee is not burnt on L2 as it is on L1. Once the contract has received a certain amount of fees, the ETH can be withdrawn to an immutable address on L1. ### L1FeeVault [Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/L2/L1FeeVault.sol) Address: `0x420000000000000000000000000000000000001a` The `L1FeeVault` predeploy receives the L1 portion of the transaction fees. Once the contract has received a certain amount of fees, the ETH can be withdrawn to an immutable address on L1.
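The vault pattern shared by `SequencerFeeVault`, `BaseFeeVault`, and `L1FeeVault` can be sketched as a toy model (Python for illustration; the class and the `min_withdrawal` threshold name are hypothetical, and the real contracts send ETH through an L2-to-L1 withdrawal rather than returning a value):

```python
class ToyFeeVault:
    """Toy model of the fee-vault pattern described above: fees accumulate
    on L2 and become withdrawable to a fixed L1 recipient once a minimum
    amount has been collected."""

    def __init__(self, l1_recipient: str, min_withdrawal: int):
        self.l1_recipient = l1_recipient      # immutable in the real contracts
        self.min_withdrawal = min_withdrawal  # hypothetical threshold parameter
        self.balance = 0

    def receive_fees(self, amount: int) -> None:
        self.balance += amount

    def withdraw(self) -> tuple:
        # Withdrawal is permissionless but only allowed above the threshold.
        if self.balance < self.min_withdrawal:
            raise ValueError("accumulated fees below withdrawal threshold")
        amount, self.balance = self.balance, 0
        # The real contracts initiate an L2-to-L1 withdrawal here.
        return (self.l1_recipient, amount)
```

Because the recipient is immutable, redirecting fees requires upgrading the proxy implementation rather than calling a setter, as noted for the `SequencerFeeVault` above.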
### SchemaRegistry [Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/vendor/eas/SchemaRegistry.sol) Address: `0x4200000000000000000000000000000000000020` The `SchemaRegistry` predeploy implements the global attestation schemas for the `Ethereum Attestation Service` protocol. ### EAS [Implementation](https://github.com/ethereum-optimism/optimism/tree/develop/packages/contracts-bedrock/src/vendor/eas) Address: `0x4200000000000000000000000000000000000021` The `EAS` predeploy implements the `Ethereum Attestation Service` protocol. ### Beacon Block Root Address: `0x000F3df6D732807Ef1319fB7B8bB8522d0Beac02` The `BeaconBlockRoot` predeploy provides access to the L1 beacon block roots. It was added during the Ecotone network upgrade and is specified in [EIP-4788](https://eips.ethereum.org/EIPS/eip-4788). ### Operator Fee Vault [Implementation](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/L2/OperatorFeeVault.sol) Address: `0x420000000000000000000000000000000000001B` See the [Operator Fee Vault](https://specs.optimism.io/protocol/isthmus/predeploys.html#operatorfeevault) spec. ## Preinstalls ### Overview [Preinstalled smart contracts](../../../reference/glossary.md#preinstalled-contract-preinstall) exist on Optimism at predetermined addresses in the genesis state. They are similar to precompiles but run directly in the EVM rather than as native code outside of the EVM, and are developed by third parties unaffiliated with the Optimism Collective. These preinstalls are commonly deployed smart contracts placed at genesis for convenience. It's important to note that these contracts do not have the same security guarantees as [Predeployed smart contracts](../../../reference/glossary.md#predeployed-contract-predeploy). The following table includes each of the preinstalls.
| Name | Address | | ----------------------------------------- | ------------------------------------------ | | Safe | 0x69f4D1788e39c87893C980c06EdF4b7f686e2938 | | SafeL2 | 0xfb1bffC9d739B8D520DaF37dF666da4C687191EA | | MultiSend | 0x998739BFdAAdde7C933B942a68053933098f9EDa | | MultiSendCallOnly | 0xA1dabEF33b3B82c7814B6D82A79e50F4AC44102B | | SafeSingletonFactory | 0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7 | | Multicall3 | 0xcA11bde05977b3631167028862bE2a173976CA11 | | Create2Deployer | 0x13b0D85CcB8bf860b6b79AF3029fCA081AE9beF2 | | CreateX | 0xba5Ed099633D3B313e4D5F7bdc1305d3c28ba5Ed | | Arachnid's Deterministic Deployment Proxy | 0x4e59b44847b379578588920cA78FbF26c0B4956C | | Permit2 | 0x000000000022D473030F116dDEE9F6B43aC78BA3 | | ERC-4337 v0.6.0 EntryPoint | 0x5FF137D4b0FDCD49DcA30c7CF57E578a026d2789 | | ERC-4337 v0.6.0 SenderCreator | 0x7fc98430eaedbb6070b35b39d798725049088348 | | ERC-4337 v0.7.0 EntryPoint | 0x0000000071727De22E5E9d8BAf0edAc6f37da032 | | ERC-4337 v0.7.0 SenderCreator | 0xEFC2c1444eBCC4Db75e7613d20C6a62fF67A167C | ### Safe [Implementation](https://github.com/safe-global/safe-contracts/blob/v1.3.0/contracts/GnosisSafe.sol) Address: `0x69f4D1788e39c87893C980c06EdF4b7f686e2938` A multisignature wallet with support for confirmations using signed messages based on ERC-191. Differs from [SafeL2](#safel2) by not emitting events, to save gas. ### SafeL2 [Implementation](https://github.com/safe-global/safe-contracts/blob/v1.3.0/contracts/GnosisSafeL2.sol) Address: `0xfb1bffC9d739B8D520DaF37dF666da4C687191EA` A multisignature wallet with support for confirmations using signed messages based on ERC-191. Differs from [Safe](#safe) by emitting events. ### MultiSend [Implementation](https://github.com/safe-global/safe-contracts/blob/v1.3.0/contracts/libraries/MultiSend.sol) Address: `0x998739BFdAAdde7C933B942a68053933098f9EDa` Batches multiple transactions into one.
### MultiSendCallOnly [Implementation](https://github.com/safe-global/safe-contracts/blob/v1.3.0/contracts/libraries/MultiSendCallOnly.sol) Address: `0xA1dabEF33b3B82c7814B6D82A79e50F4AC44102B` Batches multiple transactions into one, but allows only calls (no delegatecalls). ### SafeSingletonFactory [Implementation](https://github.com/safe-global/safe-singleton-factory/blob/v1.0.17/source/deterministic-deployment-proxy.yul) Address: `0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7` Singleton factory used by Safe-related contracts, based on [Arachnid's Deterministic Deployment Proxy](#arachnids-deterministic-deployment-proxy). The original library used a pre-signed transaction without a chain ID to allow deployment on different chains. Some chains do not allow such transactions to be submitted; this contract therefore provides the same factory, deployable via a pre-signed transaction that includes the chain ID. The key used to sign is controlled by the Safe team. ### Multicall3 [Implementation](https://github.com/mds1/multicall/blob/v3.1.0/src/Multicall3.sol) Address: `0xcA11bde05977b3631167028862bE2a173976CA11` `Multicall3` has two main use cases: * Aggregate results from multiple contract reads into a single JSON-RPC request. * Execute multiple state-changing calls in a single transaction. ### Create2Deployer [Implementation](https://github.com/mdehoog/create2deployer/blob/69b9a8e112b15f9257ce8c62b70a09914e7be29c/contracts/Create2Deployer.sol) The `Create2Deployer` is a thin Solidity wrapper around the `CREATE2` opcode. It provides the following ABI. ```solidity /** * @dev Deploys a contract using `CREATE2`. The address where the * contract will be deployed can be known in advance via {computeAddress}. * * The bytecode for a contract can be obtained from Solidity with * `type(contractName).creationCode`. * * Requirements: * - `bytecode` must not be empty. * - `salt` must have not been used for `bytecode` already.
* - the factory must have a balance of at least `value`. * - if `value` is non-zero, `bytecode` must have a `payable` constructor. */ function deploy(uint256 value, bytes32 salt, bytes memory code) public; /** * @dev Deployment of the {ERC1820Implementer}. * Further information: https://eips.ethereum.org/EIPS/eip-1820 */ function deployERC1820Implementer(uint256 value, bytes32 salt) public; /** * @dev Returns the address where a contract will be stored if deployed via {deploy}. * Any change in the `bytecodeHash` or `salt` will result in a new destination address. */ function computeAddress(bytes32 salt, bytes32 codeHash) public view returns (address); /** * @dev Returns the address where a contract will be stored if deployed via {deploy} from a * contract located at `deployer`. If `deployer` is this contract's address, returns the * same value as {computeAddress}. */ function computeAddressWithDeployer( bytes32 salt, bytes32 codeHash, address deployer ) public pure returns (address); ``` Address: `0x13b0D85CcB8bf860b6b79AF3029fCA081AE9beF2` When Canyon activates, the contract code at `0x13b0D85CcB8bf860b6b79AF3029fCA081AE9beF2` is set to
`0x6080604052600436106100435760003560e01c8063076c37b21461004f578063481286e61461007157806356299481146100ba57806366cfa057146100da57600080fd5b3661004a57005b600080fd5b34801561005b57600080fd5b5061006f61006a366004610327565b6100fa565b005b34801561007d57600080fd5b5061009161008c366004610327565b61014a565b60405173ffffffffffffffffffffffffffffffffffffffff909116815260200160405180910390f35b3480156100c657600080fd5b506100916100d5366004610349565b61015d565b3480156100e657600080fd5b5061006f6100f53660046103ca565b610172565b61014582826040518060200161010f9061031a565b7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffe082820381018352601f90910116604052610183565b505050565b600061015683836102e7565b9392505050565b600061016a8484846102f0565b949350505050565b61017d838383610183565b50505050565b6000834710156101f4576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152601d60248201527f437265617465323a20696e73756666696369656e742062616c616e636500000060448201526064015b60405180910390fd5b815160000361025f576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820181905260248201527f437265617465323a2062797465636f6465206c656e677468206973207a65726f60448201526064016101eb565b8282516020840186f5905073ffffffffffffffffffffffffffffffffffffffff8116610156576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152601960248201527f437265617465323a204661696c6564206f6e206465706c6f790000000000000060448201526064016101eb565b60006101568383305b6000604051836040820152846020820152828152600b8101905060ff815360559020949350505050565b61014e806104ad83390190565b6000806040838503121561033a57600080fd5b50508035926020909101359150565b60008060006060848603121561035e57600080fd5b8335925060208401359150604084013573ffffffffffffffffffffffffffffffffffffffff8116811461039057600080fd5b809150509250925092565b7f4e487b7100000000000000000000000000000000000000000000000000000000600052604160045260246000fd5b6000806000606084860312156103df57600080fd5b8335925060208
401359150604084013567ffffffffffffffff8082111561040557600080fd5b818601915086601f83011261041957600080fd5b81358181111561042b5761042b61039b565b604051601f82017fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffe0908116603f011681019083821181831017156104715761047161039b565b8160405282815289602084870101111561048a57600080fd5b826020860160208301376000602084830101528095505050505050925092509256fe608060405234801561001057600080fd5b5061012e806100206000396000f3fe6080604052348015600f57600080fd5b506004361060285760003560e01c8063249cb3fa14602d575b600080fd5b603c603836600460b1565b604e565b60405190815260200160405180910390f35b60008281526020818152604080832073ffffffffffffffffffffffffffffffffffffffff8516845290915281205460ff16608857600060aa565b7fa2ef4600d742022d532d4747cb3547474667d6f13804902513b2ec01c848f4b45b9392505050565b6000806040838503121560c357600080fd5b82359150602083013573ffffffffffffffffffffffffffffffffffffffff8116811460ed57600080fd5b80915050925092905056fea26469706673582212205ffd4e6cede7d06a5daf93d48d0541fc68189eeb16608c1999a82063b666eb1164736f6c63430008130033a2646970667358221220fdc4a0fe96e3b21c108ca155438d37c9143fb01278a3c1d274948bad89c564ba64736f6c63430008130033`. ### CreateX [Implementation](https://github.com/pcaversaccio/createx/blob/main/src/CreateX.sol) Address: `0xba5Ed099633D3B313e4D5F7bdc1305d3c28ba5Ed` CreateX introduces additional logic for deploying contracts using `CREATE`, `CREATE2` and `CREATE3`. It adds [salt protection](https://github.com/pcaversaccio/createx#special-features) for sender and chainID and includes a set of helper functions. The `keccak256` of the CreateX bytecode is `0xbd8a7ea8cfca7b4e5f5041d7d4b17bc317c5ce42cfbc42066a00cf26b43eb53f`. 
### Arachnid's Deterministic Deployment Proxy [Implementation](https://github.com/Arachnid/deterministic-deployment-proxy/blob/v1.0.0/source/deterministic-deployment-proxy.yul) Address: `0x4e59b44847b379578588920cA78FbF26c0B4956C` This contract can deploy other contracts with a deterministic address on any chain using `CREATE2`. The `CREATE2` call deploys a contract (like the `CREATE` opcode), but instead of the address being `keccak256(rlp([deployer_address, nonce]))`, it uses the hash of the contract's bytecode and a salt. This means that a given deployer address will deploy the same code to the same address no matter when or where the deployment is issued. The deployer is deployed with a one-time-use account, so no matter what chain the deployer is on, its address will always be the same. This means the only variables in determining the address of your contract are its bytecode hash and the provided salt. Between the use of the `CREATE2` opcode and the one-time-use account for the deployer, this contract ensures that a given contract will exist at the exact same address on every chain, but without having to use the same gas pricing or limits every time. ### Permit2 [Implementation](https://github.com/Uniswap/permit2/blob/0x000000000022D473030F116dDEE9F6B43aC78BA3/src/Permit2.sol) Address: `0x000000000022D473030F116dDEE9F6B43aC78BA3` Permit2 introduces a low-overhead, next-generation token approval/meta-tx system to make token approvals easier, more secure, and more consistent across applications. ### ERC-4337 v0.6.0 EntryPoint [Implementation](https://github.com/eth-infinitism/account-abstraction/blob/v0.6.0/contracts/core/EntryPoint.sol) Address: `0x5FF137D4b0FDCD49DcA30c7CF57E578a026d2789` This contract verifies and executes the bundles of ERC-4337 v0.6.0 [UserOperations](https://www.erc4337.io/docs/understanding-ERC-4337/user-operation) sent to it.
### ERC-4337 v0.6.0 SenderCreator [Implementation](https://github.com/eth-infinitism/account-abstraction/blob/v0.6.0/contracts/core/SenderCreator.sol) Address: `0x7fc98430eaedbb6070b35b39d798725049088348` Helper contract for [EntryPoint](#erc-4337-v060-entrypoint) v0.6.0, to call `userOp.initCode` from a "neutral" address, which is explicitly not `EntryPoint` itself. ### ERC-4337 v0.7.0 EntryPoint [Implementation](https://github.com/eth-infinitism/account-abstraction/blob/v0.7.0/contracts/core/EntryPoint.sol) Address: `0x0000000071727De22E5E9d8BAf0edAc6f37da032` This contract verifies and executes the bundles of ERC-4337 v0.7.0 [UserOperations](https://www.erc4337.io/docs/understanding-ERC-4337/user-operation) sent to it. ### ERC-4337 v0.7.0 SenderCreator [Implementation](https://github.com/eth-infinitism/account-abstraction/blob/v0.7.0/contracts/core/SenderCreator.sol) Address: `0xEFC2c1444eBCC4Db75e7613d20C6a62fF67A167C` Helper contract for [EntryPoint](#erc-4337-v070-entrypoint) v0.7.0, to call `userOp.initCode` from a "neutral" address, which is explicitly not `EntryPoint` itself. ## RPC This document specifies the JSON-RPC methods implemented by the Flashblocks RPC provider. ### Type Definitions All types used in these RPC methods are identical to the standard Base RPC types. No modifications have been made to the existing type definitions. ### Modified Ethereum JSON-RPC Methods The following standard Ethereum JSON-RPC methods are enhanced to support the `pending` tag for querying preconfirmed state. #### `eth_getBlockByNumber` Returns block information for the specified block number. 
#### Parameters * `blockNumber`: `String` - Block number or tag (`"pending"` for preconfirmed state) * `fullTransactions`: `Boolean` - If true, returns full transaction objects; if false, returns transaction hashes #### Returns `Object` - Block object #### Example ```json // Request { "method": "eth_getBlockByNumber", "params": ["pending", false], "id": 1, "jsonrpc": "2.0" } // Response { "id": 1, "jsonrpc": "2.0", "result": { "hash": "0x0000000000000000000000000000000000000000000000000000000000000000", "parentHash": "0x...", "stateRoot": "0x...", "transactionsRoot": "0x...", "receiptsRoot": "0x...", "number": "0x123", "gasUsed": "0x5208", "gasLimit": "0x1c9c380", "timestamp": "0x...", "extraData": "0x", "mixHash": "0x...", "nonce": "0x0", "transactions": ["0x..."] } } ``` #### Fields * `hash`: Block hash calculated from the current flashblock header * `parentHash`: Hash of the parent block * `stateRoot`: Current state root from latest flashblock * `transactionsRoot`: Transactions trie root * `receiptsRoot`: Receipts trie root * `number`: Block number being built * `gasUsed`: Cumulative gas used by all transactions * `gasLimit`: Block gas limit * `timestamp`: Block timestamp * `extraData`: Extra data bytes * `mixHash`: Mix hash value * `nonce`: Block nonce value * `transactions`: Array of transaction hashes or objects #### `eth_getTransactionReceipt` Returns the receipt for a transaction. 
**Parameters:** * `transactionHash`: `String` - Hash of the transaction **Returns:** `Object` - Transaction receipt or `null` **Example:** ```json // Request { "method": "eth_getTransactionReceipt", "params": ["0x..."], "id": 1, "jsonrpc": "2.0" } // Response { "id": 1, "jsonrpc": "2.0", "result": { "transactionHash": "0x...", "blockHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "blockNumber": "0x123", "transactionIndex": "0x0", "from": "0x...", "to": "0x...", "gasUsed": "0x5208", "cumulativeGasUsed": "0x5208", "effectiveGasPrice": "0x...", "status": "0x1", "contractAddress": null, "logs": [], "logsBloom": "0x..." } } ``` **Fields:** * `transactionHash`: Hash of the transaction * `blockHash`: zero hash (`0x000...000`) for preconfirmed transactions * `blockNumber`: Block number containing the transaction * `transactionIndex`: Index of transaction in block * `from`: Sender address * `to`: Recipient address * `gasUsed`: Gas used by this transaction * `cumulativeGasUsed`: Total gas used up to this transaction * `effectiveGasPrice`: Effective gas price paid * `status`: `0x1` for success, `0x0` for failure * `contractAddress`: Address of created contract (for contract creation) * `logs`: Array of log objects * `logsBloom`: Bloom filter for logs #### `eth_getBalance` Returns the balance of an account. **Parameters:** * `address`: `String` - Address to query * `blockNumber`: `String` - Block number or tag (`"pending"` for preconfirmed state) **Returns:** `String` - Account balance in wei (hex-encoded) **Example:** ```json // Request { "method": "eth_getBalance", "params": ["0x...", "pending"], "id": 1, "jsonrpc": "2.0" } // Response { "id": 1, "jsonrpc": "2.0", "result": "0x1bc16d674ec80000" } ``` #### `eth_getTransactionCount` Returns the number of transactions sent from an address (nonce). 
**Parameters:** * `address`: `String` - Address to query * `blockNumber`: `String` - Block number or tag (`"pending"` for preconfirmed state) **Returns:** `String` - Transaction count (hex-encoded) **Example:** ```json // Request { "method": "eth_getTransactionCount", "params": ["0x...", "pending"], "id": 1, "jsonrpc": "2.0" } // Response { "id": 1, "jsonrpc": "2.0", "result": "0x5" } ``` ### Behavior Notes #### Pending Tag Usage * When `"pending"` is used, the method queries preconfirmed state from the flashblocks cache * If no preconfirmed data is available, falls back to latest confirmed state * For non-pending queries, behaves identically to standard Ethereum JSON-RPC #### Error Handling * Returns standard JSON-RPC error responses for invalid requests * Returns `null` for non-existent transactions or blocks * Falls back to standard behavior when flashblocks are disabled or unavailable ## Derivation [g-derivation]: ../../reference/glossary.md#l2-chain-derivation [g-payload-attr]: ../../reference/glossary.md#payload-attributes [g-block]: ../../reference/glossary.md#block [g-exec-engine]: ../../reference/glossary.md#execution-engine [g-reorg]: ../../reference/glossary.md#chain-re-organization [g-receipts]: ../../reference/glossary.md#receipt [g-deposit-contract]: ../../reference/glossary.md#deposit-contract [g-deposited]: ../../reference/glossary.md#deposited-transaction [g-l1-attr-deposit]: ../../reference/glossary.md#l1-attributes-deposited-transaction [g-l1-origin]: ../../reference/glossary.md#l1-origin [g-user-deposited]: ../../reference/glossary.md#user-deposited-transaction [g-deposits]: ../../reference/glossary.md#deposits [g-sequencing]: ../../reference/glossary.md#sequencing [g-sequencer]: ../../reference/glossary.md#sequencer [g-sequencing-epoch]: ../../reference/glossary.md#sequencing-epoch [g-sequencing-window]: ../../reference/glossary.md#sequencing-window [g-sequencer-batch]: ../../reference/glossary.md#sequencer-batch [g-l2-genesis]: 
../../reference/glossary.md#l2-genesis-block [g-l2-chain-inception]: ../../reference/glossary.md#l2-chain-inception [g-l2-genesis-block]: ../../reference/glossary.md#l2-genesis-block [g-batcher-transaction]: ../../reference/glossary.md#batcher-transaction [g-avail-provider]: ../../reference/glossary.md#data-availability-provider [g-batcher]: ../../reference/glossary.md#batcher [g-l2-output]: ../../reference/glossary.md#l2-output-root [g-fault-proof]: ../../reference/glossary.md#fault-proof [g-channel]: ../../reference/glossary.md#channel [g-channel-frame]: ../../reference/glossary.md#channel-frame [g-rollup-node]: ../../reference/glossary.md#rollup-node [g-block-time]: ../../reference/glossary.md#block-time [g-time-slot]: ../../reference/glossary.md#time-slot [g-consolidation]: ../../reference/glossary.md#unsafe-block-consolidation [g-safe-l2-head]: ../../reference/glossary.md#safe-l2-head [g-safe-l2-block]: ../../reference/glossary.md#safe-l2-block [g-unsafe-l2-head]: ../../reference/glossary.md#unsafe-l2-head [g-unsafe-l2-block]: ../../reference/glossary.md#unsafe-l2-block [g-unsafe-sync]: ../../reference/glossary.md#unsafe-sync [g-deposit-tx-type]: ../../reference/glossary.md#deposited-transaction-type [g-finalized-l2-head]: ../../reference/glossary.md#finalized-l2-head [g-system-config]: ../../reference/glossary.md#system-configuration ### Overview > **Note** the following assumes a single sequencer and batcher. In the future, the design will be adapted to > accommodate multiple such entities. [L2 chain derivation][g-derivation] — deriving L2 [blocks][g-block] from L1 data — is one of the main responsibilities of the [rollup node][g-rollup-node], both in validator mode, and in sequencer mode (where derivation acts as a sanity check on sequencing, and enables detecting L1 chain [re-organizations][g-reorg]). The L2 chain is derived from the L1 chain. 
In particular, each L1 block following [L2 chain inception][g-l2-chain-inception] is mapped to a [sequencing epoch][g-sequencing-epoch] comprising at least one L2 block. Each L2 block belongs to exactly one epoch, and we call the corresponding L1 block its [L1 origin][g-l1-origin]. The epoch's number equals that of its L1 origin block. To derive the L2 blocks of epoch number `E`, we need the following inputs: * L1 blocks in the range `[E, E + SWS)`, called the [sequencing window][g-sequencing-window] of the epoch, and `SWS` the sequencing window size. (Note that sequencing windows overlap.) * [Batcher transactions][g-batcher-transaction] from blocks in the sequencing window. * These transactions allow us to reconstruct the epoch's [sequencer batches][g-sequencer-batch], each of which will produce one L2 block. Note that: * The L1 origin will never contain any data needed to construct sequencer batches since each batch [must contain](#batch-format) the L1 origin hash. * An epoch may have no sequencer batches. * [Deposits][g-deposits] made in the L1 origin (in the form of events emitted by the [deposit contract][g-deposit-contract]). * L1 block attributes from the L1 origin (to derive the [L1 attributes deposited transaction][g-l1-attr-deposit]). * The state of the L2 chain after the last L2 block of the previous epoch, or the [L2 genesis state][g-l2-genesis] if `E` is the first epoch. To derive the whole L2 chain from scratch, we start with the [L2 genesis state][g-l2-genesis] and the [L2 genesis block][g-l2-genesis-block] as the first L2 block. We then derive L2 blocks from each epoch in order, starting at the first L1 block following [L2 chain inception][g-l2-chain-inception]. Refer to the [Architecture section][architecture] for more information on how we implement this in practice. The L2 chain may contain pre-Bedrock history, but the L2 genesis here refers to the Bedrock L2 genesis block. 
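Concretely, the sequencing window of epoch `E` is just the half-open range of L1 block numbers `[E, E + SWS)`, so consecutive windows overlap in all but one block; a minimal sketch:

```python
def sequencing_window(epoch: int, sws: int) -> range:
    """L1 block numbers forming the sequencing window of `epoch`
    (half-open range [epoch, epoch + sws))."""
    return range(epoch, epoch + sws)
```

With an illustrative `SWS` of 4, epoch 10's window covers L1 blocks 10 through 13, and epochs 10 and 11 share blocks 11 through 13.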
Each L2 `block` with origin `l1_origin` is subject to the following constraints (whose values are denominated in seconds): * `block.timestamp = prev_l2_timestamp + l2_block_time` * `prev_l2_timestamp` is the timestamp of the L2 block immediately preceding this one. If there is no preceding block, then this is the genesis block, and its timestamp is explicitly specified. * `l2_block_time` is a configurable parameter of the time between L2 blocks (2s on Optimism). * `l1_origin.timestamp <= block.timestamp <= max_l2_timestamp`, where * `max_l2_timestamp = max(l1_origin.timestamp + max_sequencer_drift, prev_l2_timestamp + l2_block_time)` * `max_sequencer_drift` is a configurable parameter that bounds how far the sequencer can get ahead of the L1. Finally, each epoch must have at least one L2 block. The first constraint means there must be an L2 block every `l2_block_time` seconds following L2 chain inception. The second constraint ensures that an L2 block timestamp never precedes its L1 origin timestamp, and is never more than `max_sequencer_drift` ahead of it, except in the unusual case where that bound would prohibit an L2 block from being produced every `l2_block_time` seconds. (Such cases might arise, for example, under a proof-of-work L1 that sees a period of rapid L1 block production.) In such cases, the sequencer enforces `len(batch.transactions) == 0` while `max_sequencer_drift` is exceeded. See [Batch Queue](#batch-queue) for more details. The final requirement that each epoch must have at least one L2 block ensures that all relevant information from the L1 (e.g. deposits) is represented in the L2, even if the epoch has no sequencer batches. Post-merge, Ethereum has a fixed 12s [block time][g-block-time], though some slots can be skipped. Under a 2s L2 block time, we thus expect each epoch to typically contain `12/2 = 6` L2 blocks.
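The timestamp constraints above can be checked mechanically; a minimal sketch (the default parameter values are illustrative, matching the 2s block time and post-Fjord 1800s drift mentioned in this document):

```python
def valid_l2_timestamp(block_ts: int, prev_l2_ts: int, l1_origin_ts: int,
                       l2_block_time: int = 2,
                       max_sequencer_drift: int = 1800) -> bool:
    """Check the L2 block timestamp constraints described above (seconds)."""
    # Constraint 1: blocks land on a fixed l2_block_time grid.
    if block_ts != prev_l2_ts + l2_block_time:
        return False
    # Constraint 2: never behind the L1 origin, and never further ahead
    # than max_l2_timestamp allows.
    max_l2_ts = max(l1_origin_ts + max_sequencer_drift,
                    prev_l2_ts + l2_block_time)
    return l1_origin_ts <= block_ts <= max_l2_ts
```

Note how the `max(...)` term encodes the exception in the text: even when `l1_origin_ts + max_sequencer_drift` has been exceeded, a block at `prev_l2_ts + l2_block_time` is still valid, which is exactly the case where the sequencer must produce empty batches.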
The sequencer will however produce bigger epochs in order to maintain liveness in case of either a skipped slot on the L1 or a temporary loss of connection to it. For the lost connection case, smaller epochs might be produced after the connection was restored to keep L2 timestamps from drifting further and further ahead.

### Eager Block Derivation

Deriving an L2 block requires that we have constructed its sequencer batch and derived all L2 blocks and state updates prior to it. This means we can typically derive the L2 blocks of an epoch *eagerly* without waiting on the full sequencing window. The full sequencing window is required before derivation only in the very worst case where some portion of the sequencer batch for the first block of the epoch appears in the very last L1 block of the window. Note that this only applies to *block* derivation. Sequencer batches can still be derived and tentatively queued without deriving blocks from them.

### Protocol Parameters

The following table gives an overview of some protocol parameters, and how they are affected by protocol upgrades.

| Parameter                      | Bedrock (default) value | Latest (default) value | Changes | Notes |
| ------------------------------ | ----------------------- | ---------------------- | ------- | ----- |
| `max_sequencer_drift`          | 600                     | 1800                   | [Fjord](../../upgrades/fjord/derivation.md#constant-maximum-sequencer-drift) | Changed from a chain parameter to a constant with Fjord. |
| `MAX_RLP_BYTES_PER_CHANNEL`    | 10,000,000              | 100,000,000            | [Fjord](../../upgrades/fjord/derivation.md#increasing-max_rlp_bytes_per_channel-and-max_channel_bank_size) | Constant increased with Fjord. |
| `MAX_CHANNEL_BANK_SIZE`        | 100,000,000             | 1,000,000,000          | [Fjord](../../upgrades/fjord/derivation.md#increasing-max_rlp_bytes_per_channel-and-max_channel_bank_size) | Constant increased with Fjord. |
| `MAX_SPAN_BATCH_ELEMENT_COUNT` | 10,000,000              | 10,000,000             | Effectively introduced in [Fjord](../../upgrades/fjord/derivation.md#increasing-max_rlp_bytes_per_channel-and-max_channel_bank_size) | Number of elements |

### System Configuration

The `SystemConfig` is an L1 contract that emits rollup configuration changes as log events. The derivation pipeline picks up these events and applies them to L2 state, ensuring every node converges on the same configuration at the same L2 block height. `SystemConfig` is the source of truth for configuration values within Base.

#### System Config Updates

System config updates are signaled through the `ConfigUpdate(uint256,uint8,bytes)` event. The event structure includes:

* The first topic determines the version
* The second topic determines the type of update
* The remaining event data encodes the configuration update

In version `0`, the following update types are supported:

* Type `0`: `batcherHash` overwrite, as `bytes32` payload
* Type `1`: Pre-Ecotone, `overhead` and `scalar` overwrite, as two packed `uint256` entries. After the Ecotone upgrade, `overhead` is ignored and `scalar` is interpreted as a versioned encoding that updates `baseFeeScalar` and `blobBaseFeeScalar`
* Type `2`: `gasLimit` overwrite, as `uint64` payload
* Type `3`: `unsafeBlockSigner` overwrite, as `address` payload
* Type `4`: `eip1559Params` overwrite, as `uint256` payload encoding denomination and elasticity
* Type `5`: `operatorFeeParams` overwrite, as `uint256` payload encoding scalar and constant
* Type `6`: `minBaseFee` overwrite, as `uint64` payload
* Type `7`: `daFootprintGasScalar` overwrite, as `uint16` payload

If a System Config Update cannot be parsed for any reason, it is not applied and is instead skipped.
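To make the update dispatch concrete, here is a hypothetical sketch of applying a version-0 `ConfigUpdate` for two of the update types. The payload decoding shown (a 32-byte ABI word holding the value) is an assumption for illustration, and unparseable or unknown updates are skipped, as the text requires:

```python
def apply_config_update(cfg: dict, version: int, update_type: int,
                        data: bytes) -> dict:
    """Apply one ConfigUpdate event to a config mapping; return the
    (possibly unchanged) config. Only types 0 and 2 are sketched here."""
    if version != 0:
        return cfg                    # unknown version: skip, do not apply
    new_cfg = dict(cfg)
    if update_type == 0:              # batcherHash overwrite, bytes32 payload
        new_cfg["batcherHash"] = data[:32]
    elif update_type == 2:            # gasLimit overwrite, uint64 payload
        new_cfg["gasLimit"] = int.from_bytes(data[-8:], "big")
    else:
        return cfg                    # unhandled or unparseable: skip
    return new_cfg
```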
***

## Batch Submission

### Sequencing & Batch Submission Overview

The [sequencer][g-sequencer] accepts L2 transactions from users. It is responsible for building blocks out of these. For each such block, it also creates a corresponding [sequencer batch][g-sequencer-batch]. It is also responsible for submitting each batch to a [data availability provider][g-avail-provider] (e.g. Ethereum calldata), which it does via its [batcher][g-batcher] component.

The difference between an L2 block and a batch is subtle but important: the block includes an L2 state root, whereas the batch only commits to transactions at a given L2 timestamp (equivalently: L2 block number). A block also includes a reference to the previous block (\*).

(\*) This matters in an edge case where an L1 reorg would occur and a batch would be reposted to the L1 chain but not the preceding batch, whereas the predecessor of an L2 block cannot possibly change.

This means that even if the sequencer applies a state transition incorrectly, the transactions in the batch will still be considered part of the canonical L2 chain. Batches are still subject to validity checks (i.e. they have to be encoded correctly), and so are individual transactions within the batch (e.g. signatures have to be valid). Invalid batches and invalid individual transactions within an otherwise valid batch are discarded by correct nodes.

If the sequencer applies a state transition incorrectly and posts an [output root][g-l2-output], then this output root will be incorrect. The incorrect output root will be challenged by a [proof][g-fault-proof], then replaced by a correct output root **for the existing sequencer batches.**

Refer to the [Batch Submission specification][batcher-spec] for more information.
[batcher-spec]: ../batcher.md

### Batch Submission Wire Format

[wire-format]: #batch-submission-wire-format

Batch submission is closely tied to L2 chain derivation because the derivation process must decode the batches that have been encoded for the purpose of batch submission.

The [batcher][g-batcher] submits [batcher transactions][g-batcher-transaction] to a [data availability provider][g-avail-provider]. These transactions contain one or multiple [channel frames][g-channel-frame], which are chunks of data belonging to a [channel][g-channel].

A [channel][g-channel] is a sequence of [sequencer batches][g-sequencer-batch] (for any L2 blocks) compressed together. The reason to group multiple batches together is simply to obtain a better compression rate, hence reducing data availability costs.

Channels might be too large to fit in a single [batcher transaction][g-batcher-transaction], hence we need to split them into chunks known as [channel frames][g-channel-frame]. A single batcher transaction can also carry multiple frames (belonging to the same or to different channels).

This design gives us maximum flexibility in how we aggregate batches into channels, and split channels over batcher transactions. It notably allows us to maximize data utilization in a batcher transaction: for instance it allows us to pack the final (small) frame of one channel with one or more frames from the next channel.

Also note that we use a streaming compression scheme, and we do not need to know how many batches a channel will end up containing when we start a channel, or even as we send the first frames in the channel. And by splitting channels across multiple data transactions, the L2 can have larger block data than the data-availability layer may support.

All of this is illustrated in the following diagram. Explanations below.

![batch derivation chain diagram](/static/assets/batch-deriv-chain.svg)

The first line represents L1 blocks with their numbers.
The boxes under the L1 blocks represent [batcher transactions][g-batcher-transaction] included within the block. The squiggles under the L1 blocks represent [deposits][g-deposits] (more specifically, events emitted by the [deposit contract][g-deposit-contract]).

Each colored chunk within the boxes represents a [channel frame][g-channel-frame]. So `A` and `B` are [channels][g-channel] whereas `A0`, `A1`, `B0`, `B1`, `B2` are frames. Notice that:

* multiple channels are interleaved
* frames do not need to be transmitted in order
* a single batcher transaction can carry frames from multiple channels

In the next line, the rounded boxes represent individual [sequencer batches][g-sequencer-batch] that were extracted from the channels. The four blue/purple/pink batches were derived from channel `A` while the others were derived from channel `B`. These batches are here represented in the order they were decoded from channels (in this case `B` is decoded first).

> **Note** The caption here says "Channel B was seen first and will be decoded into batches first", but this is not a
> requirement. For instance, it would be equally acceptable for an implementation to peek into the channels and decode
> the one that contains the oldest batches first.

The rest of the diagram is conceptually distinct from the first part and illustrates L2 chain derivation after the channels have been reordered.

The first line shows batcher transactions. Note that in this case, there exists an ordering of the batches that makes all frames within the channels appear contiguously. This is not true in general. For instance, in the second transaction, the position of `A1` and `B0` could have been inverted for exactly the same result, with no changes needed in the rest of the diagram.

The second line shows the reconstructed channels in proper order. The third line shows the batches extracted from the channel.
Because the channels are ordered and the batches within a channel are sequential, this means the batches are ordered too. The fourth line shows the [L2 block][g-block] derived from each batch. Note that we have a 1-1 batch to block mapping here but, as we'll see later, empty blocks that do not map to batches can be inserted in cases where there are "gaps" in the batches posted on L1.

The fifth line shows the [L1 attributes deposited transaction][g-l1-attr-deposit] which, within each L2 block, records information about the L1 block that matches the L2 block's epoch. The first number denotes the epoch/L1 block number, while the second number (the "sequence number") denotes the position within the epoch.

Finally, the sixth line shows [user-deposited transactions][g-user-deposited] derived from the [deposit contract][g-deposit-contract] event mentioned earlier.

Note the `101-0` L1 attributes transaction on the bottom right of the diagram. Its presence there is only possible if (1) frame `B2` indicates that it is the last frame within the channel and (2) no empty blocks must be inserted.

The diagram does not specify the sequencing window size in use, but from this we can infer that it must be at least 4 blocks, because the last frame of channel `A` appears in block 102, but belongs to epoch 99.

As for the comment on "security types", it explains the classification of blocks as used on L1 and L2.

* [Unsafe L2 blocks][g-unsafe-l2-block]: refer to blocks that have not yet been derived from L1 data.
* [Safe L2 blocks][g-safe-l2-block]: refer to blocks that can be derived from the currently canonical L1 chain.
* Finalized L2 blocks: refer to blocks that have been derived from [finalized][g-finalized-l2-head] L1 data.

These security levels map to the `headBlockHash`, `safeBlockHash` and `finalizedBlockHash` values transmitted when interacting with the [execution-engine API][exec-engine].

#### Batcher Transaction Format

Batcher transactions are encoded as `version_byte ++ rollup_payload` (where `++` denotes concatenation).
| `version_byte` | `rollup_payload` |
| -------------- | ------------------------------------------------------------------ |
| 0              | `frame ...` (one or more frames, concatenated)                     |
| 1              | `da_commitment` (experimental data-availability commitment format) |

Unknown versions make the batcher transaction invalid (it must be ignored by the rollup node). All frames in a batcher transaction must be parseable. If any one frame fails to parse, all frames in the transaction are rejected.

Batch transactions are authenticated by verifying that the `to` address of the transaction matches the batch inbox address, and the `from` address matches the batch-sender address in the [system configuration][g-system-config] at the time of the L1 block that the transaction data is read from.

#### Frame Format

A [channel frame][g-channel-frame] is encoded as:

```text
frame = channel_id ++ frame_number ++ frame_data_length ++ frame_data ++ is_last

channel_id        = bytes16
frame_number      = uint16
frame_data_length = uint32
frame_data        = bytes
is_last           = bool
```

Where `uint32` and `uint16` are big-endian unsigned integers. Type names should be interpreted and encoded according to [the Solidity ABI][solidity-abi].

[solidity-abi]: https://docs.soliditylang.org/en/v0.8.16/abi-spec.html

All data in a frame is fixed-size, except the `frame_data`. The fixed overhead is `16 + 2 + 4 + 1 = 23 bytes`. Fixed-size frame metadata avoids a circular dependency with the target total data length, to simplify packing of frames with varying content length.

where:

* `channel_id` is an opaque identifier for the channel. It should not be reused and is suggested to be random; however, outside of timeout rules, it is not checked for validity
* `frame_number` identifies the index of the frame within the channel
* `frame_data_length` is the length of `frame_data` in bytes. It is capped to 1,000,000 bytes.
* `frame_data` is a sequence of bytes belonging to the channel, logically after the bytes from the previous frames
* `is_last` is a single byte with a value of 1 if the frame is the last in the channel, 0 if there are more frames in the channel. Any other value makes the frame invalid (it must be ignored by the rollup node).

#### Channel Format

[channel-format]: #channel-format

A channel is encoded by applying a streaming compression algorithm to a list of batches:

```text
encoded_batches = []
for batch in batches:
    encoded_batches = encoded_batches ++ batch.encode()
rlp_batches = rlp_encode(encoded_batches)
```

where:

* `batches` is the input, a sequence of batches each with a byte-encoder function `.encode()` as per the next section ("Batch Encoding")
* `encoded_batches` is a byte array: the concatenation of the encoded batches
* `rlp_batches` is the rlp encoding of the concatenated encoded batches

```text
channel_encoding = zlib_compress(rlp_batches)
```

where zlib\_compress is the ZLIB algorithm (as specified in [RFC-1950][rfc1950]) with no dictionary.

[rfc1950]: https://www.rfc-editor.org/rfc/rfc1950.html

The Fjord upgrade introduces an additional [versioned channel encoding format](../../upgrades/fjord/derivation.md#brotli-channel-compression) to support alternate compression algorithms.

When decompressing a channel, we limit the amount of decompressed data to `MAX_RLP_BYTES_PER_CHANNEL` (defined in the [Protocol Parameters table](#protocol-parameters)), in order to avoid "zip-bomb" types of attack (where a small compressed input decompresses to a humongous amount of data). If the decompressed data exceeds the limit, derivation proceeds as though the channel contained only the first `MAX_RLP_BYTES_PER_CHANNEL` decompressed bytes. The limit is set on RLP decoding, so all batches that can be decoded in `MAX_RLP_BYTES_PER_CHANNEL` will be accepted even if the size of the channel is greater than `MAX_RLP_BYTES_PER_CHANNEL`.
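A decoder for the frame layout described earlier might look like the following sketch (big-endian fields per the spec; error handling is simplified, and the function name is ours):

```python
import struct

FRAME_OVERHEAD = 16 + 2 + 4 + 1  # channel_id + frame_number + length + is_last

def parse_frame(buf: bytes, offset: int = 0):
    """Parse one frame from `buf` at `offset`.
    Returns (frame_dict, next_offset); raises ValueError on malformed input."""
    channel_id = buf[offset:offset + 16]
    # frame_number (uint16) and frame_data_length (uint32), big-endian
    frame_number, data_len = struct.unpack_from(">HI", buf, offset + 16)
    if data_len > 1_000_000:
        raise ValueError("frame_data_length exceeds 1,000,000 bytes")
    start = offset + 22
    end = start + data_len
    if end >= len(buf):
        raise ValueError("truncated frame")
    frame_data = buf[start:end]
    is_last = buf[end]
    if is_last not in (0, 1):
        raise ValueError("invalid is_last byte")
    return ({"channel_id": channel_id, "frame_number": frame_number,
             "frame_data": frame_data, "is_last": bool(is_last)},
            end + 1)
```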
The exact requirement is that `length(input) <= MAX_RLP_BYTES_PER_CHANNEL`.

While the above pseudocode implies that all batches are known in advance, it is possible to perform streaming compression and decompression of RLP-encoded batches. This means it is possible to start including channel frames in a [batcher transaction][g-batcher-transaction] before we know how many batches (and how many frames) the channel will contain.

#### Batch Format

[batch-format]: #batch-format

Recall that a batch contains a list of transactions to be included in a specific L2 block.

A batch is encoded as `batch_version ++ content`, where `content` depends on the `batch_version`. Prior to the Delta upgrade, batches all have `batch_version` 0 and are encoded as described below.

| `batch_version` | `content` |
| --------------- | ---------------------------------------------------------------------------------- |
| 0               | `rlp_encode([parent_hash, epoch_number, epoch_hash, timestamp, transaction_list])` |

where:

* `batch_version` is a single byte, prefixed before the RLP contents, similar to transaction typing.
* `rlp_encode` is a function that encodes a batch according to the [RLP format], and `[x, y, z]` denotes a list containing items `x`, `y` and `z`
* `parent_hash` is the block hash of the previous L2 block
* `epoch_number` and `epoch_hash` are the number and hash of the L1 block corresponding to the [sequencing epoch][g-sequencing-epoch] of the L2 block
* `timestamp` is the timestamp of the L2 block
* `transaction_list` is an RLP-encoded list of [EIP-2718] encoded transactions.

[RLP format]: https://ethereum.org/en/developers/docs/data-structures-and-encoding/rlp/

[EIP-2718]: https://eips.ethereum.org/EIPS/eip-2718

The Delta upgrade introduced an additional batch type, [span batches][span-batches].

[span-batches]: ../../upgrades/delta/span-batches.md

Unknown versions make the batch invalid (it must be ignored by the rollup node), as do malformed contents.
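For illustration, a version-0 batch can be assembled as follows. This is a sketch with a minimal RLP encoder (bytes and lists only); the integer-to-bytes helper is an assumption for illustration, not a normative encoding rule:

```python
def _len_prefix(n: int, offset: int) -> bytes:
    # RLP length prefix: short form below 56 bytes, long form otherwise.
    if n < 56:
        return bytes([offset + n])
    n_bytes = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(n_bytes)]) + n_bytes

def rlp_encode(item):
    # Minimal RLP encoder, sufficient for this sketch (bytes and lists only).
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item
        return _len_prefix(len(item), 0x80) + item
    payload = b"".join(rlp_encode(x) for x in item)
    return _len_prefix(len(payload), 0xC0) + payload

def encode_batch_v0(parent_hash: bytes, epoch_number: int, epoch_hash: bytes,
                    timestamp: int, transaction_list: list) -> bytes:
    # batch_version 0x00 prefixed before the RLP contents, per the table above.
    def u(x):  # big-endian integer with no leading zero bytes
        return x.to_bytes((x.bit_length() + 7) // 8, "big") if x else b""
    return b"\x00" + rlp_encode([parent_hash, u(epoch_number), epoch_hash,
                                 u(timestamp), transaction_list])
```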
> **Note** if the batch version and contents can be RLP decoded correctly but extra content exists beyond the batch,
> the additional data may be ignored during parsing. Data *between* RLP encoded batches may not be ignored
> (as they are seen as malformed batches), but if a batch can be fully described by the RLP decoding,
> extra content does not invalidate the decoded batch.

The `epoch_number` and the `timestamp` must also respect the constraints listed in the [Batch Queue][batch-queue] section, otherwise the batch is considered invalid and will be ignored.

***

## Architecture

[architecture]: #architecture

The above describes the general encodings used in L2 chain derivation, in particular how batches are encoded within [batcher transactions][g-batcher-transaction].

This section describes how the L2 chain is produced from the L1 batches using a pipeline architecture. A verifier may implement this differently, but must be semantically equivalent to not diverge from the L2 chain.

### L2 Chain Derivation Pipeline

Our architecture decomposes the derivation process into a pipeline made up of the following stages:

1. L1 Traversal
2. L1 Retrieval
3. Frame Queue
4. Channel Bank
5. Channel Reader (Batch Decoding)
6. Batch Queue
7. Payload Attributes Derivation
8. Engine Queue

The data flows from the start (outer) of the pipeline towards the end (inner). From the innermost stage the data is pulled from the outermost stage.

However, data is *processed* in reverse order. Meaning that if there is any data to be processed in the last stage, it will be processed first. Processing proceeds in "steps" that can be taken at each stage. We try to take as many steps as possible in the last (most inner) stage before taking any steps in its outer stage, etc.

This ensures that we use the data we already have before pulling more data and minimizes the latency of data traversing the derivation pipeline.

Each stage can maintain its own inner state as necessary.
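The innermost-first processing order can be modeled with a toy scheduler (a sketch; the class and field names are ours, not from the spec):

```python
class Stage:
    """Toy pipeline stage: buffers pending work items."""
    def __init__(self, name: str):
        self.name = name
        self.pending = []

def next_stage_to_step(stages):
    """stages[0] is outermost (L1 Traversal), stages[-1] innermost
    (Engine Queue). Step the innermost stage that has buffered data;
    only pull new L1 data when every stage is drained."""
    for stage in reversed(stages):
        if stage.pending:
            return stage
    return stages[0]
```

This reflects the text above: buffered data drains toward the engine before any new L1 data is pulled in.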
In particular, each stage maintains an L1 block reference (number + hash) to the latest L1 block such that all data originating from previous blocks has been fully processed, and the data from that block is being or has been processed. This allows the innermost stage to account for finalization of the L1 data-availability used to produce the L2 chain, to reflect in the L2 chain forkchoice when the L2 chain inputs become irreversible.

Let's briefly describe each stage of the pipeline.

#### L1 Traversal

In the *L1 Traversal* stage, we simply read the header of the next L1 block. In normal operations, these will be new L1 blocks as they get created, though we can also read old blocks while syncing, or in case of an L1 [re-org][g-reorg].

Upon traversal of the L1 block, the [system configuration][g-system-config] copy used by the L1 retrieval stage is updated, such that the batch-sender authentication is always accurate to the exact L1 block that is read by the stage.

#### L1 Retrieval

In the *L1 Retrieval* stage, we read the block we get from the outer stage (L1 traversal), and extract data from its [batcher transactions][g-batcher-transaction]. A batcher transaction is one with the following properties:

* The [`to`] field is equal to the configured batcher inbox address.
* The transaction type is one of `0`, `1`, `2`, `3`, or `0x7e` (L2 [Deposited transaction type][g-deposit-tx-type], to support force-inclusion of batcher transactions on Base).
* The sender, as recovered from the transaction signature (`v`, `r`, and `s`), is the batcher address loaded from the system config matching the L1 block of the data.

Each batcher transaction is versioned and contains a series of [channel frames][g-channel-frame] to be read by the Frame Queue, see [Batch Submission Wire Format][wire-format]. Each batcher transaction in the block is processed in the order they appear in the block by passing its calldata on to the next phase.
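The three filtering properties can be sketched as a predicate. Here the transaction is modeled as a plain dict with a pre-recovered sender; the field names are ours, for illustration only:

```python
DEPOSIT_TX_TYPE = 0x7E
ALLOWED_TX_TYPES = {0, 1, 2, 3, DEPOSIT_TX_TYPE}

def is_batcher_tx(tx: dict, inbox_address: str, batcher_address: str) -> bool:
    """True if `tx` qualifies as a batcher transaction: correct inbox,
    allowed transaction type, and sender matching the system-config
    batcher address for the L1 block the data was read from."""
    return (tx.get("to") == inbox_address
            and tx.get("type") in ALLOWED_TX_TYPES
            and tx.get("sender") == batcher_address)
```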
[`to`]: https://github.com/ethereum/execution-specs/blob/3fe6514f2d9d234e760d11af883a47c1263eff51/src/ethereum/frontier/fork_types.py#L52C31-L52C31

#### Frame Queue

The Frame Queue buffers one data-transaction at a time, decoded into [channel frames][g-channel-frame], to be consumed by the next stage. See [Batcher transaction format](#batcher-transaction-format) and [Frame format](#frame-format) specifications.

#### Channel Bank

The *Channel Bank* stage is responsible for managing buffering from the channel bank that was written to by the L1 retrieval stage. A step in the channel bank stage tries to read data from channels that are "ready".

Channels are currently fully buffered until read or dropped; streaming channels may be supported in a future version of the ChannelBank.

To bound resource usage, the Channel Bank prunes based on channel size, and times out old channels.

Channels are recorded in FIFO order in a structure called the *channel queue*. A channel is added to the channel queue the first time a frame belonging to the channel is seen.

##### Pruning

After successfully inserting a new frame, the ChannelBank is pruned: channels are dropped in FIFO order, until `total_size <= MAX_CHANNEL_BANK_SIZE`, where:

* `total_size` is the sum of the sizes of each channel, which is the sum of all buffered frame data of the channel, with an additional frame-overhead of `200` bytes per frame.
* `MAX_CHANNEL_BANK_SIZE` is a protocol constant defined in the [Protocol Parameters table](#protocol-parameters).

##### Timeouts

The L1 origin that the channel was opened in is tracked with the channel as `channel.open_l1_block`, and determines the maximum span of L1 blocks that the channel data is retained for, before being pruned.

A channel is timed out if: `current_l1_block.number > channel.open_l1_block.number + CHANNEL_TIMEOUT`, where:

* `current_l1_block` is the L1 origin that the stage is currently traversing.
* `CHANNEL_TIMEOUT` is a rollup-configurable parameter, expressed in number of L1 blocks.

New frames for timed-out channels are dropped instead of buffered.

##### Reading

Upon reading, while the first opened channel is timed-out, remove it from the channel-bank.

Prior to the Canyon network upgrade, once the first opened channel, if any, is not timed-out and is ready, then it is read and removed from the channel-bank. After the Canyon network upgrade, the entire channel bank is scanned in FIFO order (by open time) & the first ready (i.e. not timed-out) channel will be returned. The Canyon behavior will activate when frames from an L1 block whose timestamp is greater than or equal to the Canyon time first enter the channel queue.

A channel is ready if:

* The channel is closed
* The channel has a contiguous sequence of frames until the closing frame

If no channel is ready, the next frame is read and ingested into the channel bank.

##### Loading frames

When a channel ID referenced by a frame is not already present in the Channel Bank, a new channel is opened, tagged with the current L1 block, and appended to the channel-queue.

Frame insertion conditions:

* New frames matching timed-out channels that have not yet been pruned from the channel-bank are dropped.
* Duplicate frames (by frame number) for frames that have not been pruned from the channel-bank are dropped.
* Duplicate closes (new frame `is_last == 1`, but the channel has already seen a closing frame and has not yet been pruned from the channel-bank) are dropped.

If a frame is closing (`is_last == 1`) any existing higher-numbered frames are removed from the channel.

Note that while this allows channel IDs to be reused once they have been pruned from the channel-bank, it is recommended that batcher implementations use unique channel IDs.

#### Channel Reader (Batch Decoding)

In this stage, we decompress the channel we pull from the last stage, and then parse [batches][g-sequencer-batch] from the decompressed byte stream.
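The Channel Bank's pruning, timeout, and duplicate-frame rules described above can be put together in a toy model (structure and names are ours; closing-frame handling and readiness checks are omitted for brevity):

```python
from collections import OrderedDict

FRAME_OVERHEAD = 200  # per-frame overhead used when sizing the bank

class ChannelBank:
    def __init__(self, max_size: int, channel_timeout: int):
        self.channels = OrderedDict()  # channel_id -> state, FIFO order
        self.max_size = max_size
        self.timeout = channel_timeout

    def total_size(self) -> int:
        return sum(len(data) + FRAME_OVERHEAD
                   for ch in self.channels.values()
                   for data in ch["frames"].values())

    def ingest(self, channel_id, frame_number, data, current_l1_block):
        # Open a new channel on first sight, tagged with the current L1 block.
        ch = self.channels.setdefault(
            channel_id, {"open_l1_block": current_l1_block, "frames": {}})
        timed_out = current_l1_block > ch["open_l1_block"] + self.timeout
        if timed_out or frame_number in ch["frames"]:
            return  # drop: channel timed out, or duplicate frame number
        ch["frames"][frame_number] = data
        while self.total_size() > self.max_size:
            self.channels.popitem(last=False)  # prune channels in FIFO order
```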
See [Channel Format][channel-format] and [Batch Format][batch-format] for decompression and decoding specification.

#### Batch Queue

[batch-queue]: #batch-queue

During the *Batch Buffering* stage, we reorder batches by their timestamps. If batches are missing for some [time slots][g-time-slot] and a valid batch with a higher timestamp exists, this stage also generates empty batches to fill the gaps.

Batches are pushed to the next stage whenever there is one sequential batch directly following the timestamp of the current [safe L2 head][g-safe-l2-head] (the last block that can be derived from the canonical L1 chain). The parent hash of the batch must also match the hash of the current safe L2 head.

Note that the presence of any gaps in the batches derived from L1 means that this stage will need to buffer for a whole [sequencing window][g-sequencing-window] before it can generate empty batches (because the missing batch(es) could have data in the last L1 block of the window in the worst case).

A batch can have 4 different forms of validity:

* `drop`: the batch is invalid, and will always be in the future, unless we reorg. It can be removed from the buffer.
* `accept`: the batch is valid and should be processed.
* `undecided`: we lack the L1 information needed to proceed with batch filtering.
* `future`: the batch may be valid, but cannot be processed yet and should be checked again later.

The batches are processed in order of their inclusion on L1: if multiple batches can be `accept`-ed the first is applied. An implementation can defer `future` batches to a later derivation step to reduce validation work.

A batch's validity is determined as follows:

Definitions:

* `batch` as defined in the [Batch format section][batch-format].
* `epoch = safe_l2_head.l1_origin` a [L1 origin][g-l1-origin] coupled to the batch, with properties: `number` (L1 block number), `hash` (L1 block hash), and `timestamp` (L1 block timestamp).
* `inclusion_block_number` is the L1 block number when `batch` was first *fully* derived, i.e. decoded and output by the previous stage.
* `next_timestamp = safe_l2_head.timestamp + block_time` is the expected L2 timestamp the next batch should have, see [block time information][g-block-time].
* `next_epoch` may not be known yet, but would be the L1 block after `epoch` if available.
* `batch_origin` is either `epoch` or `next_epoch`, depending on validation.

Note that processing of a batch can be deferred until `batch.timestamp <= next_timestamp`, since `future` batches will have to be retained anyway.

Rules, in validation order:

* `batch.timestamp > next_timestamp` -> `future`: i.e. the batch must be ready to process.
* `batch.timestamp < next_timestamp` -> `drop`: i.e. the batch must not be too old.
* `batch.parent_hash != safe_l2_head.hash` -> `drop`: i.e. the parent hash must be equal to the L2 safe head block hash.
* `batch.epoch_num + sequence_window_size < inclusion_block_number` -> `drop`: i.e. the batch must be included timely.
* `batch.epoch_num < epoch.number` -> `drop`: i.e. the batch origin is not older than that of the L2 safe head.
* `batch.epoch_num == epoch.number`: define `batch_origin` as `epoch`.
* `batch.epoch_num == epoch.number+1`:
  * If `next_epoch` is not known -> `undecided`: i.e. a batch that changes the L1 origin cannot be processed until we have the L1 origin data.
  * If known, then define `batch_origin` as `next_epoch`
* `batch.epoch_num > epoch.number+1` -> `drop`: i.e. the L1 origin cannot change by more than one L1 block per L2 block.
* `batch.epoch_hash != batch_origin.hash` -> `drop`: i.e. a batch must reference a canonical L1 origin, to prevent batches from being replayed onto unexpected L1 chains.
* `batch.timestamp < batch_origin.time` -> `drop`: enforce the min L2 timestamp rule.
* `batch.timestamp > batch_origin.time + max_sequencer_drift`: enforce the L2 timestamp drift rule, but with exceptions to preserve the above min L2 timestamp invariant:
  * `len(batch.transactions) == 0`:
    * `epoch.number == batch.epoch_num`: this implies the batch does not already advance the L1 origin, and must thus be checked against `next_epoch`.
      * If `next_epoch` is not known -> `undecided`: without the next L1 origin we cannot yet determine if the time invariant could have been kept.
      * If `batch.timestamp >= next_epoch.time` -> `drop`: the batch could have adopted the next L1 origin without breaking the `L2 time >= L1 time` invariant.
  * `len(batch.transactions) > 0` -> `drop`: when exceeding the sequencer time drift, never allow the sequencer to include transactions.
* `batch.transactions`: `drop` if the `batch.transactions` list contains a transaction that is invalid or derived by other means exclusively:
  * any transaction that is empty (zero length byte string)
  * any [deposited transactions][g-deposit-tx-type] (identified by the transaction type prefix byte)
  * any transaction of a future type > 2 (note that [Isthmus adds support](../../upgrades/isthmus/derivation.md#activation) for `SetCode` transactions of type 4)

If no batch can be `accept`-ed, and the stage has completed buffering of all batches that can fully be read from the L1 block at height `epoch.number + sequence_window_size`, and the `next_epoch` is available, then an empty batch can be derived with the following properties:

* `parent_hash = safe_l2_head.hash`
* `timestamp = next_timestamp`
* `transactions` is empty, i.e. no sequencer transactions. Deposited transactions may be added in the next stage.
* If `next_timestamp < next_epoch.time`: the current L1 origin is repeated, to preserve the L2 time invariant.
  * `epoch_num = epoch.number`
  * `epoch_hash = epoch.hash`
* If the batch is the first batch of the epoch, that epoch is used instead of advancing the epoch, to ensure that there is at least one L2 block per epoch.
  * `epoch_num = epoch.number`
  * `epoch_hash = epoch.hash`
* Otherwise,
  * `epoch_num = next_epoch.number`
  * `epoch_hash = next_epoch.hash`

#### Payload Attributes Derivation

In the *Payload Attributes Derivation* stage, we convert the batches we get from the previous stage into instances of the [`PayloadAttributes`][g-payload-attr] structure. Such a structure encodes the transactions that need to figure into a block, as well as other block inputs (timestamp, fee recipient, etc). Payload attributes derivation is detailed in the [Deriving Payload Attributes][deriving-payload-attr] section below.

This stage maintains its own copy of the [system configuration][g-system-config], independent of the L1 retrieval stage. The system configuration is updated with L1 log events whenever the L1 epoch referenced by the batch input changes.

#### Engine Queue

In the *Engine Queue* stage, the previously derived `PayloadAttributes` structures are buffered and sent to the [execution engine][g-exec-engine] to be executed and converted into a proper L2 block.

The stage maintains references to three L2 blocks:

* The [finalized L2 head][g-finalized-l2-head]: everything up to and including this block can be fully derived from the [finalized][l1-finality] (i.e. canonical and forever irreversible) part of the L1 chain.
* The [safe L2 head][g-safe-l2-head]: everything up to and including this block can be fully derived from the currently canonical L1 chain.
* The [unsafe L2 head][g-unsafe-l2-head]: blocks between the safe and unsafe heads are [unsafe blocks][g-unsafe-l2-block] that have not been derived from L1. These blocks either come from sequencing (in sequencer mode) or from [unsafe sync][g-unsafe-sync] to the sequencer (in validator mode).
This is also known as the "latest" head.

Additionally, it buffers a short history of references to recently processed safe L2 blocks, along with references to the L1 blocks from which each was derived. This history does not have to be complete, but enables later L1 finality signals to be translated into L2 finality.

##### Engine API usage

To interact with the engine, the [execution engine API][exec-engine] is used, with the following JSON-RPC methods:

[exec-engine]: ../execution/index.md

##### Bedrock, Canyon, Delta: API Usage

* [`engine_forkchoiceUpdatedV2`] — updates the forkchoice (i.e. the chain head) to `headBlockHash` if different, and instructs the engine to start building an execution payload if the payload attributes parameter is not `null`.
* [`engine_getPayloadV2`] — retrieves a previously requested execution payload build.
* [`engine_newPayloadV2`] — executes an execution payload to create a block.

##### Ecotone: API Usage

* [`engine_forkchoiceUpdatedV3`] — updates the forkchoice (i.e. the chain head) to `headBlockHash` if different, and instructs the engine to start building an execution payload if the payload attributes parameter is not `null`.
* [`engine_getPayloadV3`] — retrieves a previously requested execution payload build.
* `engine_newPayload`
  * [`engine_newPayloadV2`] — executes a Bedrock/Canyon/Delta execution payload to create a block.
  * [`engine_newPayloadV3`] — executes an Ecotone execution payload to create a block.
  * [`engine_newPayloadV4`] — executes an Isthmus execution payload to create a block.

The current version of `op-node` uses the `v4` Engine API RPC methods, as well as `engine_newPayloadV3` and `engine_newPayloadV2`, since `engine_newPayloadV4` only supports Isthmus execution payloads. Both `engine_forkchoiceUpdatedV4` and `engine_getPayloadV4` are backwards compatible with Ecotone, Bedrock, Canyon & Delta payloads. Prior versions of `op-node` used `v3`, `v2` and `v1` methods.
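The fork-to-method mapping above can be sketched as a small helper. This is illustrative only (the fork list and the helper are assumptions of this sketch, not an actual client API):

```python
# Illustrative sketch (not op-node code): pick the engine_newPayload version
# for a payload, per the fork support described above.
FORK_ORDER = ["bedrock", "canyon", "delta", "ecotone", "isthmus"]

def new_payload_method(fork):
    """Return the engine_newPayload RPC method name for a payload of `fork`."""
    if fork not in FORK_ORDER:
        raise ValueError("unknown fork: " + fork)
    if fork == "isthmus":
        return "engine_newPayloadV4"  # V4 only supports Isthmus payloads
    if fork == "ecotone":
        return "engine_newPayloadV3"
    return "engine_newPayloadV2"      # Bedrock / Canyon / Delta
```

By contrast, `engine_forkchoiceUpdatedV4` and `engine_getPayloadV4` need no such dispatch, as they are backwards compatible with earlier payloads.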
[`engine_forkchoiceUpdatedV2`]: ../execution/index.md#engine_forkchoiceupdatedv2
[`engine_forkchoiceUpdatedV3`]: ../execution/index.md#engine_forkchoiceupdatedv3
[`engine_getPayloadV2`]: ../execution/index.md#engine_getpayloadv2
[`engine_getPayloadV3`]: ../execution/index.md#engine_getpayloadv3
[`engine_newPayloadV2`]: ../execution/index.md#engine_newpayloadv2
[`engine_newPayloadV3`]: ../execution/index.md#engine_newpayloadv3
[`engine_newPayloadV4`]: ../execution/index.md#engine_newpayloadv4

The execution payload is an object of type [`ExecutionPayloadV3`][eth-payload].

[eth-payload]: https://github.com/ethereum/execution-apis/blob/main/src/engine/cancun.md

The `ExecutionPayload` has the following requirements:

* Bedrock
  * The withdrawals field MUST be nil
  * The blob gas used field MUST be nil
  * The excess blob gas field MUST be nil
* Canyon, Delta
  * The withdrawals field MUST be non-nil
  * The withdrawals field MUST be an empty list
  * The blob gas used field MUST be nil
  * The excess blob gas field MUST be nil
* Ecotone
  * The withdrawals field MUST be non-nil
  * The withdrawals field MUST be an empty list
  * The blob gas used field MUST be 0
  * The excess blob gas field MUST be 0

##### Forkchoice synchronization

If there are any forkchoice updates to be applied, before additional inputs are derived or processed, then these are applied to the engine first.

This synchronization may happen when:

* An L1 finality signal finalizes one or more L2 blocks: updating the "finalized" L2 block.
* A successful consolidation of unsafe L2 blocks: updating the "safe" L2 block.
* The first thing after a derivation pipeline reset, to ensure a consistent execution engine forkchoice state.

The new forkchoice state is applied by calling [fork choice updated](#engine-api-usage) on the engine API. On forkchoice-state validity errors the derivation pipeline must be reset to recover to a consistent state.
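The per-fork field requirements can be sketched as a validation helper, using plain Python values (`None` for nil). The helper and its parameter names are hypothetical; `excess_blob_gas` corresponds to `excessBlobGas` in `ExecutionPayloadV3`:

```python
# Sketch: check the per-fork ExecutionPayload field requirements listed above.
# `None` models a nil field; lists model the withdrawals field.
def check_payload_fields(fork, withdrawals, blob_gas_used, excess_blob_gas):
    """Return True iff the fields satisfy the requirements for `fork`."""
    if fork == "bedrock":
        # All three fields must be nil.
        return withdrawals is None and blob_gas_used is None and excess_blob_gas is None
    if fork in ("canyon", "delta"):
        # Withdrawals must be a non-nil empty list; blob fields still nil.
        return withdrawals == [] and blob_gas_used is None and excess_blob_gas is None
    if fork == "ecotone":
        # Withdrawals empty; blob fields present but zero.
        return withdrawals == [] and blob_gas_used == 0 and excess_blob_gas == 0
    raise ValueError("unknown fork: " + fork)
```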
##### L1-consolidation: payload attributes matching

If the unsafe head is ahead of the safe head, then [consolidation][g-consolidation] is attempted, verifying that the existing unsafe L2 chain matches the L2 inputs derived from the canonical L1 data.

During consolidation, we consider the oldest unsafe L2 block, i.e. the unsafe L2 block directly after the safe head. If the payload attributes match this oldest unsafe L2 block, then that block can be considered "safe" and becomes the new safe head.

The following fields of the derived L2 payload attributes are checked for equality with the L2 block:

* Bedrock, Canyon, Delta, Ecotone Blocks
  * `parent_hash`
  * `timestamp`
  * `randao`
  * `fee_recipient`
  * `transactions_list` (first length, then equality of each of the encoded transactions, including deposits)
  * `gas_limit`
* Canyon, Delta, Ecotone Blocks
  * `withdrawals` (first presence, then length, then equality of each of the encoded withdrawals)
* Ecotone Blocks
  * `parent_beacon_block_root`

If consolidation succeeds, the forkchoice change will synchronize as described in the section above.

If consolidation fails, the L2 payload attributes will be processed immediately as described in the section below. The payload attributes are chosen in favor of the previous unsafe L2 block, creating an L2 chain reorg on top of the current safe block. Immediately processing the new alternative attributes enables execution engines like go-ethereum to enact the change, as linear rewinds of the tip of the chain may not be supported.

##### L1-sync: payload attributes processing

[exec-engine-comm]: ../execution/index.md#engine-api

If the safe and unsafe L2 heads are identical (whether because of failed consolidation or not), we send the L2 payload attributes to the execution engine to be constructed into a proper L2 block. This L2 block will then become both the new L2 safe and unsafe head.
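The field-by-field consolidation check can be sketched as follows. Blocks and attributes are modeled as plain dicts for illustration only; a real implementation compares decoded structures (and compares lengths before element-wise equality):

```python
# Sketch of the consolidation check: compare derived payload attributes against
# the oldest unsafe L2 block, using the fork-dependent field list above.
def attributes_match(attrs, block, fork):
    """Return True iff all fork-relevant fields are equal."""
    fields = ["parent_hash", "timestamp", "randao", "fee_recipient",
              "transactions_list", "gas_limit"]
    if fork in ("canyon", "delta", "ecotone"):
        fields.append("withdrawals")
    if fork == "ecotone":
        fields.append("parent_beacon_block_root")
    return all(attrs.get(f) == block.get(f) for f in fields)
```

If the check succeeds, the unsafe block becomes the new safe head; if it fails, the derived attributes are processed immediately, reorging out the unsafe block.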
If payload attributes created from a batch cannot be inserted into the chain because of a validation error (i.e. there was an invalid transaction or state transition in the block), the batch should be dropped and the safe head should not be advanced. The engine queue will attempt to use the next batch for that timestamp from the batch queue. If no valid batch is found, the rollup node will create a deposit-only batch, which should always pass validation because deposits are always valid.

Interaction with the execution engine via the execution engine API is detailed in the [Communication with the Execution Engine][exec-engine-comm] section.

The payload attributes are then processed with a sequence of:

* [Engine: Fork choice updated](#engine-api-usage) with the current forkchoice state of the stage, and the attributes to start block building.
  * Non-deterministic sources, like the tx-pool, must be disabled to reconstruct the expected block.
* [Engine: Get Payload](#engine-api-usage) to retrieve the payload, by the payload-ID in the result of the previous step.
* [Engine: New Payload](#engine-api-usage) to import the new payload into the execution engine.
* [Engine: Fork Choice Updated](#engine-api-usage) to make the new payload canonical, now with a change of both `safe` and `unsafe` fields to refer to the payload, and no payload attributes.

Engine API error handling:

* On RPC-type errors the payload attributes processing should be re-attempted in a future step.
* On payload processing errors the attributes must be dropped, and the forkchoice state must be left unchanged.
  * Eventually the derivation pipeline will produce alternative payload attributes, with or without batches.
  * If the payload attributes only contained deposits, then it is a critical derivation error if these are invalid.
* On forkchoice-state validity errors the derivation pipeline must be reset to recover to a consistent state.
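The four-call sequence above can be sketched as follows, assuming a hypothetical `engine` client object whose methods wrap the corresponding Engine API RPCs (the method names and dict shapes are illustrative, not a real client library):

```python
# Sketch of the payload-attributes processing sequence, with a hypothetical
# engine client. Error handling is omitted for brevity.
def process_attributes(engine, forkchoice, attrs):
    # 1. Start block building: FCU with attributes (tx-pool disabled via noTxPool).
    result = engine.forkchoice_updated(forkchoice, attrs)
    # 2. Retrieve the built payload by the payload-ID from the previous step.
    payload = engine.get_payload(result["payloadId"])
    # 3. Import the payload into the execution engine.
    engine.new_payload(payload)
    # 4. Canonicalize: FCU with safe and unsafe pointing at the new payload,
    #    and no payload attributes.
    new_head = payload["blockHash"]
    forkchoice = dict(forkchoice, safe=new_head, unsafe=new_head)
    engine.forkchoice_updated(forkchoice, None)
    return new_head
```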
##### Processing unsafe payload attributes

If no forkchoice updates or L1 data remain to be processed, and if the next possible L2 block is already available through an unsafe source such as the sequencer publishing it via the p2p network, then it is optimistically processed as an "unsafe" block. This reduces later derivation work to just consolidation with L1 in the happy case, and enables the user to see the head of the L2 chain faster than the L1 may confirm the L2 batches.

To process unsafe payloads, the payload must:

* Have a block number higher than the current safe L2 head.
  * The safe L2 head may only be reorged out due to L1 reorgs.
* Have a parent blockhash that matches the current unsafe L2 head.
  * This prevents the execution engine from individually syncing a larger gap in the unsafe L2 chain.
  * This prevents unsafe L2 blocks from reorging other previously validated L2 blocks.
  * This check may change in future versions to adopt e.g. the L1 snap-sync protocol.

The payload is then processed with a sequence of:

* Bedrock/Canyon/Delta Payloads
  * `engine_newPayloadV2`: process the payload. It does not become canonical yet.
  * `engine_forkchoiceUpdatedV2`: make the payload the canonical unsafe L2 head, and keep the safe/finalized L2 heads.
* Ecotone Payloads
  * `engine_newPayloadV3`: process the payload. It does not become canonical yet.
  * `engine_forkchoiceUpdatedV3`: make the payload the canonical unsafe L2 head, and keep the safe/finalized L2 heads.
* Isthmus Payloads
  * `engine_newPayloadV4`: process the payload. It does not become canonical yet.

Engine API error handling:

* On RPC-type errors the payload processing should be re-attempted in a future step.
* On payload processing errors the payload must be dropped, and not be marked as canonical.
* On forkchoice-state validity errors the derivation pipeline must be reset to recover to a consistent state.
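The two preconditions for processing an unsafe payload can be sketched as a predicate over illustrative block records (`number`, `hash`, `parent_hash` are assumed field names for this sketch):

```python
# Sketch: preconditions for optimistically processing an unsafe payload.
def can_process_unsafe(payload, safe_head, unsafe_head):
    # Must be ahead of the safe head; the safe head only reorgs on L1 reorgs.
    if payload["number"] <= safe_head["number"]:
        return False
    # Must extend the current unsafe head: no gaps in the unsafe chain, and no
    # reorg of previously validated unsafe blocks.
    return payload["parent_hash"] == unsafe_head["hash"]
```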
#### Resetting the Pipeline

It is possible to reset the pipeline, for instance if we detect an L1 [reorg (reorganization)][g-reorg]. **This enables the rollup node to handle L1 chain reorg events.**

Resetting will recover the pipeline into a state that produces the same outputs as a full L2 derivation process, but starting from an existing L2 chain that is traversed back just enough to reconcile with the current L1 chain.

Note that this algorithm covers several important use-cases:

* Initialize the pipeline without starting from 0, e.g. when the rollup node restarts with an existing engine instance.
* Recover the pipeline if it becomes inconsistent with the execution engine chain, e.g. when the engine syncs/changes.
* Recover the pipeline when the L1 chain reorganizes, e.g. a late L1 block is orphaned, or a larger attestation failure.
* Initialize the pipeline to derive a disputed L2 block with prior L1 and L2 history inside a proof program.

Handling these cases also means a node can be configured to eagerly sync L1 data with 0 confirmations, as it can undo the changes if the L1 later does not recognize the data as canonical, enabling safe low-latency usage.

The Engine Queue is reset first, to determine the L1 and L2 starting points to continue derivation from. After this, the other stages are reset independently of each other.

##### Finding the sync starting point

To find the starting point, there are several steps, relative to the head of the chain traversing back:

1. Find the current L2 forkchoice state
   * If no `finalized` block can be found, start at the Bedrock genesis block.
   * If no `safe` block can be found, fall back to the `finalized` block.
   * The `unsafe` block should always be available and consistent with the above (it may not be in rare engine-corruption recovery cases; this is being reviewed).
2. Find the first L2 block with a plausible L1 reference to be the new `unsafe` starting point, starting from the previous `unsafe`, back to `finalized` and no further.
   * Plausible iff: the L1 origin of the L2 block is known and canonical, or unknown and has a block-number ahead of L1.
3. Find the first L2 block with an L1 reference older than the sequencing window, to be the new `safe` starting point, starting at the above plausible `unsafe` head, back to `finalized` and no further.
   * If at any point the L1 origin is known but not canonical, the `unsafe` head is revised to the parent of the current block.
   * The highest L2 block with a known canonical L1 origin is remembered as `highest`.
   * If at any point the L1 origin in the block is corrupt w.r.t. derivation rules, then error. Corruption includes:
     * Inconsistent L1 origin block number or parent-hash with the parent L1 origin
     * Inconsistent L1 sequence number (always changes to `0` for an L1 origin change, or increments by `1` if not)
   * If the L1 origin of the L2 block `n` is older than the L1 origin of `highest` by more than a sequence window, and `n.sequence_number == 0`, then the parent L2 block of `n` will be the `safe` starting point.
4. The `finalized` L2 block persists as the `finalized` starting point.
5. Find the first L2 block with an L1 reference older than the channel-timeout.
   * The L1 origin referenced by this block, which we call `l2base`, will be the `base` for the L2 pipeline derivation: by starting here, the stages can buffer any necessary data, while dropping incomplete derivation outputs until L1 traversal has caught up with the actual L2 safe head.

While traversing back the L2 chain, an implementation may sanity-check that the starting point is never set too far back compared to the existing forkchoice state, to avoid an intensive reorg because of misconfiguration.

Implementers note: steps 1-4 are known as `FindL2Heads`. Step 5 is currently part of the Engine Queue reset. This may change to isolate the starting-point search from the bare reset logic.

##### Resetting derivation stages

1. L1 Traversal: start at L1 `base` as the first block to be pulled by the next stage.
2.
L1 Retrieval: empty previous data, and fetch the `base` L1 data, or defer the fetching work to a later pipeline step.
3. Frame Queue: empty the queue.
4. Channel Bank: empty the channel bank.
5. Channel Reader: reset any batch decoding state.
6. Batch Queue: empty the batch queue, use `base` as the initial L1 point of reference.
7. Payload Attributes Derivation: empty any batch/attributes state.
8. Engine Queue:
   * Initialize the L2 forkchoice state with the sync starting-point state (`finalized`/`safe`/`unsafe`).
   * Initialize the L1 point of reference of the stage to `base`.
   * Require a forkchoice update as the first task.
   * Reset any finality data.

Where necessary, stages starting at `base` can initialize their system-config from data encoded in the `l2base` block.

##### About reorgs Post-Merge

Note that post-[merge], the depth of reorgs will be bounded by the [L1 finality delay][l1-finality] (2 L1 beacon epochs, or approximately 13 minutes, unless more than 1/3 of the network consistently disagrees). New L1 blocks may be finalized every L1 beacon epoch (approximately 6.4 minutes), and depending on these finality-signals and batch-inclusion, the derived L2 chain will become irreversible as well.

Note that this form of finalization only affects inputs, and nodes can then subjectively say the chain is irreversible, by reproducing the chain from these irreversible inputs and the set protocol rules and parameters.

This is however completely unrelated to the outputs posted on L1, which require a form of proof like a fault-proof or zk-proof to finalize. Optimistic-rollup outputs like withdrawals on L1 are only labeled "finalized" after passing a week without dispute (the fault proof challenge window), a name-collision with the proof-of-stake finalization.
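Returning to the reset procedure: the ordering (Engine Queue first, to find `base`, then the remaining stages independently) can be sketched as follows. The stage names and `reset` interface are illustrative, not an actual op-node API:

```python
# Sketch of the pipeline reset order. The Engine Queue is reset first, since it
# determines the starting points; the remaining stages are then each reset with
# the same `base` reference.
def reset_pipeline(stages, base):
    order = []
    stages["engine_queue"].reset(base)
    order.append("engine_queue")
    for name in ["l1_traversal", "l1_retrieval", "frame_queue", "channel_bank",
                 "channel_reader", "batch_queue", "payload_attributes"]:
        stages[name].reset(base)
        order.append(name)
    return order
```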
[merge]: https://ethereum.org/en/upgrades/merge/
[l1-finality]: https://ethereum.org/en/developers/docs/consensus-mechanisms/pos/#finality

***

## Deriving Payload Attributes

[deriving-payload-attr]: #deriving-payload-attributes

For every L2 block derived from L1 data, we need to build [payload attributes][g-payload-attr], represented by an [expanded version][expanded-payload] of the [`PayloadAttributesV2`][eth-payload] object, which includes additional `transactions` and `noTxPool` fields.

This process happens during the payload-attributes queue run by a verifier node, as well as during block production run by a sequencer node (the sequencer may enable the tx-pool usage if the transactions are batch-submitted).

[expanded-payload]: ../execution/index.md#extended-payloadattributesv1

### Deriving the Transaction List

For each L2 block to be created by the sequencer, we start from a [sequencer batch][g-sequencer-batch] matching the target L2 block number. This could potentially be an empty auto-generated batch, if the L1 chain did not include a batch for the target L2 block number. [Remember][batch-format] that the batch includes a [sequencing epoch][g-sequencing-epoch] number, an L2 timestamp, and a transaction list.

This block is part of a [sequencing epoch][g-sequencing-epoch], whose number matches that of an L1 block (its *[L1 origin][g-l1-origin]*). This L1 block is used to derive L1 attributes and (for the first L2 block in the epoch) user deposits.

Therefore, a [`PayloadAttributesV2`][expanded-payload] object must include the following transactions:

* one or more [deposited transactions][g-deposited], of two kinds:
  * a single *[L1 attributes deposited transaction][g-l1-attr-deposit]*, derived from the L1 origin.
  * for the first L2 block in the epoch, zero or more *[user-deposited transactions][g-user-deposited]*, derived from the [receipts][g-receipts] of the L1 origin.
* zero or more [network upgrade automation transactions]: special transactions to perform network upgrades.
* zero or more *[sequenced transactions][g-sequencing]*: regular transactions signed by L2 users, included in the sequencer batch.

Transactions **must** appear in this order in the payload attributes.

The L1 attributes are read from the L1 block header, while deposits are read from the L1 block's [receipts][g-receipts]. Refer to the [**deposit contract specification**][deposit-contract-spec] for details on how deposits are encoded as log entries.

[deposit-contract-spec]: ../bridging/deposits.md#deposit-contract

Logs are derived from transactions following the future-proof best-effort process described in [On Future-Proof Transaction Log Derivation](#on-future-proof-transaction-log-derivation).

#### Network upgrade automation transactions

[network upgrade automation transactions]: #network-upgrade-automation-transactions

Some network upgrades require automated contract changes or deployments at specific blocks. To automate these, without adding persistent changes to the execution-layer, special transactions may be inserted as part of the derivation process.

### Building Individual Payload Attributes

After deriving the transactions list, the rollup node constructs a [`PayloadAttributesV2`][extended-attributes] as follows:

* `timestamp` is set to the batch's timestamp.
* `random` is set to the `prev_randao` L1 block attribute.
* `suggestedFeeRecipient` is set to the Sequencer Fee Vault address. See the [Fee Vaults] specification.
* `transactions` is the array of the derived transactions: deposited transactions and sequenced transactions, all encoded with [EIP-2718].
* `noTxPool` is set to `true`, to use the exact above `transactions` list when constructing the block.
* `gasLimit` is set to the current `gasLimit` value in the [system configuration][g-system-config] of this payload.
* `withdrawals` is set to nil prior to Canyon, and to an empty array after Canyon.

[extended-attributes]: ../execution/index.md#extended-payloadattributesv1
[Fee Vaults]: ../execution/index.md#fee-vaults

### On Future-Proof Transaction Log Derivation

As described in [L1 Retrieval](#l1-retrieval), batcher transactions' types are required to be from a fixed allow-list. However, we want to allow deposit transactions and `SystemConfig` update events to be derived even from receipts of future transaction types, as long as the receipts can be decoded following a best-effort process:

As long as a future transaction type follows the [EIP-2718](https://eips.ethereum.org/EIPS/eip-2718) specification, the type can be decoded from the first byte of the transaction's (or its receipt's) binary encoding. We can then proceed as follows to get the logs of such a future transaction, or discard the transaction's receipt as invalid.

* If it's a known transaction type, that is, legacy (first byte of the encoding is in the range `[0xc0, 0xfe]`) or its first byte is in the range `[0, 4]` or is `0x7e` (*deposited*), then it's not a *future transaction*, we know how to decode the receipt, and this process is irrelevant.
* If a transaction's first byte is in the range `[0x05, 0x7d]`, it is expected to be a *future* EIP-2718 transaction, so we can proceed to its receipt. Note that we excluded `0x7e` because that's the deposit transaction type, which is known.
  * The *future* receipt encoding's first byte must be the same byte as the transaction encoding's first byte, or it is discarded as invalid, because we require it to be an EIP-2718-encoded receipt to continue.
  * The receipt payload is decoded as if it were encoded as `rlp([status, cumulative_transaction_gas_used, logs_bloom, logs])`, which is the encoding of the known non-legacy transaction types.
  * If this decoding fails, the transaction's receipt is discarded as invalid.
  * If this decoding succeeds, the `logs` have been obtained and can be processed as those of known transaction types.

The intention of this best-effort decoding process is to future-proof the protocol for new L1 transaction types.

## Specification

[g-rollup-node]: ../../reference/glossary.md#rollup-node
[g-derivation]: ../../reference/glossary.md#L2-chain-derivation
[g-payload-attr]: ../../reference/glossary.md#payload-attributes
[g-block]: ../../reference/glossary.md#block
[g-exec-engine]: ../../reference/glossary.md#execution-engine
[g-reorg]: ../../reference/glossary.md#re-organization
[g-rollup-driver]: ../../reference/glossary.md#rollup-driver
[g-receipts]: ../../reference/glossary.md#receipt

### Overview

The [rollup node][g-rollup-node] is the component responsible for [deriving the L2 chain][g-derivation] from L1 blocks (and their associated [receipts][g-receipts]). The part of the rollup node that derives the L2 chain is called the [rollup driver][g-rollup-driver]. This document is currently only concerned with the specification of the rollup driver.

### Driver

The task of the [driver][g-rollup-driver] in the [rollup node][g-rollup-node] is to manage the [derivation][g-derivation] process:

* Keep track of the L1 head block
* Keep track of the L2 chain sync progress
* Iterate over the derivation steps as new inputs become available

#### Derivation

This process happens in three steps:

1. Select inputs from the L1 chain, on top of the last L2 block: a list of blocks, with transactions and associated data and receipts.
2. Read L1 information, deposits, and sequencing batches in order to generate [payload attributes][g-payload-attr] (essentially [a block without output properties][g-block]).
3. Pass the payload attributes to the [execution engine][g-exec-engine], so that the L2 block (including [output block properties][g-block]) may be computed.

While this process is conceptually a pure function from the L1 chain to the L2 chain, it is in practice incremental.
The L2 chain is extended whenever new L1 blocks are added to the L1 chain. Similarly, the L2 chain re-organizes whenever the L1 chain [re-organizes][g-reorg].

For a complete specification of the L2 block derivation, refer to the [L2 block derivation document](derivation.md).

The rollup node RPC surface is specified in the [RPC](rpc.md) document.

### Protocol Version tracking

The rollup node should track the recommended and required protocol versions by monitoring the Protocol Versions contract on L1. This can be implemented through polling in the [Driver](#driver) loop. After polling the Protocol Version, the rollup node SHOULD communicate it to the execution engine through an [`engine_signalSuperchainV1`](../execution/index.md#enginesignalsuperchainv1) call.

The rollup node SHOULD warn the user when the recommended version is newer than the current version supported by the rollup node. The rollup node SHOULD take safety precautions if it does not meet the required protocol version. This may include halting the engine, with the consent of the rollup node operator.

## P2P

### Overview

The [rollup node](index.md) has an optional peer-to-peer (P2P) network service to improve the latency between the view of sequencers and the rest of the network by bypassing the L1 in the happy case, without relying on a single centralized endpoint.

This also enables faster historical sync, bootstrapped by providing block headers to sync towards, since only the L2 chain inputs need to be compared against the L1 data, rather than processing everything one block at a time.

The rollup node will *always* prioritize L1 and reorganize to match the canonical chain. The L2 data retrieved via the P2P interface is strictly a speculative extension, also known as the "unsafe" chain, to improve happy-case performance.

This also means that P2P behavior is a soft rule: nodes keep each other in check with scoring and eventual banning of malicious peers by identity or IP.
Behavior on the P2P layer does not affect rollup security; at worst, nodes fall back to serving higher-latency data from L1.

In summary, the P2P stack looks like:

* Discovery to find peers: [Discv5][discv5]
* Connections, peering, transport security, multiplexing, gossip: [LibP2P][libp2p]
* Application-layer publishing and validation of gossiped messages like L2 blocks.

This document only specifies the composition and configuration of these network libraries. These components have their own standards, implementations in Go/Rust/Java/Nim/JS/more, and are adopted by several other blockchains, most notably the [L1 consensus layer (Eth2)][eth2-p2p].

### P2P configuration

#### Identification

Nodes have a **separate** network identity and consensus identity. The network identity is a `secp256k1` key, used for both discovery and active LibP2P connections.

Common representations of network identity:

* `PeerID`: a LibP2P-specific ID derived from the pubkey (through protobuf encoding, typing and hashing)
* `NodeID`: a Discv5-specific ID derived from the pubkey (through hashing, used in the DHT)
* `Multi-address`: an unsigned address, containing: IP, TCP port, PeerID
* `ENR`: a signed record used for discovery, containing: IP, TCP port, UDP port, signature (pubkey can be derived) and L2 network identification. Generally encoded in base64.

#### Discv5

##### Consensus Layer Structure

The Ethereum Node Record (ENR) for an Optimism rollup node must contain the following values, identified by unique keys:

* An IPv4 address (`ip` field) and/or IPv6 address (`ip6` field).
* A TCP port (`tcp` field) representing the local libp2p listening port.
* A UDP port (`udp` field) representing the local discv5 listening port.
* An OpStack (`opstack` field) L2 network identifier.

The `opstack` value is encoded as a single RLP `bytes` value, the concatenation of:

* chain ID (`unsigned varint`)
* fork ID (`unsigned varint`)

Note that Discv5 is a shared DHT (Distributed Hash Table): the L1 consensus and execution nodes, as well as testnet nodes, and even external IoT nodes, all communicate records in this large common DHT. This makes it more difficult to censor the discovery of node records.

The discovery process in Optimism is a pipeline of node records:

1. Fill the table with `FINDNODES` if necessary (performed by the Discv5 library)
2. Pull additional records with searches to random Node IDs if necessary (e.g. iterate [`RandomNodes()`][discv5-random-nodes] in the Go implementation)
3. Pull records from the Discv5 module when looking for peers
4. Check if the record contains the `opstack` entry, and verify that it matches the chain ID and the current or a future fork number
5. If not already connected, and not recently disconnected or put on the deny-list, attempt to dial.

#### LibP2P

##### Transport

TCP transport. Additional transports are supported by LibP2P, but not required.

##### Dialing

Nodes should be publicly dialable, not rely on relay extensions, and able to dial both IPv4 and IPv6.

##### NAT

The listening endpoint must be publicly facing, but may be configured behind a NAT. LibP2P will use PMP / UPnP based techniques to track the external IP of the node. It is recommended to disable the above if the external IP is static and configured manually.

##### Peer management

The default is to maintain a peer count with a tide-system based on active peer count:

* At "low tide" the node starts to actively search for additional peer connections.
* At "high tide" the node starts to prune active connections, except those that are marked as trusted or have a grace period.

Peers will have a grace period for a configurable amount of time after joining.
In an emergency, when memory runs low, the node should start pruning more aggressively.

Peer records can be persisted to disk to quickly reconnect with known peers after restarting the rollup node.

The discovery process feeds the peerstore with peer records to connect to, tagged with a time-to-live (TTL). The current P2P processes do not require selective topic-specific peer connections, other than filtering for the basic network participation requirement.

Peers may be banned if their performance score is too low, or if an objectively malicious action was detected. Banned peers will be persisted to the same data-store as the peerstore records.

TODO: the connection gater does not currently gate by IP address on the dial Accept-callback.

##### Transport security

[Libp2p-noise][libp2p-noise], `XX` handshake, with the `secp256k1` P2P identity, as popularized in Eth2. The TLS option is available as well, but `noise` should be prioritized in negotiation.

##### Protocol negotiation

[Multistream-select 1.0][multistream-select] (`/multistream/1.0.0`) is an interactive protocol used to negotiate sub-protocols supported in LibP2P peers. Multistream-select 2.0 may be used in the future.

##### Identify

LibP2P offers a minimal identification module to share client version and programming language. This is optional and can be disabled for enhanced privacy. It also includes the same protocol negotiation information, which can speed up initial connections.

##### Ping

LibP2P includes a simple ping protocol to track latency between connections. This should be enabled to help provide insight into network health.

##### Multiplexing

For async communication over different channels over the same connection, multiplexing is used. [mplex][mplex] (`/mplex/6.7.0`) is required, and [yamux][yamux] (`/yamux/1.0.0`) is recommended but optional.

##### GossipSub

[GossipSub 1.1][gossipsub] (`/meshsub/1.1.0`, i.e.
with peer-scoring extension) is a pubsub protocol for mesh-networks, deployed on L1 consensus (Eth2) and other protocols such as Filecoin, offering many customization options.

##### Content-based message identification

Messages are deduplicated, and filtered through application-layer signature verification. Thus origin-stamping is disabled and published messages must only contain application data, enforced through a [`StrictNoSign` Signature Policy][signature-policy].

This provides greater privacy, and allows sequencers (consensus identity) to maintain multiple network identities for redundancy.

##### Message compression and limits

The application contents are compressed with [snappy][snappy] single-block-compression (as opposed to frame-compression), and constrained to 10 MiB.

##### Message ID computation

[Same as L1][l1-message-id], with recognition of compression:

* If `message.data` has a valid snappy decompression, set `message-id` to the first 20 bytes of the `SHA256` hash of the concatenation of `MESSAGE_DOMAIN_VALID_SNAPPY` with the snappy decompressed message data, i.e. `SHA256(MESSAGE_DOMAIN_VALID_SNAPPY + snappy_decompress(message.data))[:20]`.
* Otherwise, set `message-id` to the first 20 bytes of the `SHA256` hash of the concatenation of `MESSAGE_DOMAIN_INVALID_SNAPPY` with the raw message data, i.e. `SHA256(MESSAGE_DOMAIN_INVALID_SNAPPY + message.data)[:20]`.
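The two message-id rules above can be sketched as follows. The 4-byte domain constants follow the L1 (Eth2) gossip convention and are an assumption of this sketch; snappy decompression is left to the caller (pass `None` when decompression fails), so no snappy library is required here:

```python
import hashlib

# 4-byte domain prefixes, as used by the L1 consensus gossip spec (assumed).
MESSAGE_DOMAIN_INVALID_SNAPPY = b"\x00\x00\x00\x00"
MESSAGE_DOMAIN_VALID_SNAPPY = b"\x01\x00\x00\x00"

def message_id(raw, decompressed):
    """First 20 bytes of SHA256(domain + data), per the rules above.

    `raw` is message.data as received; `decompressed` is its snappy
    decompression, or None if decompression failed.
    """
    if decompressed is not None:
        data = MESSAGE_DOMAIN_VALID_SNAPPY + decompressed
    else:
        data = MESSAGE_DOMAIN_INVALID_SNAPPY + raw
    return hashlib.sha256(data).digest()[:20]
```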
##### Heartbeat and parameters

GossipSub [parameters][gossip-parameters]:

* `D` (topic stable mesh target count): 8
* `D_low` (topic stable mesh low watermark): 6
* `D_high` (topic stable mesh high watermark): 12
* `D_lazy` (gossip target): 6
* `heartbeat_interval` (interval of heartbeat, in seconds): 0.5
* `fanout_ttl` (ttl for fanout maps for topics we are not subscribed to but have published to, in seconds): 24
* `mcache_len` (number of windows to retain full messages in cache for `IWANT` responses): 12
* `mcache_gossip` (number of windows to gossip about): 3
* `seen_ttl` (number of heartbeat intervals to retain message IDs): 130 (= 65 seconds)

Notable differences from L1 consensus (Eth2):

* `seen_ttl` does not need to cover a full L1 epoch (6.4 minutes), but rather just a small window covering the latest blocks
* `fanout_ttl`: adjusted to be lower than `seen_ttl`
* `mcache_len`: a larger number of heartbeats can be retained since the gossip is much less noisy.
* `heartbeat_interval`: a faster interval to reduce latency; bandwidth should still be reasonable since there are far fewer messages to gossip about each interval than on L1, which uses an interval of 0.7 seconds.

##### Topic configuration

Topics have string identifiers and are communicated with messages and subscriptions. `/optimism/chain_id/hardfork_version/Name`

* `chain_id`: replace with the decimal representation of the chain ID
* `hardfork_version`: replace with the decimal representation of the hardfork, starting at `0`
* `Name`: the topic application-name

Note that the topic encoding depends on the topic, unlike L1, since there are fewer topics, and all are snappy-compressed.

##### Topic validation

To ensure only valid messages are relayed, and malicious peers get scored based on application behavior, an [extended validator][extended-validator] checks the message before it is relayed or processed.
The extended validator emits one of the following validation signals: * `ACCEPT` valid, relayed to other peers and passed to local topic subscriber * `IGNORE` scored like inactivity, message is dropped and not processed * `REJECT` score penalties, message is dropped ### Gossip Topics Listed below are the topics used to distribute blocks to other nodes faster than proxying through L1 would allow: #### `blocksv1` Pre-Canyon/Shanghai blocks are broadcast on `/optimism/<chain_id>/0/blocks`. #### `blocksv2` Canyon/Delta blocks are broadcast on `/optimism/<chain_id>/1/blocks`. #### `blocksv3` Ecotone blocks are broadcast on `/optimism/<chain_id>/2/blocks`. #### `blocksv4` Isthmus blocks are broadcast on `/optimism/<chain_id>/3/blocks`. #### Block encoding A block is structured as the concatenation of: * V1 and V2 topics * `signature`: A `secp256k1` signature, always 65 bytes, `r (uint256), s (uint256), y_parity (uint8)` * `payload`: A SSZ-encoded `ExecutionPayload`, always the remaining bytes. * V3 topic * `signature`: A `secp256k1` signature, always 65 bytes, `r (uint256), s (uint256), y_parity (uint8)` * `parentBeaconBlockRoot`: L1 origin parent beacon block root, always 32 bytes * `payload`: A SSZ-encoded `ExecutionPayload`, always the remaining bytes. * V4 topic * `signature`: A `secp256k1` signature, always 65 bytes, `r (uint256), s (uint256), y_parity (uint8)` * `parentBeaconBlockRoot`: L1 origin parent beacon block root, always 32 bytes * `payload`: A SSZ-encoded `ExecutionPayload`, always the remaining bytes. * *Note* - the `ExecutionPayload` is modified for the first time in Isthmus. See ["Update to `ExecutionPayload`"](../../upgrades/isthmus/exec-engine.md#update-to-executionpayload) in the Isthmus spec. All topics use Snappy block-compression (i.e. no snappy frames): the above needs to be compressed after encoding, and decompressed before decoding.
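The topic IDs and the envelope layout above can be sketched as follows. This is a minimal Python sketch: the function names are illustrative, SSZ decoding and snappy decompression are out of scope, and chain ID 8453 in the usage note below is Base mainnet, used purely as an example value.

```python
# Map of blocks-topic names to their hardfork_version, per the list above.
BLOCKS_TOPIC_VERSIONS = {"blocksv1": 0, "blocksv2": 1, "blocksv3": 2, "blocksv4": 3}


def blocks_topic(chain_id: int, hardfork_version: int) -> str:
    """Build the pubsub topic string /optimism/<chain_id>/<hardfork_version>/blocks."""
    return f"/optimism/{chain_id}/{hardfork_version}/blocks"


def split_block_envelope(data: bytes, hardfork_version: int):
    """Split a snappy-decompressed block message into its fields.

    Topics with hardfork_version >= 2 (blocksv3/blocksv4) carry a 32-byte
    parentBeaconBlockRoot between the signature and the SSZ payload.
    """
    signature, rest = data[:65], data[65:]
    if hardfork_version >= 2:
        return signature, rest[:32], rest[32:]
    return signature, None, rest
```

For example, `blocks_topic(8453, BLOCKS_TOPIC_VERSIONS["blocksv4"])` yields `/optimism/8453/3/blocks`.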
#### Block signatures The `signature` is a `secp256k1` signature, and signs over a message: `keccak256(domain ++ chain_id ++ payload_hash)`, where: * `domain` is 32 bytes, reserved for message types and versioning info. All zero for this signature. * `chain_id` is a big-endian encoded `uint256`. * `payload_hash` is `keccak256(payload)`, where `payload` is: * the `payload` in V1 and V2, * `parentBeaconBlockRoot ++ payload` in V3 + V4 (*NOTE*: In V4, `payload` is extended to include the `withdrawalsRoot`). The `secp256k1` signature must have `y_parity = 0` or `1`, since the `chain_id` is already signed over. #### Block validation An [extended-validator] checks the incoming messages as follows, in order of operation: * `[REJECT]` if the compression is not valid * `[REJECT]` if the block encoding is not valid * `[REJECT]` if the `payload.timestamp` is older than 60 seconds in the past (graceful boundary for worst-case propagation and clock skew) * `[REJECT]` if the `payload.timestamp` is more than 5 seconds into the future * `[REJECT]` if the `block_hash` in the `payload` is not valid * `[REJECT]` if the block is on the V1 topic and has a withdrawals list * `[REJECT]` if the block is on a `topic >= V2` and does not have an empty withdrawals list * `[REJECT]` if the block is on a `topic <= V2` and has a blob gas-used value set * `[REJECT]` if the block is on a `topic <= V2` and has an excess blob gas value set * `[REJECT]` if the block is on a `topic <= V2` and the parent beacon block root is not nil * `[REJECT]` if the block is on a `topic >= V3` and has a blob gas-used value that is not zero * `[REJECT]` if the block is on a `topic >= V3` and has an excess blob gas value that is not zero * `[REJECT]` if the block is on a `topic >= V3` and the parent beacon block root is nil * `[REJECT]` if the block is on a `topic <= V3` and the l2 withdrawals root is not nil * `[REJECT]` if the block is on a `topic >= V4` and
the l2 withdrawals root is nil * `[REJECT]` if more than 5 different blocks have been seen with the same block height * `[IGNORE]` if the block has already been seen * `[REJECT]` if the signature by the sequencer is not valid * Mark the block as seen for the given block height The block is signed by the corresponding sequencer, to filter malicious messages. The sequencer model is singular but may change to multiple sequencers in the future. A default sequencer pubkey is distributed with rollup nodes and should be configurable. Note that a block may still be propagated even if the L1 already confirmed a different block. The local L1 view of the node may be wrong, and the time and signature validation will prevent spam. Hence, calling into the execution engine with a block lookup every propagation step is not worth the added delay. ##### Block processing A node may apply the block to their local engine ahead of L1 availability, if it ensures that: * The application of the block is reversible, in case of a conflict with delayed L1 information * The subsequent forkchoice-update ensures this block is recognized as "unsafe" (see [fork choice updated](derivation.md#engine-api-usage)) ##### Branch selection Nodes expect that the sequencer will not equivocate, and therefore the fork choice rule for unsafe blocks is a "first block wins" model, where the unsafe chain will not change once it has been extended, unless invalidated by safe data published to the L1. Nodes that see a different initial unsafe block will not reach consensus until data is published to the L1, which resolves the disagreement. Because the data published to L1 depends on the batcher's view of the chain, the safe head will be based on the unsafe head of the batcher's source. ##### Block topic scoring parameters TODO: GossipSub per-topic scoring to fine-tune incentives for ideal propagation delay and bandwidth usage.
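The layout of the message signed by the sequencer (see "Block signatures" above) can be sketched as follows. Python's `hashlib` has no keccak-256, so the sketch only builds the 96-byte preimage; hashing and the secp256k1 signature are deliberately left to the caller.

```python
def block_signing_preimage(chain_id: int, payload_hash: bytes) -> bytes:
    """Build domain ++ chain_id ++ payload_hash (96 bytes).

    The sequencer signs keccak256 of this preimage with its secp256k1 key;
    keccak-256 and the signature itself are out of scope for this sketch.
    """
    if len(payload_hash) != 32:
        raise ValueError("payload_hash must be 32 bytes")
    domain = bytes(32)  # all-zero 32-byte domain for block signatures
    return domain + chain_id.to_bytes(32, "big") + payload_hash
```

Because the chain ID is part of the signed preimage, no extra replay-protection offset on `y_parity` is needed.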
### Req-Resp The op-node implements a request-response encoding for its sync protocols similar to that of the L1 Ethereum Beacon Chain. See [L1 P2P-interface req-resp specification][eth2-p2p-reqresp] and [Altair P2P update][eth2-p2p-altair-reqresp]. However, the protocol is simplified, to avoid several issues seen in L1: * Error strings in responses, if there is any alternative response, should not need to be compressed or have an artificial global length limit. * Payload lengths should be fixed-length: byte-by-byte uvarint reading from the underlying stream is undesired. * `<context-bytes>` are relaxed to encode a `uint32`, rather than a beacon-chain `ForkDigest`. * Payload-encoding may change per hardfork, so is not part of the protocol-ID. * Usage of response-chunks is specific to the req-resp method: most basic req-resp does not need chunked responses. * Compression is encouraged to be part of the payload-encoding, specific to the req-resp method, where necessary: pings and similar small messages do not need streaming frame compression. The protocol ID format follows the same scheme as L1, except the trailing encoding schema part, which is now message-specific: ```text /ProtocolPrefix/MessageName/SchemaVersion/ ``` The req-resp protocols served by the op-node all have `/ProtocolPrefix` set to `/opstack/req`. Individual methods may include the chain ID as part of the `/MessageName` segment, so it's immediately clear which chain the method applies to, if the communication is chain-specific. Other methods may include chain-information in the request and/or response data, such as the `ForkDigest` `<context-bytes>` in L1 beacon chain req-resp protocols. Each segment starts with a `/`, and may contain multiple `/`, and the final protocol ID is suffixed with a `/`. #### `payload_by_number` This is an optional chain syncing method, to request/serve execution payloads by number. This serves as a method to fill gaps upon missed gossip, and sync short to medium ranges of unsafe L2 blocks.
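The protocol ID scheme can be assembled mechanically; a small sketch (chain ID 8453, Base mainnet, is used only as an illustration):

```python
def protocol_id(message_name: str, schema_version: int) -> str:
    """Build a req-resp protocol ID: /opstack/req/<MessageName>/<SchemaVersion>/."""
    return f"/opstack/req/{message_name}/{schema_version}/"
```

For a chain-specific method the chain ID is folded into the message name, e.g. `protocol_id("payload_by_number/8453", 0)` yields `/opstack/req/payload_by_number/8453/0/`.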
Protocol ID: `/opstack/req/payload_by_number/<chain_id>/0/` * `/MessageName` is `/payload_by_number/<chain_id>` where `<chain_id>` is set to the op-node L2 chain ID. * `/SchemaVersion` is `/0` Request format: `<num>`: a little-endian `uint64` - the block number to request. Response format: `<response> = <res><version><payload>` * `<res>` is a byte code describing the result. * `0` on success, `<version><payload>` should follow. * `1` if valid request, but unavailable payload. * `2` if invalid request * `3+` if other error * The `>= 128` range is reserved for future use. * `<version>` is a little-endian `uint32`, identifying the response type (fork-specific) * `<payload>` is an encoded block, read till stream EOF. The input of `<payload>` should be limited, as well as any generated decompressed output, to avoid unexpected resource usage or zip-bomb type attacks. A 10 MB limit is recommended, to ensure all blocks may be synced. Implementations may opt for a different limit, since this sync method is optional. `<version>` list: * `0`: SSZ-encoded `ExecutionPayload`, with Snappy framing compression, matching the `ExecutionPayload` SSZ definition of the L1 Merge, L2 Bedrock, and L2 Canyon versions. * `1`: SSZ-encoded `ExecutionPayloadEnvelope` with Snappy framing compression, matching the `ExecutionPayloadEnvelope` SSZ definition of the L2 Ecotone version. * `2`: SSZ-encoded `ExecutionPayload` with Snappy framing compression, matching the `ExecutionPayload` SSZ definition of the L2 Isthmus version. The request is by block-number, enabling parallel fetching of a chain across many peers. A `res = 0` response should be verified to: * Have a block-number matching the requested block number. * Have a consistent `blockhash` w.r.t. the other block contents. * Build towards a known canonical block. * This can be verified by checking if the parent-hash of a previous trusted canonical block matches that of the verified hash of the retrieved block.
* For unsafe blocks this may be relaxed to verification against the parent-hash of any previously trusted block: * The gossip validation process limits the number of blocks that may be trusted to sync towards. * The unsafe blocks should be queued for processing, the latest received L2 unsafe blocks should always override any previous chain, until the final L2 chain can be reproduced from L1 data. A `res > 0` response code should not be accepted. The result code is helpful for debugging, but the client should regard any error like any other unanswered request, as the responding peer cannot be trusted. *** [libp2p]: https://libp2p.io/ [discv5]: https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md [discv5-random-nodes]: https://pkg.go.dev/github.com/ethereum/go-ethereum@v1.10.12/p2p/discover#UDPv5.RandomNodes [eth2-p2p]: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md [eth2-p2p-reqresp]: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#the-reqresp-domain [eth2-p2p-altair-reqresp]: https://github.com/ethereum/consensus-specs/blob/master/specs/altair/p2p-interface.md#the-reqresp-domain [libp2p-noise]: https://github.com/libp2p/specs/tree/master/noise [multistream-select]: https://github.com/multiformats/multistream-select/ [mplex]: https://github.com/libp2p/specs/tree/master/mplex [yamux]: https://github.com/hashicorp/yamux/blob/master/spec.md [gossipsub]: https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md [signature-policy]: https://github.com/libp2p/specs/blob/master/pubsub/README.md#signature-policy-options [snappy]: https://github.com/google/snappy [l1-message-id]: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#topics-and-messages [gossip-parameters]: https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.0.md#parameters [extended-validator]:
https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#extended-validators ## RPC ### L2 Output RPC method The rollup node has its own RPC method, `optimism_outputAtBlock`, which returns a 32-byte hash corresponding to the [L2 output root](../fault-proof/proposer.md#l2-output-commitment-construction). #### Structures These define the types used by rollup node API methods. The types defined here are extended from the [engine API specs][engine-structures]. ##### BlockID * `hash`: `DATA`, 32 Bytes * `number`: `QUANTITY`, 64 Bits ##### L1BlockRef * `hash`: `DATA`, 32 Bytes * `number`: `QUANTITY`, 64 Bits * `parentHash`: `DATA`, 32 Bytes * `timestamp`: `QUANTITY`, 64 Bits ##### L2BlockRef * `hash`: `DATA`, 32 Bytes * `number`: `QUANTITY`, 64 Bits * `parentHash`: `DATA`, 32 Bytes * `timestamp`: `QUANTITY`, 64 Bits * `l1origin`: `BlockID` * `sequenceNumber`: `QUANTITY`, 64 Bits - distance to the first block of the epoch ##### SyncStatus Represents a snapshot of the rollup driver. * `current_l1`: `Object` - instance of [`L1BlockRef`](#l1blockref). * `current_l1_finalized`: `Object` - instance of [`L1BlockRef`](#l1blockref). * `head_l1`: `Object` - instance of [`L1BlockRef`](#l1blockref). * `safe_l1`: `Object` - instance of [`L1BlockRef`](#l1blockref). * `finalized_l1`: `Object` - instance of [`L1BlockRef`](#l1blockref). * `unsafe_l2`: `Object` - instance of [`L2BlockRef`](#l2blockref). * `safe_l2`: `Object` - instance of [`L2BlockRef`](#l2blockref). * `finalized_l2`: `Object` - instance of [`L2BlockRef`](#l2blockref). * `pending_safe_l2`: `Object` - instance of [`L2BlockRef`](#l2blockref). * `queued_unsafe_l2`: `Object` - instance of [`L2BlockRef`](#l2blockref). #### Output Method API The input and return types here are as defined by the [engine API specs][engine-structures]. [engine-structures]: https://github.com/ethereum/execution-apis/blob/main/src/engine/paris.md#structures * method: `optimism_outputAtBlock` * params: 1.
`blockNumber`: `QUANTITY`, 64 bits - L2 integer block number. * returns: 1. `version`: `DATA`, 32 Bytes - the output root version number, beginning with 0. 2. `outputRoot`: `DATA`, 32 Bytes - the output root. 3. `blockRef`: `Object` - instance of [`L2BlockRef`](#l2blockref). 4. `withdrawalStorageRoot`: `DATA`, 32 Bytes - storage root of the `L2toL1MessagePasser` contract. 5. `stateRoot`: `DATA`, 32 Bytes - the state root. 6. `syncStatus`: `Object` - instance of [`SyncStatus`](#syncstatus). ## Standard Bridges ### Overview The standard bridges are responsible for allowing cross domain ETH and ERC20 token transfers. They are built on top of the cross domain messenger contracts and provide a standard interface for depositing tokens. The bridge works for both L1 native tokens and L2 native tokens. The legacy API is preserved to ensure that existing applications will not experience any problems with the Bedrock `StandardBridge` contracts. The `L2StandardBridge` is a predeploy contract located at `0x4200000000000000000000000000000000000010`.
```solidity interface StandardBridge { event ERC20BridgeFinalized(address indexed localToken, address indexed remoteToken, address indexed from, address to, uint256 amount, bytes extraData); event ERC20BridgeInitiated(address indexed localToken, address indexed remoteToken, address indexed from, address to, uint256 amount, bytes extraData); event ETHBridgeFinalized(address indexed from, address indexed to, uint256 amount, bytes extraData); event ETHBridgeInitiated(address indexed from, address indexed to, uint256 amount, bytes extraData); function bridgeERC20(address _localToken, address _remoteToken, uint256 _amount, uint32 _minGasLimit, bytes memory _extraData) external; function bridgeERC20To(address _localToken, address _remoteToken, address _to, uint256 _amount, uint32 _minGasLimit, bytes memory _extraData) external; function bridgeETH(uint32 _minGasLimit, bytes memory _extraData) payable external; function bridgeETHTo(address _to, uint32 _minGasLimit, bytes memory _extraData) payable external; function deposits(address, address) view external returns (uint256); function finalizeBridgeERC20(address _localToken, address _remoteToken, address _from, address _to, uint256 _amount, bytes memory _extraData) external; function finalizeBridgeETH(address _from, address _to, uint256 _amount, bytes memory _extraData) payable external; function messenger() view external returns (address); function OTHER_BRIDGE() view external returns (address); } ``` ### Token Depositing The `bridgeERC20` function is used to send a token from one domain to another domain. An `OptimismMintableERC20` token contract must exist on the remote domain to be able to deposit tokens to that domain. One of these tokens can be deployed using the `OptimismMintableERC20Factory` contract. ### Upgradability Both the L1 and L2 standard bridges should be behind upgradable proxies. 
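The escrow bookkeeping behind the `deposits` mapping can be illustrated with a toy model. This is an assumption-laden sketch, not the contract's actual logic: cross-domain messaging, the actual token transfers, and access control are all omitted.

```python
class BridgeAccounting:
    """Toy model of StandardBridge ERC20 escrow accounting (illustrative
    only; the real contracts also relay a cross-domain message via the
    messenger and move the underlying tokens)."""

    def __init__(self) -> None:
        # Mirrors the deposits(address,address) getter: escrowed amount
        # keyed by (local token, remote token).
        self.deposits: dict = {}

    def bridge_erc20(self, local: str, remote: str, amount: int) -> None:
        """Initiating a bridge escrows tokens and grows the counter."""
        key = (local, remote)
        self.deposits[key] = self.deposits.get(key, 0) + amount

    def finalize_bridge_erc20(self, local: str, remote: str, amount: int) -> None:
        """Finalizing a withdrawal from the other domain releases escrow."""
        key = (local, remote)
        if self.deposits.get(key, 0) < amount:
            raise ValueError("insufficient escrowed balance")
        self.deposits[key] -= amount
```

The invariant this models is that a domain can never release more of a token pair than was previously escrowed on it.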
## Deposits [g-transaction-type]: ../../reference/glossary.md#transaction-type [g-derivation]: ../../reference/glossary.md#L2-chain-derivation [g-deposited]: ../../reference/glossary.md#deposited [g-deposits]: ../../reference/glossary.md#deposits [g-l1-attr-deposit]: ../../reference/glossary.md#l1-attributes-deposited-transaction [g-user-deposited]: ../../reference/glossary.md#user-deposited-transaction [g-eoa]: ../../reference/glossary.md#eoa [g-exec-engine]: ../../reference/glossary.md#execution-engine ### Overview [Deposited transactions][g-deposited], also known as [deposits][g-deposits], are transactions which are initiated on L1, and executed on L2. This document outlines a new [transaction type][g-transaction-type] for deposits. It also describes how deposits are initiated on L1, along with the authorization and validation conditions on L2. **Vocabulary note**: *deposited transaction* refers specifically to an L2 transaction, while *deposit* can refer to the transaction at various stages (for instance when it is deposited on L1). ### The Deposited Transaction Type [deposited-tx-type]: #the-deposited-transaction-type [Deposited transactions][g-deposited] have the following notable distinctions from existing transaction types: 1. They are derived from Layer 1 blocks, and must be included as part of the protocol. 2. They do not include signature validation (see [User-Deposited Transactions][user-deposited] for the rationale). 3. They buy their L2 gas on L1 and, as such, the L2 gas is not refundable. We define a new [EIP-2718] compatible transaction type with the prefix `0x7E` to represent a deposit transaction. A deposit has the following fields (RLP-encoded in the order they appear here): [EIP-2718]: https://eips.ethereum.org/EIPS/eip-2718 * `bytes32 sourceHash`: the source-hash, uniquely identifies the origin of the deposit. * `address from`: The address of the sender account.
* `address to`: The address of the recipient account, or the null (zero-length) address if the deposited transaction is a contract creation. * `uint256 mint`: The ETH value to mint on L2. * `uint256 value`: The ETH value to send to the recipient account. * `uint64 gas`: The gas limit for the L2 transaction. * `bool isSystemTx`: If true, the transaction does not interact with the L2 block gas pool. * This value is disabled and MUST be `false`. * `bytes data`: The calldata. In contrast to [EIP-155] transactions, this transaction type: * Does not include a `nonce`, since it is identified by the `sourceHash`. API responses still include a `nonce` attribute, set to the `depositNonce` value from the corresponding transaction receipt. * Does not include signature information, and makes the `from` address explicit. API responses contain zeroed signature `v`, `r`, `s` values for backwards compatibility. * Includes new `sourceHash`, `from`, `mint`, and `isSystemTx` attributes. API responses contain these as additional fields. [EIP-155]: https://eips.ethereum.org/EIPS/eip-155 We select `0x7E` because transaction type identifiers are currently allowed to go up to `0x7F`. Picking a high identifier minimizes the risk that the identifier will be used by another transaction type on the L1 chain in the future. We don't pick `0x7F` itself in case it becomes used for a variable-length encoding scheme. #### Source hash computation The `sourceHash` of a deposit transaction is computed based on the origin: * User-deposited: `keccak256(bytes32(uint256(0)), keccak256(l1BlockHash, bytes32(uint256(l1LogIndex))))`. Where `l1BlockHash` and `l1LogIndex` refer to the inclusion of the deposit log event on L1. `l1LogIndex` is the index of the deposit event log in the combined list of log events of the block. * L1 attributes deposited: `keccak256(bytes32(uint256(1)), keccak256(l1BlockHash, bytes32(uint256(seqNumber))))`.
Where `l1BlockHash` refers to the hash of the L1 block whose info attributes are deposited. And `seqNumber = l2BlockNum - l2EpochStartBlockNum`, where `l2BlockNum` is the L2 block number of the inclusion of the deposit tx in L2, and `l2EpochStartBlockNum` is the L2 block number of the first L2 block in the epoch. * Upgrade-deposited: `keccak256(bytes32(uint256(2)), keccak256(intent))`. Where `intent` is a UTF-8 byte string, identifying the upgrade intent. Without a `sourceHash` in a deposit, two different deposited transactions could have the same exact hash. The outer `keccak256` hashes the actual uniquely identifying information with a domain, to avoid collisions between different types of sources. The [Interop derivation spec](../consensus/derivation.md) introduces two additional kinds of system deposits, with domains `3` and `4`. We do not use the sender's nonce to ensure uniqueness because this would require an extra L2 EVM state read from the [execution engine][g-exec-engine] during block-derivation. #### Kinds of Deposited Transactions Although we define only one new transaction type, we can distinguish between two kinds of deposited transactions, based on their positioning in the L2 block: 1. The first transaction MUST be a [L1 attributes deposited transaction][l1-attr-deposit], followed by 2. an array of zero-or-more [user-deposited transactions][user-deposited] submitted to the deposit feed contract on L1 (called `OptimismPortal`). User-deposited transactions are only present in the first block of an L2 epoch. We only define a single new transaction type in order to minimize modifications to L1 client software, and complexity in general. #### Validation and Authorization of Deposited Transactions As noted above, the deposited transaction type does not include a signature for validation.
Rather, authorization is handled by the [L2 chain derivation][g-derivation] process, which when correctly applied will only derive transactions with a `from` address attested to by the logs of the [L1 deposit contract][deposit-contract]. #### Execution In order to execute a deposited transaction: First, the balance of the `from` account MUST be increased by the amount of `mint`. This is unconditional, and does not revert on deposit failure. Then, the execution environment for a deposited transaction is initialized based on the transaction's attributes, in exactly the same manner as it would be for an EIP-155 transaction. The deposit transaction is processed exactly like a type-2 (EIP-1559) transaction, with the exception of: * No fee fields are verified: the deposit does not have any, as it pays for gas on L1. * No `nonce` field is verified: the deposit does not have any, it's uniquely identified by its `sourceHash`. * No access-list is processed: the deposit has no access-list, and it is thus processed as if the access-list is empty. * No check if `from` is an Externally Owned Account (EOA): the deposit is ensured not to be an EOA through L1 address masking, this may change in future L1 contract-deployments to e.g. enable an account-abstraction-like mechanism. * No gas is refunded as ETH (either by not refunding, or by utilizing the fact that the gas price of the deposit is `0`). * No transaction priority fee is charged. No payment is made to the block fee-recipient. * No L1-cost fee is charged, as deposits are derived from L1 and do not have to be submitted as data back to it. * No base fee is charged. The total base fee accounting does not change. Note that this includes contract-deployment behavior like with regular transactions, and gas metering is the same (with the exception of fee related changes above), including metering of intrinsic gas.
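The execution order above — unconditional mint first, then regular processing whose effects (but not the mint) roll back on failure — can be modeled with a toy state machine. This is illustrative only: `run_evm` is a stand-in for full EVM processing, and the state is just nested dicts.

```python
import copy


def apply_deposit(state: dict, tx: dict, run_evm) -> bool:
    """Toy model of deposit execution ordering (a simplified sketch, not
    the real EVM). Returns the receipt status: True on success."""
    # 1. Credit the mint unconditionally; it survives any later failure.
    state["balances"][tx["from"]] = state["balances"].get(tx["from"], 0) + tx["mint"]
    snapshot = copy.deepcopy(state)
    try:
        run_evm(state, tx)  # stand-in for regular EIP-1559-style processing
        ok = True
    except Exception:
        # 2. On failure, roll the world state back to just after the mint.
        state.clear()
        state.update(snapshot)
        ok = False
    # 3. The sender nonce is incremented either way, so a failed deposit
    #    looks like a native EVM failure.
    state["nonces"][tx["from"]] = state["nonces"].get(tx["from"], 0) + 1
    return ok
```

The key property the sketch preserves is that a failing deposit still appears on chain: its mint and nonce increment persist while the call's effects do not.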
Any non-EVM state-transition error emitted by the EVM execution is processed in a special way: * It is transformed into an EVM-error: i.e. the deposit will always be included, but its receipt will indicate a failure if it runs into a non-EVM state-transition error, e.g. failure to transfer the specified `value` amount of ETH due to insufficient account-balance. * The world state is rolled back to the start of the EVM processing, after the minting part of the deposit. * The `nonce` of `from` in the world state is incremented by 1, making the error equivalent to a native EVM failure. Note that a previous `nonce` increment may have happened during EVM processing, but this would be rolled back first. Finally, after the above processing, the execution post-processing runs the same: i.e. the gas pool and receipt are processed identically to a regular transaction. The receipt of deposit transactions is extended with an additional `depositNonce` value, storing the `nonce` value of the `from` sender as registered *before* the EVM processing. Note that the gas used as stated by the execution output is subtracted from the gas pool. Note for application developers: because `CALLER` and `ORIGIN` are set to `from`, the semantics of using the `tx.origin == msg.sender` check will not work to determine whether or not a caller is an EOA during a deposit transaction. Instead, the check could only be useful for identifying the first call in the L2 deposit transaction. However, this check does still satisfy the common case in which developers are using this check to ensure that the `CALLER` is unable to execute code before and after the call. ##### Nonce Handling Despite the lack of signature validation, we still increment the nonce of the `from` account when a deposit transaction is executed.
In the context of a deposit-only rollup, this is not necessary for transaction ordering or replay prevention, however it maintains consistency with the use of nonces during [contract creation][create-nonce]. It may also simplify integration with downstream tooling (such as wallets and block explorers). [create-nonce]: https://github.com/ethereum/execution-specs/blob/617903a8f8d7b50cf71bf1aa733c37897c8d75c1/src/ethereum/frontier/utils/address.py#L40 ### Deposit Receipt Transaction receipts use standard typing as per [EIP-2718]. The Deposit transaction receipt type is equal to a regular receipt, but extended with an optional `depositNonce` field. The RLP-encoded consensus-enforced fields are: * `postStateOrStatus` (standard): this contains the transaction status, see [EIP-658]. * `cumulativeGasUsed` (standard): gas used in the block thus far, including this transaction. * The actual gas used is derived from the difference in `CumulativeGasUsed` with the previous transaction. * This accounts for the actual gas usage by the deposit, like regular transactions. * `bloom` (standard): bloom filter of the transaction logs. * `logs` (standard): log events emitted by the EVM processing. * `depositNonce` (unique extension): Optional field. The deposit transaction persists the nonce used during execution. * `depositNonceVersion` (unique extension): Optional field. The value must be 1 if the field is present. * Before Canyon, the `depositNonce` & `depositNonceVersion` fields must always be omitted. * With Canyon, the `depositNonce` & `depositNonceVersion` fields must always be included. The receipt API responses utilize the receipt changes for more accurate response data: * The `depositNonce` is included in the receipt JSON data in API responses * For contract-deployments (when `to == null`), the `depositNonce` helps derive the correct `contractAddress` meta-data, instead of assuming the nonce was zero.
* The `cumulativeGasUsed` accounts for the actual gas usage, as metered in the EVM processing. [EIP-658]: https://eips.ethereum.org/EIPS/eip-658 ### L1 Attributes Deposited Transaction [l1-attr-deposit]: #l1-attributes-deposited-transaction An [L1 attributes deposited transaction][g-l1-attr-deposit] is a deposit transaction sent to the [L1 attributes predeployed contract][predeploy]. This transaction MUST have the following values: 1. `from` is `0xdeaddeaddeaddeaddeaddeaddeaddeaddead0001` (the address of the [L1 Attributes depositor account][depositor-account]) 2. `to` is `0x4200000000000000000000000000000000000015` (the address of the [L1 attributes predeployed contract][predeploy]). 3. `mint` is `0` 4. `value` is `0` 5. `gasLimit` is set to `1,000,000`. 6. `isSystemTx` is set to `false`. 7. `data` is an encoded call to the [L1 attributes predeployed contract][predeploy] that depends on the upgrades that are active (see below). This system-initiated transaction for L1 attributes is not charged any ETH for its allocated `gasLimit`, as it is considered part of state-transition processing. #### L1 Attributes Deposited Transaction Calldata ##### L1 Attributes - Bedrock, Canyon, Delta The `data` field of the L1 attributes deposited transaction is an [ABI][ABI] encoded call to the `setL1BlockValues()` function with correct values associated with the corresponding L1 block (cf. [reference implementation][l1-attr-ref-implem]). ### Special Accounts on L2 The L1 attributes deposit transaction involves two special purpose accounts: 1. The L1 attributes depositor account 2. The L1 attributes predeployed contract #### L1 Attributes Depositor Account [depositor-account]: #l1-attributes-depositor-account The depositor account is an [EOA][g-eoa] with no known private key. It has the address `0xdeaddeaddeaddeaddeaddeaddeaddeaddead0001`. Its value is returned by the `CALLER` and `ORIGIN` opcodes during execution of the L1 attributes deposited transaction. 
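The mandated field values above can be captured directly. A small sketch; the `data` argument is the ABI-encoded attributes call, left opaque here.

```python
DEPOSITOR_ACCOUNT = "0xdeaddeaddeaddeaddeaddeaddeaddeaddead0001"
L1_ATTRIBUTES_PREDEPLOY = "0x4200000000000000000000000000000000000015"


def l1_attributes_deposit(data: bytes) -> dict:
    """Build the fixed field values mandated for the L1 attributes
    deposited transaction; `data` is the ABI-encoded attributes call."""
    return {
        "from": DEPOSITOR_ACCOUNT,
        "to": L1_ATTRIBUTES_PREDEPLOY,
        "mint": 0,
        "value": 0,
        "gasLimit": 1_000_000,
        "isSystemTx": False,
        "data": data,
    }
```

Every field except `data` is a protocol constant, which is what makes this transaction cheap to validate during derivation.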
#### L1 Attributes Predeployed Contract [predeploy]: #l1-attributes-predeployed-contract A predeployed contract on L2 at address `0x4200000000000000000000000000000000000015`, which holds certain block variables from the corresponding L1 block in storage, so that they may be accessed during the execution of the subsequent deposited transactions. The predeploy stores the following values: * L1 block attributes: * `number` (`uint64`) * `timestamp` (`uint64`) * `basefee` (`uint256`) * `hash` (`bytes32`) * `sequenceNumber` (`uint64`): This equals the L2 block number relative to the start of the epoch, i.e. the L2 block distance to the L2 block height that the L1 attributes last changed, and reset to 0 at the start of a new epoch. * System configurables tied to the L1 block, see [System configuration specification](../consensus/derivation.md#system-configuration): * `batcherHash` (`bytes32`): A versioned commitment to the batch-submitter(s) currently operating. * `overhead` (`uint256`): The L1 fee overhead to apply to L1 cost computation of transactions in this L2 block. * `scalar` (`uint256`): The L1 fee scalar to apply to L1 cost computation of transactions in this L2 block. The contract implements an authorization scheme, such that it only accepts state-changing calls from the [depositor account][depositor-account]. The contract has the following solidity interface, and can be interacted with according to the [contract ABI specification][ABI]. [ABI]: https://docs.soliditylang.org/en/v0.8.10/abi-spec.html ##### L1 Attributes Predeployed Contract: Reference Implementation [l1-attr-ref-implem]: #l1-attributes-predeployed-contract-reference-implementation A reference implementation of the L1 Attributes predeploy contract can be found in [L1Block.sol]. 
[L1Block.sol]: https://github.com/ethereum-optimism/optimism/blob/d48b45954c381f75a13e61312da68d84e9b41418/packages/contracts-bedrock/src/L2/L1Block.sol ### User-Deposited Transactions [user-deposited]: #user-deposited-transactions [User-deposited transactions][g-user-deposited] are [deposited transactions][deposited-tx-type] generated by the [L2 Chain Derivation][g-derivation] process. The content of each user-deposited transaction is determined by the corresponding `TransactionDeposited` event emitted by the [deposit contract][deposit-contract] on L1. 1. `from` is unchanged from the emitted value (though it may have been transformed to an alias in `OptimismPortal`, the deposit feed contract). 2. `to` is any 20-byte address (including the zero address) * In case of a contract creation (cf. `isCreation`), this address is set to `null`. 3. `mint` is set to the emitted value. 4. `value` is set to the emitted value. 5. `gasLimit` is unchanged from the emitted value. It must be at least 21000. 6. `isCreation` is set to `true` if the transaction is a contract creation, `false` otherwise. 7. `data` is unchanged from the emitted value. Depending on the value of `isCreation` it is handled as either calldata or contract initialization code. 8. `isSystemTx` is set by the rollup node for certain transactions that have unmetered execution. It is `false` for user-deposited transactions. #### Deposit Contract [deposit-contract]: #deposit-contract The deposit contract is deployed to L1. Deposited transactions are derived from the values in the `TransactionDeposited` event(s) emitted by the deposit contract. The deposit contract is responsible for maintaining the [guaranteed gas market](#guaranteed-gas-fee-market), charging deposits for gas to be used on L2, and ensuring that the total amount of guaranteed gas in a single L1 block does not exceed the L2 block gas limit. The deposit contract handles two special cases: 1.
A contract creation deposit, which is indicated by setting the `isCreation` flag to `true`. In the event that the `to` address is non-zero, the contract will revert. 2. A call from a contract account, in which case the `from` value is transformed to its L2 [alias][address-aliasing]. ##### Address Aliasing [address-aliasing]: #address-aliasing If the caller is a contract, the address will be transformed by adding `0x1111000000000000000000000000000000001111` to it. The math is `unchecked` and done on a Solidity `uint160` so the value will overflow. This prevents attacks in which a contract on L1 has the same address as a contract on L2 but doesn't have the same code. We can safely ignore this for EOAs because they're guaranteed to have the same "code" (i.e. no code at all). This also makes it possible for users to interact with contracts on L2 even when the Sequencer is down. ##### Deposit Contract Implementation: Optimism Portal A reference implementation of the deposit contract can be found in [OptimismPortal.sol]. [OptimismPortal.sol]: https://github.com/ethereum-optimism/optimism/blob/d48b45954c381f75a13e61312da68d84e9b41418/packages/contracts-bedrock/src/L1/OptimismPortal.sol ### Guaranteed Gas Fee Market [Deposited transactions][g-deposited] are transactions on L2 that are initiated on L1. The gas that they use on L2 is bought on L1 via a gas burn (or a direct payment in the future). We maintain a fee market and hard cap on the amount of gas provided to all deposits in a single L1 block. The gas provided to deposited transactions is sometimes called "guaranteed gas". The gas provided to deposited transactions is unique in the regard that it is not refundable. It cannot be refunded as it is sometimes paid for with a gas burn and there may not be any ETH left to refund. The **guaranteed gas** is composed of a gas stipend, and of any guaranteed gas the user would like to purchase (on L1) on top of that. Guaranteed gas on L2 is bought in the following manner. 
An L2 gas price is calculated via an EIP-1559-style algorithm. The total amount of ETH required to buy that gas is then calculated as (`guaranteed gas * L2 deposit base fee`). The contract then either accepts that amount of ETH (in a future upgrade) or, as the only method right now, burns an amount of L1 gas that corresponds to the L2 cost (`L2 cost / L1 base fee`). The L2 gas price for guaranteed gas is not synchronized with the base fee on L2 and will likely be different.

#### Gas Stipend

To offset the gas spent on the deposit event, we credit `gas spent * L1 base fee` ETH towards the cost of the L2 gas, where `gas spent` is the amount of L1 gas spent processing the deposit. If the ETH value of this credit is greater than the ETH value of the requested guaranteed gas (`requested guaranteed gas * L2 gas price`), no L1 gas is burnt.

#### Default Values

| Variable                          | Value                                          |
| --------------------------------- | ---------------------------------------------- |
| `MAX_RESOURCE_LIMIT`              | 20,000,000                                     |
| `ELASTICITY_MULTIPLIER`           | 10                                             |
| `BASE_FEE_MAX_CHANGE_DENOMINATOR` | 8                                              |
| `MINIMUM_BASE_FEE`                | 1 gwei                                         |
| `MAXIMUM_BASE_FEE`                | `type(uint128).max`                            |
| `SYSTEM_TX_MAX_GAS`               | 1,000,000                                      |
| `TARGET_RESOURCE_LIMIT`           | `MAX_RESOURCE_LIMIT` / `ELASTICITY_MULTIPLIER` |

#### Limiting Guaranteed Gas

The total amount of guaranteed gas that can be bought in a single L1 block must be limited to prevent a denial of service attack against L2, and to ensure the total amount of guaranteed gas stays below the L2 block gas limit. We set a guaranteed gas limit of `MAX_RESOURCE_LIMIT` gas per L1 block and a target of `MAX_RESOURCE_LIMIT / ELASTICITY_MULTIPLIER` gas per L1 block. These numbers enable occasional large transactions while staying within our target and maximum gas usage on L2. Because the amount of guaranteed L2 gas that can be purchased in a single block is limited, we implement an EIP-1559-style fee market to reduce congestion on deposits.
By setting the limit at a multiple of the target, we enable deposits to temporarily use more L2 gas at a greater cost.

```python
# Pseudocode to update the L2 deposit base fee and cap the amount of guaranteed gas
# bought in a block. Calling code must handle the gas burn and validity checks on
# the ability of the account to afford this gas.

# prev_base_fee is a u128, prev_bought_gas and prev_num are u64s
prev_base_fee, prev_bought_gas, prev_num = <values from previous block>
now_num = block.number

# Clamp the full base fee to a specific range. The minimum value in the range should be around 100-1000
# to enable faster responses in the base fee. This replaces the `max` mechanism in the ethereum 1559
# implementation (it also serves to enable the base fee to increase if it is very small).
def clamp(v: i256, min: u128, max: u128) -> u128:
    if v < i256(min):
        return min
    elif v > i256(max):
        return max
    else:
        return u128(v)

# If this is a new block, update the base fee and reset the total gas.
# If not, just update the total gas.
if prev_num == now_num:
    now_base_fee = prev_base_fee
    now_bought_gas = prev_bought_gas + requested_gas
elif prev_num != now_num:
    # Width extension and conversion to signed integer math
    gas_used_delta = int128(prev_bought_gas) - int128(TARGET_RESOURCE_LIMIT)
    # Use truncating (round to 0) division - solidity's default.
    # Sign extend gas_used_delta & prev_base_fee to 256 bits to avoid overflows here.
    base_fee_per_gas_delta = prev_base_fee * gas_used_delta / TARGET_RESOURCE_LIMIT / BASE_FEE_MAX_CHANGE_DENOMINATOR
    now_base_fee_wide = prev_base_fee + base_fee_per_gas_delta
    now_base_fee = clamp(now_base_fee_wide, min=MINIMUM_BASE_FEE, max=UINT_128_MAX_VALUE)
    now_bought_gas = requested_gas

    # If we skipped multiple blocks between the previous block and now, update the base fee again.
    # This is not exactly the same as iterating the above function, but quite close for reasonable
    # gas target values. It is also constant time wrt the number of missed blocks, which is important
    # for keeping gas usage stable.
    if prev_num + 1 < now_num:
        n = now_num - prev_num - 1
        # Apply a 7/8 reduction to prev_base_fee for the n empty blocks in a row.
        now_base_fee_wide = now_base_fee * pow(1 - (1 / BASE_FEE_MAX_CHANGE_DENOMINATOR), n)
        now_base_fee = clamp(now_base_fee_wide, min=MINIMUM_BASE_FEE, max=UINT_128_MAX_VALUE)

require(now_bought_gas < MAX_RESOURCE_LIMIT)

store_values(now_base_fee, now_bought_gas, now_num)
```

#### Rationale for burning L1 Gas

There must be a sybil resistance mechanism for usage of the network. If it is very cheap to get guaranteed gas on L2, then it would be possible to spam the network. Burning a dynamic amount of gas on L1 acts as a sybil resistance mechanism, as it becomes more expensive with more demand.

If we collect ETH directly to pay for L2 gas, every (indirect) caller of the deposit function will need to be marked with the payable selector. This won't be possible for many existing projects. Unfortunately, burning gas is quite wasteful. As such, we will provide two options to buy L2 gas:

1. Burn L1 gas.
2. Send ETH to the Optimism Portal (not yet supported).

The payable version (option 2) will likely have a discount applied to it (or conversely, option 1 has a premium applied to it). For the initial release of bedrock, only option 1 is supported.

#### On Preventing Griefing Attacks

The cost of purchasing all of the deposit gas in every block must be expensive enough to prevent attackers from griefing all deposits to the network. An attacker would observe a deposit in the mempool and frontrun it with a deposit that purchases enough gas such that the other deposit reverts. The smaller the max resource limit is, the easier this attack is to pull off. This attack is mitigated by having a large resource limit as well as a large elasticity multiplier.
This means that the target resource usage is kept small, giving a lot of room for the deposit base fee to rise when the max resource limit is being purchased.

This attack should be too expensive to pull off in practice, but if an extremely wealthy adversary does decide to grief network deposits for an extended period of time, efforts will be made to ensure that deposits are able to be processed on the network.

## Cross Domain Messengers

### Overview

The cross domain messengers are responsible for providing a higher level API for developers who are interested in sending cross domain messages. They allow for the ability to replay cross domain messages and sit directly on top of the lower level system contracts responsible for cross domain messaging on L1 and L2.

The `CrossDomainMessenger` is extended to create both an `L1CrossDomainMessenger` as well as an `L2CrossDomainMessenger`. These contracts are then extended with their legacy APIs to provide backwards compatibility for applications that integrated before the Bedrock system upgrade.

The `L2CrossDomainMessenger` is a predeploy contract located at `0x4200000000000000000000000000000000000007`.
The base `CrossDomainMessenger` interface is:

```solidity
interface CrossDomainMessenger {
    event FailedRelayedMessage(bytes32 indexed msgHash);
    event RelayedMessage(bytes32 indexed msgHash);
    event SentMessage(address indexed target, address sender, bytes message, uint256 messageNonce, uint256 gasLimit);
    event SentMessageExtension1(address indexed sender, uint256 value);

    function MESSAGE_VERSION() external view returns (uint16);
    function MIN_GAS_CALLDATA_OVERHEAD() external view returns (uint64);
    function MIN_GAS_CONSTANT_OVERHEAD() external view returns (uint64);
    function MIN_GAS_DYNAMIC_OVERHEAD_DENOMINATOR() external view returns (uint64);
    function MIN_GAS_DYNAMIC_OVERHEAD_NUMERATOR() external view returns (uint64);
    function OTHER_MESSENGER() external view returns (address);
    function baseGas(bytes memory _message, uint32 _minGasLimit) external pure returns (uint64);
    function failedMessages(bytes32) external view returns (bool);
    function messageNonce() external view returns (uint256);
    function relayMessage(
        uint256 _nonce,
        address _sender,
        address _target,
        uint256 _value,
        uint256 _minGasLimit,
        bytes memory _message
    ) external payable returns (bytes memory returnData_);
    function sendMessage(address _target, bytes memory _message, uint32 _minGasLimit) external payable;
    function successfulMessages(bytes32) external view returns (bool);
    function xDomainMessageSender() external view returns (address);
}
```

### Message Passing

The `sendMessage` function is used to send a cross domain message. To trigger the execution on the other side, the `relayMessage` function is called. Successful messages have their hash stored in the `successfulMessages` mapping, while unsuccessful messages have their hash stored in the `failedMessages` mapping.

The user experience when sending from L1 to L2 is a bit different than when sending a transaction from L2 to L1. When going from L1 into L2, the user does not need to call `relayMessage` on L2 themselves.
The user pays for L2 gas on L1 and the transaction is automatically pulled into L2, where it is executed. When going from L2 into L1, the user proves their withdrawal on the `OptimismPortal`, waits for the finalization window to pass, and then finalizes the withdrawal on the `OptimismPortal`, which calls `relayMessage` on the `L1CrossDomainMessenger` to finalize the withdrawal.

### Upgradability

The L1 and L2 cross domain messengers should be deployed behind upgradable proxies. This will allow for updating the message version.

### Message Versioning

Messages are versioned based on the first 2 bytes of their nonce. Depending on the version, messages can have a different serialization and hashing scheme. The first two bytes of the nonce are reserved for version metadata because a version field was not originally included in the messages themselves, but a `uint256` nonce is so large that we can very easily pack additional data into that field.

#### Message Version 0

```solidity
abi.encodeWithSignature(
    "relayMessage(address,address,bytes,uint256)",
    _target,
    _sender,
    _message,
    _messageNonce
);
```

#### Message Version 1

```solidity
abi.encodeWithSignature(
    "relayMessage(uint256,address,address,uint256,uint256,bytes)",
    _nonce,
    _sender,
    _target,
    _value,
    _gasLimit,
    _data
);
```

### Backwards Compatibility Notes

An older version of the messenger contracts had the concept of blocked messages in a `blockedMessages` mapping. This functionality was removed from the messengers because a smart attacker could get around any message blocking attempts. It also saves gas on finalizing withdrawals.

The concept of a "relay id" and the `relayedMessages` mapping was removed. It was built as a way to fund third parties who relayed messages on behalf of users, but it was improperly implemented: it was impossible to know whether the relayed message actually succeeded.
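The message versioning scheme described above — a 2-byte version packed into the high bits of a `uint256` nonce — can be sketched in Python. This is a minimal illustration of the packing convention only; the function names are invented for this example, not taken from any contract:

```python
NONCE_MASK = (1 << 240) - 1  # low 240 bits hold the raw nonce value

def encode_versioned_nonce(nonce: int, version: int) -> int:
    """Pack the 2-byte version into the top 16 bits of a uint256 nonce."""
    assert 0 <= version < 2**16 and 0 <= nonce < 2**240
    return (version << 240) | nonce

def decode_versioned_nonce(versioned: int) -> tuple:
    """Return (nonce, version) extracted from a versioned uint256 nonce."""
    return versioned & NONCE_MASK, versioned >> 240
```

A version-0 message keeps its nonce unchanged (`encode_versioned_nonce(n, 0) == n`), which is why the scheme could be introduced without breaking messages sent before versioning existed.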
## Withdrawals

[g-deposits]: ../../reference/glossary.md#deposits
[g-withdrawal]: ../../reference/glossary.md#withdrawal
[g-relayer]: ../../reference/glossary.md#withdrawals
[g-execution-engine]: ../../reference/glossary.md#execution-engine

### Overview

[Withdrawals][g-withdrawal] are cross domain transactions which are initiated on L2, and finalized by a transaction executed on L1. Notably, withdrawals may be used by an L2 account to call an L1 contract, or to transfer ETH from an L2 account to an L1 account.

**Vocabulary note**: *withdrawal* can refer to the transaction at various stages of the process, but we introduce more specific terms to differentiate:

* A *withdrawal initiating transaction* refers specifically to a transaction on L2 sent to the Withdrawals predeploy.
* A *withdrawal proving transaction* refers specifically to an L1 transaction which proves the withdrawal is correct (that it has been included in a merkle tree whose root is available on L1).
* A *withdrawal finalizing transaction* refers specifically to an L1 transaction which finalizes and relays the withdrawal.

Withdrawals are initiated on L2 via a call to the Message Passer predeploy contract, which records the important properties of the message in its storage. Withdrawals are proven on L1 via a call to the `OptimismPortal`, which proves the inclusion of this withdrawal message. Withdrawals are finalized on L1 via a call to the `OptimismPortal` contract, which verifies that the fault challenge period has passed since the withdrawal message was proved.

In this way, withdrawals differ from [deposits][g-deposits], which make use of a special transaction type in the [execution engine][g-execution-engine] client. Rather, withdrawal transactions must use smart contracts on L1 for finalization.
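The initiate → prove → finalize lifecycle above can be sketched as bookkeeping logic. This is an illustrative Python sketch, not the contract implementation; the class and method names are invented, and only the 7 day challenge period (described in the flow below) is taken from this document:

```python
CHALLENGE_PERIOD = 7 * 24 * 60 * 60  # 7 days, in seconds

class WithdrawalTracker:
    """Toy model of the OptimismPortal's prove/finalize bookkeeping."""

    def __init__(self):
        self.proven_at = {}    # withdrawal hash -> timestamp of proving tx
        self.finalized = set() # withdrawal hashes already finalized

    def prove(self, withdrawal_hash: str, now: int) -> None:
        # Proving records the timestamp; re-proving is allowed if the
        # corresponding output root changes (not modeled here).
        self.proven_at[withdrawal_hash] = now

    def finalize(self, withdrawal_hash: str, now: int) -> bool:
        proven = self.proven_at.get(withdrawal_hash)
        if proven is None or now - proven < CHALLENGE_PERIOD:
            return False  # not proven yet, or challenge period still running
        if withdrawal_hash in self.finalized:
            return False  # already finalized: no replays
        self.finalized.add(withdrawal_hash)
        return True
```

The two rejection branches correspond to the checks the portal performs before forwarding a withdrawal: the withdrawal must have been proven, the challenge period must have elapsed, and the hash must not have been finalized before.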
### Withdrawal Flow

We first describe the end to end flow of initiating and finalizing a withdrawal:

#### On L2

An L2 account sends a withdrawal message (and possibly also ETH) to the `L2ToL1MessagePasser` predeploy contract. This is a very simple contract that stores the hash of the withdrawal data.

#### On L1

1. A [relayer][g-relayer] submits a withdrawal proving transaction with the required inputs to the `OptimismPortal` contract. The relayer is not necessarily the same entity which initiated the withdrawal on L2. These inputs include the withdrawal transaction data, inclusion proofs, and a block number. The block number must be one for which an L2 output root exists, which commits to the withdrawal as registered on L2.
2. The `OptimismPortal` contract retrieves the output root for the given block number from the `L2OutputOracle`'s `getL2Output()` function, and performs the remainder of the verification process internally.
3. If proof verification fails, the call reverts. Otherwise the hash is recorded to prevent it from being re-proven. Note that the withdrawal can be proven more than once if the corresponding output root changes.
4. After the withdrawal is proven, it enters a 7 day challenge period, allowing time for other network participants to challenge the integrity of the corresponding output root.
5. Once the challenge period has passed, a relayer submits a withdrawal finalizing transaction to the `OptimismPortal` contract. The relayer doesn't need to be the same entity that initiated the withdrawal on L2.
6. The `OptimismPortal` contract receives the withdrawal transaction data and verifies that the withdrawal has both been proven and passed the challenge period.
7. If the requirements are not met, the call reverts. Otherwise the call is forwarded, and the hash is recorded to prevent it from being replayed.

### The L2ToL1MessagePasser Contract

A withdrawal is initiated by calling the L2ToL1MessagePasser contract's `initiateWithdrawal` function.
The L2ToL1MessagePasser is a simple predeploy contract at `0x4200000000000000000000000000000000000016` which stores messages to be withdrawn.

```js
interface L2ToL1MessagePasser {
    event MessagePassed(
        uint256 indexed nonce, // this is a global nonce value for all withdrawal messages
        address indexed sender,
        address indexed target,
        uint256 value,
        uint256 gasLimit,
        bytes data,
        bytes32 withdrawalHash
    );

    event WithdrawerBalanceBurnt(uint256 indexed amount);

    function burn() external;

    function initiateWithdrawal(address _target, uint256 _gasLimit, bytes memory _data) payable external;

    function messageNonce() public view returns (uint256);

    function sentMessages(bytes32) view external returns (bool);
}
```

The `MessagePassed` event includes all of the data that is hashed and stored in the `sentMessages` mapping, as well as the hash itself.

#### Addresses are not Aliased on Withdrawals

When a contract makes a deposit, the sender's address is [aliased](deposits.md#address-aliasing). The same is not true of withdrawals, which do not modify the sender's address. The difference is that:

* on L2, the deposit sender's address is returned by the `CALLER` opcode, meaning a contract cannot easily tell if the call originated on L1 or L2, whereas
* on L1, the withdrawal sender's address is accessed by calling the `l2Sender()` function on the `OptimismPortal` contract.

Calling `l2Sender()` removes any ambiguity about which domain the call originated from. Still, developers will need to recognize that having the same address does not imply that a contract on L2 will behave the same as a contract on L1.

### The Optimism Portal Contract

The Optimism Portal serves as both the entry and exit point to the Optimism L2.
It is a contract which inherits from the [OptimismPortal](deposits.md#deposit-contract) contract, and in addition provides the following interface for withdrawals:

* [`WithdrawalTransaction` type]
* [`OutputRootProof` type]

```js
interface OptimismPortal {
    event WithdrawalFinalized(bytes32 indexed withdrawalHash, bool success);

    function l2Sender() external returns (address);

    function proveWithdrawalTransaction(
        Types.WithdrawalTransaction memory _tx,
        uint256 _l2OutputIndex,
        Types.OutputRootProof calldata _outputRootProof,
        bytes[] calldata _withdrawalProof
    ) external;

    function finalizeWithdrawalTransaction(
        Types.WithdrawalTransaction memory _tx
    ) external;
}
```

### Withdrawal Verification and Finalization

The following inputs are required to prove and finalize a withdrawal:

* Withdrawal transaction data:
  * `nonce`: Nonce for the provided message.
  * `sender`: Message sender address on L2.
  * `target`: Target address on L1.
  * `value`: ETH to send to the target.
  * `data`: Data to send to the target.
  * `gasLimit`: Gas to be forwarded to the target.
* Proof and verification data:
  * `l2OutputIndex`: The index in the L2 outputs where the applicable output root may be found.
  * `outputRootProof`: Four `bytes32` values which are used to derive the output root.
  * `withdrawalProof`: An inclusion proof for the given withdrawal in the L2ToL1MessagePasser contract.

These inputs must satisfy the following conditions:

1. The `l2OutputIndex` must be the index in the L2 outputs that contains the applicable output root.
2. `L2OutputOracle.getL2Output(l2OutputIndex)` returns a non-zero `OutputProposal`.
3. The keccak256 hash of the `outputRootProof` values is equal to the `outputRoot`.
4. The `withdrawalProof` is a valid inclusion proof demonstrating that a hash of the withdrawal transaction data is contained in the storage of the L2ToL1MessagePasser contract on L2.

### Security Considerations

#### Key Properties of Withdrawal Verification

1. It should not be possible to 'double spend' a withdrawal, i.e. to relay a withdrawal on L1 which does not correspond to a message initiated on L2. For reference, see [this writeup][polygon-dbl-spend] of a vulnerability of this type found on Polygon.

   [polygon-dbl-spend]: https://gerhard-wagner.medium.com/double-spending-bug-in-polygons-plasma-bridge-2e0954ccadf1

2. For each withdrawal initiated on L2 (i.e. with a unique `messageNonce()`), the following properties must hold:
   1. It should only be possible to prove the withdrawal once, unless the `outputRoot` for the withdrawal has changed.
   2. It should only be possible to finalize the withdrawal once.
   3. It should not be possible to relay the message with any of its fields modified, i.e.:
      1. Modifying the `sender` field would enable a 'spoofing' attack.
      2. Modifying the `target`, `data`, or `value` fields would enable an attacker to dangerously change the intended outcome of the withdrawal.
      3. Modifying the `gasLimit` could make the cost of relaying too high, or allow the relayer to cause execution to fail (out of gas) in the `target`.

#### Handling Successfully Verified Messages That Fail When Relayed

If the execution of the relayed call fails in the `target` contract, it is unfortunately not possible to determine whether or not it was 'supposed' to fail, and whether or not it should be 'replayable'. For this reason, and to minimize complexity, we have not provided any replay functionality; this may be implemented in external utility contracts if desired.
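The no-tampering property above holds because the withdrawal proof commits to every field of the withdrawal at once. A toy illustration of this idea follows; the real commitment is keccak256 over the ABI-encoded withdrawal fields (see the `WithdrawalTransaction` type), and SHA-256 over a JSON encoding stands in here purely to show that changing any single field changes the commitment:

```python
import hashlib
import json

def withdrawal_commitment(nonce, sender, target, value, gas_limit, data):
    """Toy stand-in for the withdrawal hash: commit to all fields together.
    NOT the protocol's hashing scheme (that is keccak256 over ABI encoding)."""
    fields = [nonce, sender, target, value, gas_limit, data]
    return hashlib.sha256(json.dumps(fields).encode()).hexdigest()
```

Because the commitment covers `sender`, `target`, `value`, `gasLimit`, and `data` jointly, a relayer who alters any field produces a hash that was never stored by the L2ToL1MessagePasser, so the inclusion proof fails.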
[`WithdrawalTransaction` type]: https://github.com/ethereum-optimism/optimism/blob/08daf8dbd38c9ffdbd18fc9a211c227606cdb0ad/packages/contracts-bedrock/src/libraries/Types.sol#L62-L69
[`OutputRootProof` type]: https://github.com/ethereum-optimism/optimism/blob/08daf8dbd38c9ffdbd18fc9a211c227606cdb0ad/packages/contracts-bedrock/src/libraries/Types.sol#L25-L30

#### OptimismPortal can send arbitrary messages on L1

The `L2ToL1MessagePasser` contract's `initiateWithdrawal` function accepts a `_target` address and `_data` bytes, which are passed to a `CALL` opcode on L1 when `finalizeWithdrawalTransaction` is called after the challenge period. This means that, by design, the `OptimismPortal` contract can be used to send arbitrary transactions on L1, with the `OptimismPortal` as the `msg.sender`.

This means users of the `OptimismPortal` contract should be careful what permissions they grant to the portal. For example, any ERC20 tokens mistakenly sent to the `OptimismPortal` contract are essentially lost, as they can be claimed by anybody that pre-approves transfers of this token out of the portal, using the L2 to initiate the approval and the L1 to prove and finalize the approval (after the challenge period).