op-alloy

Welcome to the hands-on guide for getting started with op-alloy!

op-alloy connects applications to the OP Stack, leveraging high-performance types, traits, and middleware from Alloy.

📖 Development Status

op-alloy is in active development and is not yet ready for use in production. During development, this book will evolve quickly and may contain inaccuracies. Please open an issue if you find any errors or have any suggestions for improvements, and feel free to contribute to the project!
Sections

Getting Started

To get started with op-alloy, add its crates as a dependency and take your first steps.

Building with op-alloy

Walk through the types and functionality available in the different op-alloy crates.

Examples

Get hands-on experience using op-alloy crates for critical OP Stack functionality.

Contributing

Contributors are welcome! op-alloy is built and maintained by Alloy contributors, members of OP Labs, and the broader open source community. op-alloy follows and expands the OP Stack standards set in the specs. The contributing guide breaks down how the specs integrate with op-alloy and how to contribute to the project.

Licensing

op-alloy is licensed under the combined Apache 2.0 and MIT License, along with a SNAPPY license for its use of snappy encoding.
Installation

op-alloy consists of a number of crates that provide a range of functionality essential for interfacing with any OP Stack chain.

The most succinct way to work with op-alloy is to add the `op-alloy` crate with the `full` feature flag from the command line using Cargo.

```sh
cargo add op-alloy --features full
```

Alternatively, you can add the following to your `Cargo.toml` file.

```toml
op-alloy = { version = "0.5", features = ["full"] }
```

For more fine-grained control over the features you wish to include, you can add the individual crates to your `Cargo.toml` file, or use the `op-alloy` crate with only the features you need.

After `op-alloy` is added as a dependency, the crates re-exported by `op-alloy` are available.
```rust
use op_alloy::{
    genesis::{RollupConfig, SystemConfig},
    consensus::OpBlock,
    protocol::BlockInfo,
    network::Optimism,
    provider::ext::engine::OpEngineApi,
    rpc_types::OpTransactionReceipt,
    rpc_jsonrpsee::traits::RollupNode,
    rpc_types_engine::OpAttributesWithParent,
};
```
Features

The `op-alloy` crate defines many feature flags, including the following.

Default features:

- `std`
- `k256`
- `serde`

The `full` feature flag enables the most commonly used crates.

The `k256` feature flag enables the `k256` feature on the `op-alloy-consensus` crate.

The `arbitrary` feature flag enables arbitrary features on crates, deriving the `Arbitrary` trait on types.

The `serde` feature flag derives serde's `Serialize` and `Deserialize` traits on types.

Additionally, individual crates can be enabled using their shorthand names. For example, the `consensus` feature flag provides the `op-alloy-consensus` re-export so `op-alloy-consensus` types can be used from `op-alloy` through `op_alloy::consensus::InsertTypeHere`.
Crates

- `op-alloy-network`
- `op-alloy-genesis` (supports `no_std`)
- `op-alloy-protocol` (supports `no_std`)
- `op-alloy-provider`
- `op-alloy-consensus` (supports `no_std`)
- `op-alloy-rpc-jsonrpsee`
- `op-alloy-rpc-types` (supports `no_std`)
- `op-alloy-rpc-types-engine` (supports `no_std`)

no_std

As noted above, several crates are `no_std` compatible. When adding `no_std` support to a crate, ensure the `check_no_std` script is updated to include that crate once it is `no_std` compatible.
Building

This section offers in-depth documentation into the various `op-alloy` crates. Some of the primary crates and their types are listed below.

- `op-alloy-genesis` provides the `RollupConfig` and `SystemConfig` types.
- `op-alloy-consensus` provides `OpBlock`, `OpTxEnvelope`, `OpReceiptEnvelope`, `Hardforks`, and more.
- `op-alloy-rpc-types-engine` provides the `OpPayloadAttributes` and `OpAttributesWithParent` types.
- `op-alloy-protocol` provides the `Frame`, `Channel`, and `Batch` types, and more.
Genesis

The genesis crate contains types related to chain genesis. This section contains in-depth sections on building with `op-alloy-genesis` crate types.

Rollup Configs

Rollup configurations are a consensus construct used to configure an Optimism consensus client. When an OP Stack chain is deployed into production or consensus nodes are configured to sync the chain, certain consensus parameters can be configured. These parameters are defined in the OP Stack specs.

Consensus parameters are consumed by OP Stack software through the `RollupConfig` type defined in the `op-alloy-genesis` crate.

RollupConfig Type

The `RollupConfig` type is defined in `op-alloy-genesis`. A predefined rollup config can be loaded for a given L2 chain id using the `rollup_config_from_chain_id` method. An example is shown below.
```rust
use op_alloy_genesis::{OP_MAINNET_CONFIG, rollup_config_from_chain_id};

let op_mainnet_config = rollup_config_from_chain_id(10).expect("infallible");
assert_eq!(OP_MAINNET_CONFIG, op_mainnet_config);
```
The `OP_MAINNET_CONFIG` is one of the predefined rollup configs exported by the `op-alloy-genesis` crate. The predefined configs include the following.

- `OP_MAINNET_CONFIG`
- `OP_SEPOLIA_CONFIG`
- `BASE_MAINNET_CONFIG`
- `BASE_SEPOLIA_CONFIG`
System Config

The system configuration is a set of configurable chain parameters defined in a contract on L1. These parameters can be changed through the system config contract, emitting events that are picked up by the rollup node derivation process. To dive deeper into the System Config, visit the OP Stack Specifications.

SystemConfig Type

The `SystemConfig` type is defined in `op-alloy-genesis`. Parameters defined in the `SystemConfig` are expected to be updated through L1 receipts, using the `update_with_receipts` method.

Holocene Updates

The Holocene hardfork introduced an update to the `SystemConfig` type, adding EIP-1559 parameters to the config. The `SystemConfig` type in `op-alloy-genesis` provides a method called `eip_1559_params` that returns the EIP-1559 parameters encoded as a `B64`.
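To make that `B64` value concrete, here is a minimal standard-library sketch of packing the two EIP-1559 parameters into 8 bytes. The layout assumed here (a big-endian `u32` denominator followed by a big-endian `u32` elasticity) comes from the OP Stack Holocene specs, not from this book; the authoritative accessor remains `SystemConfig::eip_1559_params`.

```rust
// Sketch: pack the Holocene EIP-1559 parameters into an 8-byte value,
// assuming the layout `u32 denominator ++ u32 elasticity` (big-endian).
fn pack_eip_1559_params(denominator: u32, elasticity: u32) -> [u8; 8] {
    let mut out = [0u8; 8];
    out[..4].copy_from_slice(&denominator.to_be_bytes());
    out[4..].copy_from_slice(&elasticity.to_be_bytes());
    out
}

// The inverse: split the 8-byte value back into its two parameters.
fn unpack_eip_1559_params(b: [u8; 8]) -> (u32, u32) {
    let denominator = u32::from_be_bytes(b[..4].try_into().unwrap());
    let elasticity = u32::from_be_bytes(b[4..].try_into().unwrap());
    (denominator, elasticity)
}

fn main() {
    let packed = pack_eip_1559_params(250, 6);
    // The round trip recovers the original parameters.
    assert_eq!(unpack_eip_1559_params(packed), (250, 6));
}
```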
Consensus

The `op-alloy-consensus` crate provides an Optimism consensus interface. It contains constants, types, and functions for implementing Optimism EL consensus and communication. This includes an extended `OpTxEnvelope` type with deposit transactions, and receipts containing OP Stack specific fields (`deposit_nonce` and `deposit_receipt_version`).

In general, a type belongs in this crate if it exists in the `alloy-consensus` crate but was modified from the base Ethereum protocol in the OP Stack. For consensus types that are not modified by the OP Stack, the `alloy-consensus` types should be used instead.

Block

`op-alloy-consensus` exports an Optimism block type, `OpBlock`. This type simply re-uses the `alloy-consensus` block type, with `OpTxEnvelope` as the type of transactions in the block.
Transactions

Optimism extends the Ethereum EIP-2718 transaction envelope to include a deposit variant.

OpTxEnvelope

The `OpTxEnvelope` type is based on Alloy's `TxEnvelope` type. Optimism modifies the `TxEnvelope` to contain the following variants.

- Legacy
- EIP-2930
- EIP-1559
- EIP-7702
- Deposit

Deposit is a custom transaction type that is either an L1 attributes deposit transaction or a user-submitted deposit transaction. Read more about deposit transactions in the specs.

Transaction Types (OpTxType)

The `OpTxType` type enumerates the transaction types using their byte identifiers, represented as a `u8` in Rust.
Receipt Types

Just like `op-alloy-consensus` defines transaction types, it also defines associated receipt types. `OpReceiptEnvelope` defines an EIP-2718 receipt envelope type modified for the OP Stack. It contains the following variants, mapping directly to the `OpTxEnvelope` variants defined above.

- Legacy
- EIP-2930
- EIP-1559
- EIP-7702
- Deposit

There is also an `OpDepositReceipt` type, extending the alloy receipt type with a deposit nonce and deposit receipt version.

Hardforks

Aside from transactions and receipts, `op-alloy-consensus` exports one other core primitive called `Hardforks`. `Hardforks` provides hardfork transaction constructors; that is, it provides methods that return the upgrade transactions for each hardfork.
RPC Engine Types

The `op-alloy-rpc-types-engine` crate provides Optimism types for interfacing with the Engine API in the OP Stack.

Optimism defines a custom payload attributes type called `OpPayloadAttributes`. `OpPayloadAttributes` extends alloy's `PayloadAttributes` with a few fields: transactions, a flag for enabling the tx pool, the gas limit, and EIP-1559 parameters.

Wrapping `OpPayloadAttributes`, the `OpAttributesWithParent` type extends payload attributes with the parent block (referenced as an `L2BlockInfo`) and a flag for whether the associated batch is the last batch in the span.

Optimism also returns a custom type for the `engine_getPayload` request for both V3 and V4 payload envelopes. These are the `OpExecutionPayloadEnvelopeV3` and `OpExecutionPayloadEnvelopeV4` types, which both wrap payload envelope types from `alloy-rpc-types-engine`.
Protocol

The `op-alloy-protocol` crate contains types, constants, and methods specific to Optimism derivation and batch submission. `op-alloy-protocol` supports `no_std`.

Background

Protocol types are primarily used for L2 chain derivation. This section breaks down L2 chain derivation as it relates to types defined in `op-alloy-protocol`: that is, from the raw L2 chain data posted to L1 up to the `Batch` type. And since the `Batch` type naively breaks up into the payload attributes, once executed, it becomes the canonical L2 block! Note, though, that this is an incredibly simplified introduction. It is advised to reference the specs for the most up-to-date information regarding derivation.

The L2 chain is derived from data posted to the L1 chain, either as calldata or blob data. Data is iteratively pulled from each L1 block and translated into the first type defined by `op-alloy-protocol`: the `Frame` type.

`Frame`s are parsed from the raw data. Each `Frame` is a part of a `Channel`, the next type one level up in deriving L2 blocks. `Channel`s have IDs that frames reference. `Frame`s are added iteratively to the `Channel`. Once a `Channel` is ready, it can be used to read a `Batch`.

Since a `Channel` stitches together frames, it contains the raw frame data. In order to turn this `Channel` data into a `Batch`, it needs to be decompressed using the respective (de)compression algorithm (see the channel specs for more detail on this). Once decompressed, the raw data can be decoded into the `Batch` type.
Sections

- Core Derivation Types (discussed above)
- Other Critical Protocol Types
BlockInfo and L2BlockInfo Types

Optimism defines block info types that encapsulate the minimal block header information needed by protocol operations.

BlockInfo

The `BlockInfo` type is straightforward, containing the block hash, number, parent hash, and timestamp.

L2BlockInfo

The `L2BlockInfo` type extends `BlockInfo` for the canonical L2 chain. It additionally contains the "L1 origin", a set of block info for the L1 block from which this L2 block "originated". `L2BlockInfo` provides a `from_block_and_genesis` method to construct the `L2BlockInfo` from a block and `ChainGenesis`.
Frames

`Frame`s are the lowest-level data format in the OP Stack protocol.

Where Frames fit in the OP Stack

Transactions posted to the data availability layer of the rollup contain one or multiple Frames. Frames are chunks of raw data that belong to a given Channel, the next, higher-up data format in the OP Stack protocol. Importantly, a given transaction can contain a variety of frames from different channels, allowing maximum flexibility when breaking up channels into batcher transactions.

Contents of a Frame

A Frame is comprised of the following items.

- A `ChannelId`, a 16-byte identifier for the channel that the given frame belongs to.
- A `number` that identifies the index of the frame within the channel. Frames are 0-indexed and are bound to the `u16` size limit.
- `data`, the raw data within the frame.
- `is_last`, which marks whether the frame is the last within the channel.
Frame Encoding

When frames are posted through a batcher transaction, they are encoded as a contiguous list with a single byte prefix denoting the derivation version. The encoding can be represented as the following concatenated bytes.

encoded = DERIVATION_VERSION_0 ++ encoded_frame_0 ++ encoded_frame_1 ++ ..

Where `DERIVATION_VERSION_0` is a single byte (`0x00`) indicating the derivation version, including how the frames are encoded. Currently, the only supported derivation version is `0`.

`encoded_frame_0`, `encoded_frame_1`, and so on, are all `Frame`s encoded as raw bytes. A single encoded `Frame` can be represented by the following concatenation of its fields.

encoded_frame = channel_id ++ frame_number ++ frame_data_length ++ frame_data ++ is_last

Where `++` represents concatenation. The frame's fields map to its encoding as follows.

- `channel_id` is the 16-byte-long `Frame::id`.
- `frame_number` is the 2-byte-long (`u16`) `Frame::number`.
- `frame_data_length` and `frame_data` provide the necessary details to decode the `Frame::data`, where `frame_data_length` is 4 bytes long (a `u32`).
- `is_last` is a single byte, `Frame::is_last`.
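The layout above can be sketched in plain Rust. This is a hand-rolled stand-in for `Frame::encode`, not the real implementation; the field widths come from the list above, and the big-endian integer encoding is an assumption based on the OP Stack specs.

```rust
// A single byte (0x00) prefixing the frame list, per the derivation specs.
const DERIVATION_VERSION_0: u8 = 0x00;

// Stand-in for the `Frame` type, holding the four fields listed above.
struct RawFrame {
    id: [u8; 16],
    number: u16,
    data: Vec<u8>,
    is_last: bool,
}

impl RawFrame {
    // channel_id ++ frame_number ++ frame_data_length ++ frame_data ++ is_last
    fn encode(&self) -> Vec<u8> {
        let mut out = Vec::with_capacity(16 + 2 + 4 + self.data.len() + 1);
        out.extend_from_slice(&self.id); // channel_id (16 bytes)
        out.extend_from_slice(&self.number.to_be_bytes()); // frame_number (u16)
        out.extend_from_slice(&(self.data.len() as u32).to_be_bytes()); // frame_data_length (u32)
        out.extend_from_slice(&self.data); // frame_data
        out.push(self.is_last as u8); // is_last (1 byte)
        out
    }
}

// Concatenate the version byte and each encoded frame into a payload.
fn encode_payload(frames: &[RawFrame]) -> Vec<u8> {
    let mut out = vec![DERIVATION_VERSION_0];
    for frame in frames {
        out.extend_from_slice(&frame.encode());
    }
    out
}

fn main() {
    let frame = RawFrame { id: [0xee; 16], number: 0, data: vec![1, 2, 3], is_last: true };
    let payload = encode_payload(&[frame]);
    // 1 version byte + (16 + 2 + 4 + 3 + 1) frame bytes.
    assert_eq!(payload.len(), 27);
    assert_eq!(payload[0], DERIVATION_VERSION_0);
    assert_eq!(*payload.last().unwrap(), 1); // is_last byte
}
```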
op-alloy's Frame Type

`op-alloy-protocol` provides the `Frame` type with a few useful methods. `Frame`s can be encoded and decoded using the `Frame::encode` and `Frame::decode` methods. Given the raw batcher transaction data or blob data containing the concatenated derivation version and contiguous list of encoded frames, the `Frame::parse_frame` and `Frame::parse_frames` methods provide ways to decode single and multiple frames, respectively.
Channels

Taken from the OP Stack specs, `Channel`s are a set of sequencer batches (for any L2 blocks) compressed together.

Where Channels fit in the OP Stack

L2 transactions are grouped into what are called sequencer batches. In order to obtain a better compression ratio when posting these L2 transactions to the data availability layer, sequencer batches are compressed together into what is called a Channel. This ultimately reduces data availability costs. As previously noted in the Frame section, Channels may not "fit" in a single batcher transaction posting the data to the data availability layer. In order to accommodate large Channels, a tertiary Frame data type breaks the Channel up into multiple Frames, where a batcher transaction then consists of one or multiple Frames.

Contents of a Channel

A Channel is comprised of the following items.

- A `ChannelId`, a 16-byte identifier for the channel. Notice, Frames also contain a `ChannelId`, which is identical to this identifier, since frames "belong" to a given channel.
- A `BlockInfo` that marks the L1 block at which the channel is "opened".
- The estimated size of the channel (as a `usize`), used to drop the channel if there is a data overflow.
- A `boolean` indicating whether the channel is "closed", i.e. whether the last frame has been buffered and added to the channel.
- A `u16` indicating the highest frame number within the channel.
- The frame number of the last frame (the frame with `is_last` set to `true`).
- A mapping from frame number to the `Frame` itself.
- A `BlockInfo` for the highest L1 inclusion block that a frame was included in.
Channel Encoding

`Channel` encoding is even more straightforward than that of a `Frame`. Simply, a `Channel` is the concatenated list of encoded `Frame`s.

Since each `Frame` contains the `ChannelId` that corresponds to the given `Channel`, constructing a `Channel` is as simple as calling the `Channel::add_frame` method for each of its `Frame`s. Once the `Channel` has ingested all of its `Frame`s, it will be marked as "ready", with the `Channel::is_ready` method returning `true`.

The Channel Type

As discussed above, the `Channel` type is expected to be populated with `Frame`s using its `Channel::add_frame` method. Below we demonstrate constructing a minimal `Channel` using a few frames.
```rust
use op_alloy_protocol::{BlockInfo, Channel, Frame};

// Construct a channel at the given L1 block.
let id = [0xee; 16];
let block = BlockInfo::default();
let mut channel = Channel::new(id, block);

// The channel will consist of 3 frames.
let frame_0 = Frame { id: [0xee; 16], number: 0, ..Default::default() };
let frame_1 = Frame { id: [0xee; 16], number: 1, ..Default::default() };
let frame_2 = Frame { id: [0xee; 16], number: 2, is_last: true, ..Default::default() };

// Add the frames to the channel.
channel.add_frame(frame_0);
channel.add_frame(frame_1);
channel.add_frame(frame_2);

// Since the last frame was ingested,
// the channel should be ready.
assert!(channel.is_ready());
```
There are a few rules when adding a `Frame` to a `Channel`.

- The `Frame`'s id must be the same `ChannelId` as the `Channel`'s.
- `Frame`s cannot be added once a `Channel` is closed.
- `Frame`s within a `Channel` must have distinct numbers.

Notice, `Frame`s can be added out of order so long as the `Channel` is still open and the frame hasn't already been added.
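These rules can be sketched with a minimal stand-in channel. This is not the real `Channel` type, which tracks sizes and inclusion blocks and returns richer error types; it only models the three invariants listed above (plus closing on the last frame).

```rust
use std::collections::BTreeMap;

// Minimal stand-in for `Channel`, keyed by frame number.
struct MiniChannel {
    id: [u8; 16],
    closed: bool,
    frames: BTreeMap<u16, bool>, // frame number -> is_last
}

impl MiniChannel {
    fn new(id: [u8; 16]) -> Self {
        Self { id, closed: false, frames: BTreeMap::new() }
    }

    // Enforce the three rules above when ingesting a frame.
    fn add_frame(&mut self, id: [u8; 16], number: u16, is_last: bool) -> Result<(), &'static str> {
        if id != self.id {
            return Err("frame id does not match channel id");
        }
        if self.closed {
            return Err("channel is closed");
        }
        if self.frames.contains_key(&number) {
            return Err("duplicate frame number");
        }
        self.frames.insert(number, is_last);
        if is_last {
            self.closed = true;
        }
        Ok(())
    }
}

fn main() {
    let id = [0xee; 16];
    let mut ch = MiniChannel::new(id);
    // Out-of-order insertion is fine while the channel is open.
    assert!(ch.add_frame(id, 1, false).is_ok());
    assert!(ch.add_frame(id, 0, false).is_ok());
    // Duplicate numbers and mismatched ids are rejected.
    assert!(ch.add_frame(id, 1, false).is_err());
    assert!(ch.add_frame([0xff; 16], 2, false).is_err());
    // Ingesting the last frame closes the channel to further frames.
    assert!(ch.add_frame(id, 2, true).is_ok());
    assert!(ch.add_frame(id, 3, false).is_err());
}
```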
Batches

A Batch contains a list of transactions to be included in a specific L2 block. Since the Delta hardfork, there are two Batch variants: `SingleBatch` and `SpanBatch`.

Where Batches fit in the OP Stack

The Batch is the highest-level data type in the OP Stack derivation process that comes prior to building payload attributes. A Batch is constructed by taking the raw data from a Channel, decompressing it, and decoding the Batch from this decompressed data.

Alternatively, when looking at the Batch type from a batching perspective rather than the derivation perspective, the Batch type contains a list of L2 transactions and is compressed into the `Channel` type. In turn, the `Channel` is split into frames, which are posted to the data availability layer through batcher transactions.

Contents of a Batch

A `Batch` is either a `SingleBatch` or a `SpanBatch`, each with its own contents. Below, these types are broken down in their respective sections.
SingleBatch Type

The `SingleBatch` type contains the following.

- A `BlockHash` parent hash that represents the parent L2 block.
- A `u64` epoch number that identifies the epoch for this batch.
- A `BlockHash` epoch hash.
- The timestamp for the batch as a `u64`.
- A list of EIP-2718 encoded transactions (represented as `Bytes`).

In order to validate the `SingleBatch` once decoded, the `SingleBatch::check_batch` method should be used, providing the rollup config, L1 blocks, L2 safe head, and inclusion block.
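To make the shape concrete, here is a standard-library mirror of those fields. This is a stand-in for illustration only; the real `SingleBatch` in `op-alloy-protocol` uses alloy primitive types (`BlockHash`, `Bytes`) and carries the RLP codecs and `check_batch` validation.

```rust
// Stand-in mirroring the `SingleBatch` fields listed above.
struct SingleBatchShape {
    parent_hash: [u8; 32],      // parent L2 block hash
    epoch_num: u64,             // epoch (L1 origin) number
    epoch_hash: [u8; 32],       // epoch (L1 origin) hash
    timestamp: u64,             // L2 block timestamp
    transactions: Vec<Vec<u8>>, // EIP-2718 encoded transactions
}

fn main() {
    let batch = SingleBatchShape {
        parent_hash: [0u8; 32],
        epoch_num: 1,
        epoch_hash: [0u8; 32],
        timestamp: 1,
        // One abbreviated EIP-1559 envelope (type byte 0x02) as a placeholder.
        transactions: vec![vec![0x02]],
    };
    assert_eq!(batch.transactions.len(), 1);
    assert_eq!(batch.timestamp, 1);
    let _ = (batch.parent_hash, batch.epoch_num, batch.epoch_hash);
}
```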
SpanBatch Type

The `SpanBatch` type (available since the Delta hardfork) comprises the data needed to build a "span" of multiple L2 blocks. It contains the following data.

- The parent check (the first 20 bytes of the block's parent hash).
- The L1 origin check (the first 20 bytes of the last block's L1 origin hash).
- The genesis timestamp.
- The chain id.
- A list of `SpanBatchElement`s. These are similar to the `SingleBatch` type but don't contain the parent hash or epoch hash for the L2 block.
- Origin bits.
- Block transaction counts.
- Span batch transactions, which contain information for the transactions in the span batch.

Similar to the `SingleBatch` type discussed above, the `SpanBatch` type must be validated once decoded. For this, the `SpanBatch::check_batch` method is available.

With the Holocene hardfork, span batch validation is greatly simplified to be forwards-invalidating instead of backwards-invalidating, so a new `SpanBatch::check_batch_prefix` method provides a way to validate each batch as it is loaded, in an iterative fashion.
Batch Encoding

The first byte of the decompressed channel data is the `BatchType`, which identifies whether the batch is a `SingleBatch` or a `SpanBatch`. From there, the respective type is decoded, and derived in the case of the `SpanBatch`. The `Batch` encoding format for the `SingleBatch` is broken down in the specs.
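A sketch of that discrimination step, assuming the type bytes `0x00` for `SingleBatch` and `0x01` for `SpanBatch` as defined in the OP Stack specs (the real dispatch happens inside `Batch::decode`):

```rust
#[derive(Debug, PartialEq)]
enum BatchKind {
    Single,
    Span,
}

// Inspect the first byte of the decompressed channel data to pick the
// batch variant, rejecting unknown type bytes and empty input.
fn batch_kind(decompressed: &[u8]) -> Result<BatchKind, &'static str> {
    match decompressed.first() {
        Some(0x00) => Ok(BatchKind::Single),
        Some(0x01) => Ok(BatchKind::Span),
        Some(_) => Err("unknown batch type byte"),
        None => Err("empty channel data"),
    }
}

fn main() {
    assert_eq!(batch_kind(&[0x00, 0xaa]), Ok(BatchKind::Single));
    assert_eq!(batch_kind(&[0x01]), Ok(BatchKind::Span));
    assert!(batch_kind(&[]).is_err());
}
```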
The Batch Type

The `Batch` type itself only provides two useful methods.

- `timestamp` returns the timestamp of the `Batch`.
- `decode` constructs a new `Batch` from the provided raw, decompressed batch data and rollup config.

Within each `Batch` variant, the individual types contain more functionality.
Examples

Examples for working with `op-alloy-*` crates.

- Load a Rollup Config for a Chain ID
- Create a new L1BlockInfoTx Hardfork Variant
- Transform Frames to a Batch
- Transform a Batch to Frames

Loading a Rollup Config from a Chain ID

In this section, the code examples demonstrate loading the rollup config for a given L2 chain id. Let's load the rollup config for OP Mainnet, which has chain id 10.
```rust
use op_alloy_genesis::{OP_MAINNET_CONFIG, rollup_config_from_chain_id};

// The chain id for OP Mainnet.
let op_mainnet_id = 10;

// Load a rollup config from the chain id.
let op_mainnet_config = rollup_config_from_chain_id(op_mainnet_id).expect("infallible");

// The chain id should match the hardcoded chain id.
assert_eq!(OP_MAINNET_CONFIG, op_mainnet_config);
```
⚠️ Available Configs

The `rollup_config_from_chain_id` method in `op-alloy-genesis` uses hardcoded rollup configs, and only a few of them are provided. This method and these configs exist for `no_std` environments, where dynamic filesystem loading at runtime is not supported.

In a `std` environment, the superchain crate may be used instead; it dynamically provides all rollup configs from the superchain-registry for their respective chain ids.
Create a L1BlockInfoTx Variant for a new Hardfork

This example walks through creating a variant of the `L1BlockInfoTx` for a new hardfork.

note

This example is very verbose. To grok the required changes, view this PR diff, which introduces the Isthmus hardfork changes to the `L1BlockInfoTx` with a new variant.

Required Genesis Updates

The first updates that need to be made are to the `op-alloy-genesis` types, namely the `RollupConfig` and `HardForkConfiguration`. First, add a timestamp field to the `RollupConfig`. Let's use the hardfork name "Glacier" as an example.
```rust
pub struct RollupConfig {
    // ...
    /// `glacier_time` sets the activation time for the Glacier network upgrade.
    /// Active if `glacier_time` != None && L2 block timestamp >= Some(glacier_time), inactive
    /// otherwise.
    #[cfg_attr(feature = "serde", serde(skip_serializing_if = "Option::is_none"))]
    pub glacier_time: Option<u64>,
    // ...
}
```
Add an accessor on the `RollupConfig` to provide a way of checking whether the "Glacier" hardfork is active for a given timestamp. Also update the prior hardfork accessor to call this method (let's use "Isthmus" as the prior hardfork).
```rust
/// Returns true if Isthmus is active at the given timestamp.
pub fn is_isthmus_active(&self, timestamp: u64) -> bool {
    self.isthmus_time.map_or(false, |t| timestamp >= t) || self.is_glacier_active(timestamp)
}

/// Returns true if Glacier is active at the given timestamp.
pub fn is_glacier_active(&self, timestamp: u64) -> bool {
    self.glacier_time.map_or(false, |t| timestamp >= t)
}
```
Lastly, add the "Glacier" timestamp to the `HardForkConfiguration`.

```rust
pub struct HardForkConfiguration {
    // ...
    /// Glacier hardfork activation time.
    pub glacier_time: Option<u64>,
}
```
Protocol Changes

Introduce a new `glacier.rs` module containing an `L1BlockInfoGlacier` type in the `op_alloy_genesis::info` module. This should include a few methods used by the `L1BlockInfoTx` later.

```rust
pub fn encode_calldata(&self) -> Bytes { ... }

pub fn decode_calldata(r: &[u8]) -> Result<Self, DecodeError> { ... }
```
Use other hardfork variants like the `L1BlockInfoEcotone` for reference. Next, add the new "Glacier" variant to the `L1BlockInfoTx`.

```rust
pub enum L1BlockInfoTx {
    // ...
    Glacier(L1BlockInfoGlacier),
}
```
Update `L1BlockInfoTx::try_new` to construct the `L1BlockInfoGlacier` if the hardfork is active, using `RollupConfig::is_glacier_active`. Also, be sure to update `L1BlockInfoTx::decode_calldata` with the new variant decoding, as well as the other `L1BlockInfoTx` methods.

Once some tests are added surrounding the decoding and encoding of the new `L1BlockInfoGlacier` variant, all required changes are complete!

Now, this example PR diff introducing the Isthmus changes should make sense, since it effectively implements the above changes for the Isthmus hardfork (replacing "Glacier" with "Isthmus"). Notice, Isthmus introduces some new "operator fee" fields as part of its `L1BlockInfoIsthmus` type. Some new error variants on the `BlockInfoError` are needed as well.
Transform Frames into a Batch

note

This example performs the reverse transformation of the batch-to-frames example.

caution

Steps and handling of types with respect to chain tip, ordering of frames, re-orgs, and more are not covered by this example. This example solely demonstrates the most trivial way to transform individual `Frame`s into a `Batch` type.

This example walks through transforming `Frame`s into the `Batch` types.

Walkthrough

The high-level transformation is the following.

raw bytes[] -> frames[] -> channel -> decompressed channel data -> Batch

Given the raw, batch-submitted frame data as bytes (read in with the `hex!` macro), the first step is to decode the frame data into `Frame`s using `Frame::decode`. Once all the `Frame`s are decoded, the `Channel` can be constructed using the `ChannelId` of the first frame.
note

`Frame`s may also be added to a `Channel` once decoded with the `Channel::add_frame` method.

When the `Channel` is `Channel::is_ready()`, the frame data can be taken from the `Channel` using `Channel::frame_data()`. This data is represented as `Bytes` and needs to be decompressed using the respective compression algorithm, depending on which hardforks are activated (using the `RollupConfig`). For the sake of this example, `brotli` is used (which was activated in the Fjord hardfork). The decompressed brotli bytes can then be passed right into `Batch::decode` to wind up with the example's desired `Batch`.
Running this example:

- Clone the examples repository: `git clone git@github.com:alloy-rs/op-alloy.git`
- Run: `cargo run --example frames_to_batch`
```rust
//! This example decodes raw [Frame]s and reads them into a [Channel] and into a [SingleBatch].

use alloy_consensus::{SignableTransaction, TxEip1559};
use alloy_eips::eip2718::{Decodable2718, Encodable2718};
use alloy_primitives::{hex, Address, BlockHash, Bytes, PrimitiveSignature, U256};
use op_alloy_consensus::OpTxEnvelope;
use op_alloy_genesis::RollupConfig;
use op_alloy_protocol::{decompress_brotli, Batch, BlockInfo, Channel, Frame, SingleBatch};

fn main() {
    // Raw frame data taken from the `encode_channel` example.
    let first_frame = hex!("60d54f49b71978b1b09288af847b11d200000000004d1b1301f82f0f6c3734f4821cd090ef3979d71a98e7e483b1dccdd525024c0ef16f425c7b4976a7acc0c94a0514b72c096d4dcc52f0b22dae193c70c86d0790a304a08152c8250031d091063ea000");
    let second_frame = hex!("60d54f49b71978b1b09288af847b11d2000100000046b00d00005082edde7ccf05bded2004462b5e80e1c42cd08e307f5baac723b22864cc6cd01ddde84efc7c018d7ada56c2fa8e3c5bedd494c3a7a884439d5771afcecaf196cb3801");

    // Decode the raw frames.
    let decoded_first = Frame::decode(&first_frame).expect("decodes frame").1;
    let decoded_second = Frame::decode(&second_frame).expect("decodes frame").1;

    // Create a channel.
    let id = decoded_first.id;
    let open_block = BlockInfo::default();
    let mut channel = Channel::new(id, open_block);

    // Add the frames to the channel.
    let l1_inclusion_block = BlockInfo::default();
    channel.add_frame(decoded_first, l1_inclusion_block).expect("adds frame");
    channel.add_frame(decoded_second, l1_inclusion_block).expect("adds frame");

    // Get the frame data from the channel.
    let frame_data = channel.frame_data().expect("some frame data");
    println!("Frame data: {}", hex::encode(&frame_data));

    // Decompress the frame data with brotli.
    let config = RollupConfig::default();
    let max = config.max_rlp_bytes_per_channel(open_block.timestamp) as usize;
    let decompressed = decompress_brotli(&frame_data, max).expect("decompresses brotli");
    println!("Decompressed frame data: {}", hex::encode(&decompressed));

    // Decode the single batch from the decompressed data.
    let batch = Batch::decode(&mut decompressed.as_slice(), &config).expect("batch decodes");
    assert_eq!(
        batch,
        Batch::Single(SingleBatch {
            parent_hash: BlockHash::ZERO,
            epoch_num: 1,
            epoch_hash: BlockHash::ZERO,
            timestamp: 1,
            transactions: example_transactions(),
        })
    );

    println!("Successfully decoded frames into a Batch");
}

fn example_transactions() -> Vec<Bytes> {
    let mut transactions = Vec::new();

    // First transaction in the batch.
    let tx = TxEip1559 {
        chain_id: 10u64,
        nonce: 2,
        max_fee_per_gas: 3,
        max_priority_fee_per_gas: 4,
        gas_limit: 5,
        to: Address::left_padding_from(&[6]).into(),
        value: U256::from(7_u64),
        input: vec![8].into(),
        access_list: Default::default(),
    };
    let sig = PrimitiveSignature::test_signature();
    let tx_signed = tx.into_signed(sig);
    let envelope: OpTxEnvelope = tx_signed.into();
    let encoded = envelope.encoded_2718();
    transactions.push(encoded.clone().into());
    let mut slice = encoded.as_slice();
    let decoded = OpTxEnvelope::decode_2718(&mut slice).unwrap();
    assert!(matches!(decoded, OpTxEnvelope::Eip1559(_)));

    // Second transaction in the batch.
    let tx = TxEip1559 {
        chain_id: 10u64,
        nonce: 2,
        max_fee_per_gas: 3,
        max_priority_fee_per_gas: 4,
        gas_limit: 5,
        to: Address::left_padding_from(&[7]).into(),
        value: U256::from(7_u64),
        input: vec![8].into(),
        access_list: Default::default(),
    };
    let sig = PrimitiveSignature::test_signature();
    let tx_signed = tx.into_signed(sig);
    let envelope: OpTxEnvelope = tx_signed.into();
    let encoded = envelope.encoded_2718();
    transactions.push(encoded.clone().into());
    let mut slice = encoded.as_slice();
    let decoded = OpTxEnvelope::decode_2718(&mut slice).unwrap();
    assert!(matches!(decoded, OpTxEnvelope::Eip1559(_)));

    transactions
}
```
Transform a Batch into Frames

note

This example performs the reverse transformation of the frames-to-batch example.

caution

Steps and handling of types with respect to chain tip, ordering of frames, re-orgs, and more are not covered by this example. This example solely demonstrates the most trivial way to transform an individual `Batch` into `Frame`s.

This example walks through transforming a `Batch` into `Frame`s. Effectively, this example demonstrates the encoding process from an L2 batch into the serialized bytes that are posted to the data availability layer.

Walkthrough

The high-level transformation is the following.

Batch -> decompressed batch data -> ChannelOut -> frames[] -> bytes[]

Given the `Batch`, the first step is to encode the batch using the `Batch::encode()` method. The output bytes need to then be compressed prior to adding them to the `ChannelOut`.

note

The `ChannelOut` type also provides a method for adding the `Batch` itself, handling encoding and compression, but this method is not available yet.
Once compressed using the `compress_brotli` method, the compressed bytes can be added to a newly constructed `ChannelOut`. As long as the `ChannelOut` has `ready_bytes()`, `Frame`s can be constructed using the `ChannelOut::output_frame()` method, specifying the maximum frame size.

Once `Frame`s are returned from the `ChannelOut`, they can be encoded with `Frame::encode` into raw, serialized data ready to be batch-submitted to the data availability layer.
Running this example:

- Clone the examples repository: `git clone git@github.com:alloy-rs/op-alloy.git`
- Run: `cargo run --example batch_to_frames`
```rust
//! An example encoding and decoding a [SingleBatch].
//!
//! This example demonstrates EIP-2718 encoding a [SingleBatch]
//! through a [ChannelOut] and into individual [Frame]s.
//!
//! Notice, the raw batch is first _encoded_.
//! Once encoded, it is compressed into raw data that the channel is constructed with.
//!
//! The [ChannelOut] then outputs frames individually using the maximum frame size,
//! in this case hardcoded to 100, to construct the frames.
//!
//! Finally, once [Frame]s are built from the [ChannelOut], they are encoded and ready
//! to be batch-submitted to the data availability layer.

#[cfg(feature = "std")]
fn main() {
    use alloy_primitives::BlockHash;
    use op_alloy_genesis::RollupConfig;
    use op_alloy_protocol::{Batch, ChannelId, ChannelOut, SingleBatch};

    // Use the example transactions.
    let transactions = example_transactions();

    // Construct a basic `SingleBatch`.
    let parent_hash = BlockHash::ZERO;
    let epoch_num = 1;
    let epoch_hash = BlockHash::ZERO;
    let timestamp = 1;
    let single_batch = SingleBatch { parent_hash, epoch_num, epoch_hash, timestamp, transactions };
    let batch = Batch::Single(single_batch);

    // Create a new channel.
    let id = ChannelId::default();
    let config = RollupConfig::default();
    let mut channel_out = ChannelOut::new(id, &config);

    // Add the compressed batch to the `ChannelOut`.
    channel_out.add_batch(batch).unwrap();

    // Output frames.
    while channel_out.ready_bytes() > 0 {
        let frame = channel_out.output_frame(100).expect("outputs frame");
        println!("Frame: {}", alloy_primitives::hex::encode(frame.encode()));
        if channel_out.ready_bytes() <= 100 {
            channel_out.close();
        }
    }
    assert!(channel_out.closed);

    println!("Successfully encoded Batch to frames");
}

#[cfg(feature = "std")]
fn example_transactions() -> Vec<alloy_primitives::Bytes> {
    use alloy_consensus::{SignableTransaction, TxEip1559};
    use alloy_eips::eip2718::{Decodable2718, Encodable2718};
    use alloy_primitives::{Address, PrimitiveSignature, U256};
    use op_alloy_consensus::OpTxEnvelope;

    let mut transactions = Vec::new();

    // First transaction in the batch.
    let tx = TxEip1559 {
        chain_id: 10u64,
        nonce: 2,
        max_fee_per_gas: 3,
        max_priority_fee_per_gas: 4,
        gas_limit: 5,
        to: Address::left_padding_from(&[6]).into(),
        value: U256::from(7_u64),
        input: vec![8].into(),
        access_list: Default::default(),
    };
    let sig = PrimitiveSignature::test_signature();
    let tx_signed = tx.into_signed(sig);
    let envelope: OpTxEnvelope = tx_signed.into();
    let encoded = envelope.encoded_2718();
    transactions.push(encoded.clone().into());
    let mut slice = encoded.as_slice();
    let decoded = OpTxEnvelope::decode_2718(&mut slice).unwrap();
    assert!(matches!(decoded, OpTxEnvelope::Eip1559(_)));

    // Second transaction in the batch.
    let tx = TxEip1559 {
        chain_id: 10u64,
        nonce: 2,
        max_fee_per_gas: 3,
        max_priority_fee_per_gas: 4,
        gas_limit: 5,
        to: Address::left_padding_from(&[7]).into(),
        value: U256::from(7_u64),
        input: vec![8].into(),
        access_list: Default::default(),
    };
    let sig = PrimitiveSignature::test_signature();
    let tx_signed = tx.into_signed(sig);
    let envelope: OpTxEnvelope = tx_signed.into();
    let encoded = envelope.encoded_2718();
    transactions.push(encoded.clone().into());
    let mut slice = encoded.as_slice();
    let decoded = OpTxEnvelope::decode_2718(&mut slice).unwrap();
    assert!(matches!(decoded, OpTxEnvelope::Eip1559(_)));

    transactions
}

#[cfg(not(feature = "std"))]
fn main() { /* not implemented for no_std */ }
```
Contributing
Thank you for wanting to contribute! Before contributing to this repository, please read through this document and discuss the change you wish to make via issue.
Dependencies
Before working with this repository locally, you'll need to install a few dependencies:
- `just` for our command-runner scripts.
- The Rust toolchain.
Optional
Pull Request Process
- Create an issue for any significant changes. Trivial changes may skip this step.
- Once the change is implemented, ensure that all checks are passing before creating a PR. The full CI pipeline can be run locally via the `Justfile`s in the repository.
- Be sure to update any documentation that has gone stale as a result of the change, in the README files, the book, and in rustdoc comments.
- Once your PR is approved by a maintainer, you may merge your pull request yourself if you have permissions to do so. Otherwise, the maintainer who approves your pull request will add it to the merge queue.
Working with OP Stack Specs
The OP Stack is a set of standardized open-source specifications that powers Optimism, developed by the Optimism Collective.
`op-alloy` is a Rust implementation of core OP Stack types, transports, middleware, and more. Not all types and implementation details in `op-alloy` are present in the OP Stack specs, and conversely, not all specifications are implemented by `op-alloy`. That said, `op-alloy` is based entirely on the specs, and new functionality or core modifications to `op-alloy` must be reflected in them.
As such, the first step for introducing changes to the OP Stack is to open a PR in the specs repository. These changes should target a protocol upgrade so that all implementations of the OP Stack can synchronize and implement them.
Once changes are merged into the OP Stack specs repo, they may be added to `op-alloy` in a backwards-compatible way such that pre-upgrade functionality persists. The primary way to enable backwards compatibility is timestamp-based activation of protocol upgrades.
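As a minimal sketch of how timestamp-based activation preserves pre-upgrade behavior (the `UpgradeSchedule` type and `canyon_time` field below are illustrative stand-ins, not op-alloy's actual API), a config can record an optional activation timestamp per upgrade and gate new behavior on the block timestamp:

```rust
/// Hypothetical config fragment: each upgrade records an optional
/// activation timestamp; `None` means the upgrade is not scheduled.
struct UpgradeSchedule {
    canyon_time: Option<u64>,
}

impl UpgradeSchedule {
    /// The upgrade is active at `timestamp` iff an activation time is set
    /// and the block timestamp has reached it. Blocks before the activation
    /// time keep pre-upgrade behavior, so old chain history stays valid.
    fn is_canyon_active(&self, timestamp: u64) -> bool {
        self.canyon_time.map_or(false, |t| timestamp >= t)
    }
}

fn main() {
    let schedule = UpgradeSchedule { canyon_time: Some(1_700_000_000) };
    assert!(!schedule.is_canyon_active(1_699_999_999)); // pre-upgrade block
    assert!(schedule.is_canyon_active(1_700_000_000)); // activation block

    // An unscheduled upgrade is never active.
    let unscheduled = UpgradeSchedule { canyon_time: None };
    assert!(!unscheduled.is_canyon_active(u64::MAX));
}
```

Because activation is a pure function of the block timestamp and the config, every node applying the same config derives the same pre- and post-upgrade behavior for every historical block.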
Licensing
`op-alloy` is dual-licensed under the Apache 2.0 and MIT licenses. The SNAPPY license is included for the use of snap encoding in `op-alloy-rpc-types-engine`.
Glossary
This document contains definitions for terms used throughout the op-alloy book.