A Deep Dive into Shared Sequencers: Working Principles, Aggregation Theory, and Vertical Integration

This article analyzes the key properties of shared sequencers: strong censorship resistance, ease of deployment, interoperability, fast finality, and liveness. Aggregation theory offers a new lens on them, and vertical integration points toward their further development.

Original title: "The Shared Sequencer"

Written by: MAVEN11

Translation: Kxp, BlockBeats

Imagine if a Rollup could, "out of the box", achieve a high degree of censorship resistance, ease of deployment, interoperability, fast finality, liveness, and democratization of MEV. That might seem like a grand goal, but with the advent of the shared sequencer, it may soon become a reality. However, not all Rollups are the same, so we have to consider how rewards and MEV are distributed across a shared sequencer network. In this article, we explore how shared sequencer networks are implemented and the properties they can achieve.

Shared sequencer networks were first introduced by Alex Beckett, later by Evan Forbes of Celestia and by the Espresso (and Radius) teams, and a newer article by Jon Charbonneau covers the topic in more depth. Josh, Jordan, and their Astria team are building the first production-ready shared sequencer network. Astria's shared sequencer network is a modular blockchain that aggregates and orders Rollup transactions without executing them.

In Astria's setup, the sequencer sends ordered blocks both to the DA layer and to the Rollup nodes. Rollups get soft finality guarantees from the sequencer and hard finality guarantees from the DA layer (once blocks are finalized there), after which they execute the valid transactions.

The shared sequencer network is, as its name suggests, a set of sequencers that can serve many different Rollups at once. This comes with various trade-offs and properties, which we detail later. First, we must describe the most important properties of a sequencer (or set of sequencers). In a Rollup, the main requirement on a sequencer or sequencer set is censorship resistance and liveness (some of which, along with security, is inherited from the base layer). This means that a valid transaction submitted to the sequencer must be included in the chain within a finite amount of time (a timeout parameter). The shared sequencer set only needs to ensure that transactions are included in blocks (i.e. crLists).
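
As a toy illustration (hypothetical names, not any production sequencer's API), here is roughly how a watcher could track that every submitted transaction is included within the timeout parameter:

```python
import time

TIMEOUT_SECONDS = 60  # hypothetical timeout parameter

class InclusionWatcher:
    """Flags transactions that were submitted but not included in a block
    within TIMEOUT_SECONDS, i.e. evidence of censorship."""

    def __init__(self) -> None:
        self.pending: dict[str, float] = {}  # tx_hash -> submission time

    def on_submit(self, tx_hash: str) -> None:
        self.pending[tx_hash] = time.time()

    def on_included(self, tx_hash: str) -> None:
        self.pending.pop(tx_hash, None)

    def censored(self) -> list[str]:
        now = time.time()
        return [tx for tx, t in self.pending.items() if now - t > TIMEOUT_SECONDS]
```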

Satisfying censorship resistance and liveness at the same time is quite difficult, as we outlined in Modular MEV Part II. In a consensus algorithm such as Tendermint, you can ensure recovery after an attack, but during the attack you lose liveness. Meanwhile, requiring all other sequencers to sign a block, rather than electing a single leader, is probably not optimal either: while it improves censorship resistance, it comes at the cost of "centralization" and of MEV extraction accruing to a single leader. Another available ordering mechanism is comparable to Duality's Multiplicity, their gadget that lets non-leaders (or non-sequencers) include additional transactions into blocks. Overall, censorship resistance together with liveness under attack is difficult to achieve in most consensus protocols.

Another consensus algorithm that could be used is HotStuff2, which preserves responsiveness even in the event of an attack.

What it allows is avoiding the wait for a maximum network delay (a timeout) when electing a new leader in cases of censorship or a leader failing to sign. The reason it is an interesting consensus algorithm for a decentralized sequencer set is that it solves the responsiveness problem in consensus without adding an extra phase. If the leader knows the highest lock (the highest quorum agreeing on a particular output) and can convince the honest parties of it, the problem is solved. If not, an honest leader past a certain point can take over the "push", assisting the next leader. For example, a HotStuff node does not need to wait for an acknowledgment of a view-change message before notifying the new leader; it can switch directly to the new view and notify the new leader.

The difference from Tendermint is this: although both are two-phase (HotStuff1 had three phases, HotStuff2 has two), Tendermint has linear communication but is not responsive, while HotStuff2 is responsive. If there is a chain of honest leaders, the protocol is responsive, since every step except the first leader's proposal depends on receiving a quorum of messages from the previous step. In a shared sequencer setting, this lets the protocol achieve better liveness without falling back to the base layer, while keeping that fallback possible.

Constructing the Shared Sequencer Set

A set of sequencers is allowed to submit transactions to the settlement layer (the layer where the Rollup lives). You can join this sequencer set provided certain requirements are met and the target number of block producers has not been reached; bounding the set optimizes latency, throughput, and more. These requirements are similar to those for becoming a validator on a blockchain: for example, you have to meet certain hardware requirements and post an initial deposit, especially if you want to offer economically backed finality.
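
As a minimal sketch of such an admission check (all thresholds hypothetical):

```python
MAX_SEQUENCERS = 100   # hypothetical cap on the sequencer set
MIN_DEPOSIT = 32_000   # hypothetical required deposit, in arbitrary units

def can_join(active_sequencers: int, deposit: int, meets_hardware: bool) -> bool:
    """Admit a new sequencer only if the set is not full, the deposit is
    posted, and the hardware requirements are met."""
    return (
        active_sequencers < MAX_SEQUENCERS
        and deposit >= MIN_DEPOSIT
        and meets_hardware
    )
```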

A shared sequencer set (or any decentralized sequencer set) consists of several components that work together to ensure transactions are processed correctly, including:

  1. A JSON-RPC endpoint, per Rollup, for submitting transactions (for non-full-node operators) into a node's mempool, from which blocks are built and ordered. Within the mempool, some mechanism is needed to determine queueing and transaction selection, to ensure blocks are built efficiently.

  2. A block/batch construction algorithm, responsible for processing the queued transactions and turning them into blocks or batches. This step may also include compression to reduce the resulting block size (data compression). As mentioned, this should be separate from the proposer, essentially PBS. Data can be compressed in a variety of ways (two of these techniques are sketched in code after this list), for example:

  • Drop RLP encoding: this saves space, although a decentralized sequencer set may still need some standard for data transfer between nodes.
  • Omit the nonce (the per-account transaction counter): it can be recomputed at execution time by looking at the previous state of the chain.
  • Simplify gas prices: set gas prices to a fixed range of values.
  • Simplify gas: the gas amount can likewise be drawn from a fixed set of values.
  • Replace addresses with indices: the Rollup can store an index into a mapping of addresses instead of the full address.
  • Express values in scientific notation: value fields in Ethereum transactions are denominated in wei, so the numbers are large. You cannot omit the value field or reduce it to a fixed set of values, but you can write it in scientific notation to optimize storage.
  • Omit the data field: data fields are not required for simple transfers, only for more complex transactions.
  • Replace individual signatures with a BLS aggregate signature: signatures are the largest component of an Ethereum transaction. Instead of storing each signature, you can store one BLS aggregate signature for a batch of transactions and check it against the set of messages and senders to ensure validity.
  • Use the From field as an index: like the To field, the From field can be an index into a mapping.

The interesting thing about this "modular" design is that you can make adjustments and trade-offs as needed to make it work for your Rollup.

  3. A peer-to-peer layer that lets sequencers receive transactions from other sequencers and propagate blocks once built. This step is critical to ensure the shared sequencer operates efficiently across multiple Rollups.

  4. A leader rotation algorithm for the shared sequencer set (no consensus is needed in the case of a single leader election). You can opt for just a leader rotation algorithm, or take the multiple-concurrent-block-producers route proposed by Duality.

  5. A consensus algorithm, such as the aforementioned Tendermint or HotStuff2, to ensure Rollup nodes agree on the ordering proposed by the leader.

  6. An RPC client for submitting blocks/batches to the underlying DA + consensus layer, so that blocks/batches are safely added to the DA layer, ensuring "final" finality and making all transaction data available on-chain.

  7. Separation of the builder and block-producer roles, to ensure fairness and consistency and avoid MEV theft. Removing execution from sequencing is likewise important to optimize efficiency, reduce PGAs (priority gas auctions), and increase censorship resistance.
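
To make two of the compression techniques above concrete, here is a small sketch (illustrative only; real Rollups implement these in their batch encoders) of address-to-index mapping and scientific-notation value encoding:

```python
# Sketch of two compression tricks: address -> index mapping, and writing
# wei values as (mantissa, exponent) instead of full integers.

address_book: dict[str, int] = {}  # full address -> small index

def address_to_index(addr: str) -> int:
    """Store each address once; later transactions reference a small index."""
    if addr not in address_book:
        address_book[addr] = len(address_book)
    return address_book[addr]

def encode_value(wei: int) -> tuple[int, int]:
    """Encode a wei amount as mantissa * 10**exponent.
    1_500_000_000_000_000_000 wei (1.5 ETH) -> (15, 17)."""
    exponent = 0
    while wei != 0 and wei % 10 == 0:
        wei //= 10
        exponent += 1
    return wei, exponent

def decode_value(mantissa: int, exponent: int) -> int:
    return mantissa * 10 ** exponent

value = 1_500_000_000_000_000_000
assert decode_value(*encode_value(value)) == value
```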

Rollup nodes receive ordered blocks from the sequencer for soft commitments, and ordered blocks from the DA layer for hard commitments.

Calldata is first published to the base layer, where consensus is run over it, securing both user and Rollup transactions. The Rollup nodes then execute the transactions and apply the state transition function to extend the canonical Rollup chain. A shared sequencer network provides Rollups with liveness and censorship resistance. Rollups retain their sovereignty, because all transaction data is stored on the base layer, which allows them to fork away from the shared sequencer at any time. The state root of the Rollup's state transition function (STF) is computed from the transaction roots (the inputs) that the shared sequencer sends to the DA layer. On Celestia, these roots are produced as data is added to the chain and consensus is reached. Since the transaction root (and all the data behind it) is available, Celestia can give light clients (Rollup nodes running on Celestia) a small proof of inclusion.
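
As a simplified sketch of what such a proof of inclusion looks like (Celestia actually uses namespaced Merkle trees, which this does not model):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, str]], root: bytes) -> bool:
    """Walk a Merkle branch from `leaf` up to `root`. `proof` is a list of
    (sibling_hash, side) pairs; side says whether the sibling is left/right."""
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root
```

A light client holding only block headers (which contain the roots) can run this check without downloading whole blocks.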

To provide the user experience users expect, Rollup nodes receive the ordered blocks directly (the same blocks that are sent to the DA layer). This gives the Rollup soft finality guarantees: guarantees that the blocks will eventually be ordered on the DA layer, at which point the Rollup nodes execute the transactions and produce new state roots.

Block Creation and Slots

To determine when blocks are created, the sequencers need slots. A sequencer should submit batches at fixed intervals (every X seconds), where X is the slot time. This ensures transactions are processed promptly and efficiently, because otherwise the leader for a particular slot would time out and lose its signing reward (and execution reward). For example, Celestia's block time (per the GitHub specs) is about 15 seconds. Since this is known, we can make some assumptions about how many sequencer "slots/blocks" can occur between the shared sequencer set publishing to the DA layer and Rollup nodes seeing the data in finalized blocks. Here we can again point to optimized Tendermint or HotStuff2.

Within the same slot we can submit multiple batches of transactions, provided the design includes a mechanism for Rollup full nodes to efficiently process them into a single block (within that slot and its timeout parameters). This further optimizes block creation and ensures transactions are processed quickly.

Slots can also be used to facilitate the election of sequencer leaders. For example, you can randomly select a leader for each slot from the staking pool, weighted by stake (a minimal sketch of this follows below). Doing this in a way that preserves confidentiality, using something like secret leader election to minimize censorship, is the best option; even a distributed validator technology setup, with solutions like Obol/SSV, is possible.

Latency and slot times have a big impact on block submission and building, so we need to look at how they affect the system. Bloxroute has some great research and data points, on Ethereum in particular. In MEV-Boost, participating block producers (validators, or sequencers in the Rollup case) request GetHeader from a relay. This gives them the block bid value, i.e. the value of a particular block (ideally the highest-value block received so far). For each slot, validators typically request GetHeader about 400 ms into the slot, and given the large number of validators, relays often have to serve numerous requests. The same can happen with large shared sequencer sets, which means the infrastructure needs to be in place to facilitate this process.
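
Before moving on to relays, here is a minimal sketch of the stake-weighted slot-leader election mentioned above (a real design would use a VRF and secret leader election rather than a public, hash-derived seed):

```python
import hashlib
import random

def elect_leader(slot: int, stakes: dict[str, int], randomness: bytes) -> str:
    """Pick one leader per slot, weighted by stake. Deterministic in
    (slot, randomness), so every sequencer computes the same leader."""
    seed = hashlib.sha256(randomness + slot.to_bytes(8, "big")).digest()
    rng = random.Random(seed)
    sequencers = sorted(stakes)              # canonical ordering
    weights = [stakes[s] for s in sequencers]
    return rng.choices(sequencers, weights=weights, k=1)[0]

stakes = {"seq_a": 100, "seq_b": 300, "seq_c": 600}
print(elect_leader(42, stakes, b"epoch-randomness"))  # e.g. "seq_c"
```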

Relays also help facilitate the separation of builders and block producers, while verifying that builders built correct blocks. They check that fees are paid correctly and act as DoS protection. They are also, in essence, the custodians of blocks, and they handle validator registrations. This is especially important in an unbounded sequencer architecture, since you need to keep track of who participated and who didn't (for example, via the synchronization layer discussed earlier).

As for block submissions by builders, they typically occur around 200 ms into the slot, mostly starting just before or after that mark, though, as with GetHeader, there is considerable variation. If a builder sends its block to multiple relays, it incurs considerable delay. Bloxroute also looked at what happens when blocks are sent to multiple relays: as you might expect, propagating a block to more relays takes longer. On average, it took 99 ms for a block to reach the second relay, 122 ms for the third, and 342 ms for the fourth.

If we have learned anything over the past few months, it is that RPC matters enormously to blockchains. Without proper infrastructure it is a huge burden, and having good RPC options is critical. Here, RPC matters because retail users send their transactions to an RPC endpoint (and the public mempool). Bloxroute ran a small test of 20 transactions sent to various RPCs and measured the time until each transaction was included in a block.

Source: Bloxroute Labs

Interestingly, some RPCs do not get transactions included until several blocks later, depending on which builder wins the next block. If the RPC sends the transaction to more builders, the probability of fast inclusion is higher. That said, transaction originators can leverage their unique position in the order flow to target specific builders, or even build their own blocks.

Ethereum's relay performance statistics are also interesting. They help us better understand how PBS works in a world of many validators/builders/relays, which is what we hope to achieve for Rollups. Metrika has some great statistics on this, and all the data points below are courtesy of them.

Note that a missed bid occurs when a relay is expected to bid but does not. The expectation comes from validators being registered with a particular relay for a given slot. This is not a relay fault per se, and it is not treated as one at the protocol level.

Source: app.metrika.co

If a fault does occur (such as a relay serving an invalid block) and the relay is responsible, it is counted as a fault. These are usually infrequent, and are mostly registration-preference faults (i.e. gas limits or fees that don't match a particular validator's registration). Even rarer are consensus-layer faults, where blocks are inconsistent with Ethereum's consensus rules, e.g. an incorrect slot, or a parent hash not matching the previous block.

In terms of latency (the time it takes a validator to receive a block header built by a builder), the data is very consistent, although there are some outliers, such as when the requested relay is in a different geographic location than the chosen validator.

Source: app.metrika.co

As for builders, there are about 84 in total on MEV-Boost, with the top three building about 65% of all built blocks. This may be somewhat misleading, since these are also the longest-running builders, but the results are similar if the time frame is shortened. The number of actually active builders is much lower: 35 in the past 30 days and 24 in the past week. Competition is fierce, and usually the strongest builder wins. Exclusive order flow may already exist, which would only exacerbate the situation. We expect the builder distribution to remain relatively centralized (since this is a race for optimal order flow and hardware optimization) unless major changes are made to the setup. While not a fundamental problem, it is still a centralizing force in the stack, and we would love to hear ideas on how to challenge the status quo here. If you're interested in digging deeper into this (serious) problem, we highly recommend reading Quintus' articles on order flow, auctions, and centralization.

As for the builder role in a future modular stack, we're fairly sure that (at least in Cosmos SDK setups) we'll see builder modules along the lines of Skip/Mekatek. Another solution is a SUAVE-type setup: a dedicated global builder chain providing block building and bid-preference services to any number of chains to enable PBS. We'll explore that solution in more depth later and answer some questions not addressed here.

Regarding relays, we highly recommend the article by Ankit Chiplunkar of Frontier Research and Mike Neuder of the Ethereum Foundation called Optimistic relays and where to find them. It details how relays in MEV-Boost work, their current trade-offs and operating costs, and some changes that could increase their efficiency. Interestingly, running a relay for MEV-Boost currently costs around $100,000/year, per Flashbots' estimates.

Finality

Before we discuss finality in modular blockchains (as they look today), recall our earlier "Modular MEV" article. Note that what follows is neither an "official" nor a comprehensive view of finality, but we believe it most accurately captures the nuances of Rollup finality for ease of understanding.

Pending_On_L2: the Rollup sequencer has given a soft commitment that a user's transaction will eventually be committed and finalized on the base layer the Rollup derives its security from.

Finality_On_L2: The sequencer has committed to the Rollup's state transition function, and the block has been added to the Rollup's canonical chain.

Pending_On_L1: the transaction's inputs or outputs/state transition function have been published to L1, but the validity proof has not yet been posted, or the arbitration period has not yet ended; on Ethereum this takes two consecutive epochs. This is the point at which most Optimistic Rollups say they have reached finality, although per the canonical bridge spec there is still an arbitrary 7-day challenge period at this point.

Finality_On_L1: for an Optimistic Rollup, the arbitration period has ended; or a posted validity proof has been verified and confirmed by a supermajority across two consecutive epochs.
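
For reference, the four stages can be encoded compactly (a sketch; the names mirror the labels above):

```python
from enum import IntEnum

class RollupFinality(IntEnum):
    """The four stages above, ordered by increasing assurance."""
    PENDING_ON_L2 = 1    # sequencer soft commitment only
    FINALITY_ON_L2 = 2   # block added to the Rollup's canonical chain
    PENDING_ON_L1 = 3    # data on L1; proof or challenge period outstanding
    FINALITY_ON_L1 = 4   # challenge period over, or validity proof confirmed

def safe_for_bridging(stage: RollupFinality) -> bool:
    # A canonical bridge should only act on the strongest guarantee.
    return stage == RollupFinality.FINALITY_ON_L1
```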

In a sovereign Rollup with a shared sequencer, this looks slightly different; let's try to explain it with a diagram:

Can we, in theory, reach finality on L1 before L2 here? Yes: in this case the L2 is sovereign, after all. This assumes there are no fraud proofs and challenge periods, or that validity proofs are used.

So how do we achieve these levels of finality? Block finality is reached when a block is added to the canonical chain and cannot be reverted. There are some nuances here, though, depending on whether you are a full node or a light node. An ordered block is final once it is included in a DA-layer block. Blocks (with state roots) are executed by Rollup full nodes/validators, which gives them the guarantee of a valid state root derived from the ordered blocks on the base layer. For finality beyond full nodes (e.g. for light clients or cross-chain bridges), you must be convinced of the validity of that state root. Here you can use one of the methods described below: make validators accountable for correct state roots (the optimistic route) via a bond and subsequent fraud proofs, or provide a validity proof (ZK).

Different ways to achieve block finality:

  1. Probabilistic methods: Proof of Work (PoW), LMD-GHOST, Goldfish, Ouroboros (PoS), and the like.

  2. Provable finality, via a sufficient share of committee members signing blocks (e.g. Tendermint's 2/3, HotStuff2, or other PBFT-style protocols).

  3. Deriving finality from the ordering of transactions/blocks on the DA layer and its rules, i.e. its canonical chain and fork-choice rules.

We can achieve different types of finality through different mechanisms.

One type of finality is "soft finality" (i.e. pending), which can be achieved with a single leader election. In this case, each slot has exactly one or zero blocks (committed or not), and the synchronization layer can safely assume the ordering of transactions within those blocks.

Another type is "provable finality", which provides stronger (essentially final) guarantees than soft finality. To achieve provable finality, a majority of sequencers must sign a block, thereby attesting that the block is canonical (a sketch follows below). While this approach is nice, it may not be necessary if a single leader election is already in place, since that essentially guarantees block ordering. Obviously, this depends on the particular leader election algorithm: is it a 51% scheme, a 66% scheme, or a single leader (preferably a random (VRF) and secret election)? If you want to learn more about finality in Ethereum, we highly recommend this article, plus the article on unbounded sequencer sets that we will recommend later.
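
A sketch of such a provable-finality check for a bounded set, assuming a Tendermint-style >2/3 stake quorum:

```python
from fractions import Fraction

def provably_final(signers: set[str], stakes: dict[str, int],
                   quorum: Fraction = Fraction(2, 3)) -> bool:
    """A block is provably final once signers holding more than `quorum`
    of total stake have signed it. Requires knowing the full set `stakes`,
    which is why this only works for bounded sequencer sets."""
    signed = sum(stake for seq, stake in stakes.items() if seq in signers)
    total = sum(stakes.values())
    return Fraction(signed, total) > quorum

stakes = {"seq_a": 100, "seq_b": 300, "seq_c": 600}
print(provably_final({"seq_b", "seq_c"}, stakes))  # True: 900/1000 > 2/3
print(provably_final({"seq_a", "seq_b"}, stakes))  # False: 400/1000 <= 2/3
```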

Permissioned, Semi-Permissioned, or Permissionless

To prevent potential DoS attacks, economic barriers must be set on joining the sequencer set and submitting transactions to the sequencer layer. In both bounded (finite number of sequencers) and unbounded (unlimited number of sequencers) sets, economic barriers on submitting batches to the DA layer are needed to prevent the synchronization layer (which propagates blocks between sequencers) from being slowed down or DDoSed. The DA layer itself also provides some protection, because submitting data to it costs something (the da_fee). The security deposit required to join an unbounded set should cover any additional costs needed to keep the sync layer from being spammed. The deposit required to join a bounded set, on the other hand, will depend on demand (balanced from a cost/revenue perspective).

With an unbounded sequencer set, we cannot achieve provable finality at the sequencer layer (since we never know exactly how many active voters/signers there are). In a bounded sequencer set, by contrast, provable finality can be achieved by a majority of sequencers signing blocks. This does require the synchronization layer to be aware of the sequencer layer and how many sequencers are active at any given time, which adds some overhead. In a bounded set (say, up to 100), you can also tune the number of sequencers to improve "performance", although at the expense of decentralization and censorship resistance. This is also where bounded sets and economic guarantees matter for delivering "fast" provable finality.

Unbounded and bounded sequencer sets are both reflected in traditional blockchains: Ethereum's PoS (Casper + LMD-GHOST) is unbounded, while Cosmos SDK/Tendermint-based chains are bounded. An interesting question: should we expect proof-of-stake-like economics, and similar community choices, to emerge around shared sequencers? Here we have already seen a drift toward centralization in a few entities (so unboundedness doesn't matter much if a few large proof-of-stake providers/pools dominate anyway), even if they "hide" the centralization; after all, you can still run your own validator if you want. From an ideological standpoint the choice should almost always be unbounded, but remember that economics makes the two look very similar in any case. Whoever the participants are, the underlying costs stay the same, such as DA costs and hardware costs (though these can be reduced by the scale of your staking operation, experience, and efficient infrastructure).

Even in the bounded PoS world, we have seen a group of infrastructure providers become the largest and most common validators on almost every chain. On most Cosmos chains, the interdependence between validators is already very high, which certainly endangers the decentralization and censorship resistance of those chains. Still, a very different fact is that any retail participant can stake any amount with any validator they choose. Unfortunately, stake usually gets delegated to the top of the list, and life goes on. We ask again: should we expect a similar economic model in a modular world? One would hope not, but with specialization you often need the best fit, and that tends to be professional proof-of-stake providers. We will cover these economic issues in a separate section later.

However, the most important thing to remember amid all these issues is end-user verification, which remains available to anyone, anywhere (even at the pyramids of Giza), through light clients and DAS.

Source: @JosephALChami

Here are the trade-offs and advantages of bounded and unbounded sequencer sets:

Unbounded sequencer set:

  • Anyone with a sufficient bond/stake can become a sequencer = a high degree of decentralization.
  • A single leader election is not possible, since the set of sequencers is essentially unbounded.
  • Non-single leader election via VRF is possible, but the VRF parameters are hard to tune because the number of sequencers is unknown. If possible, this should also be a secret leader election, to avoid DoS attacks.
  • No leader election = wasted resources: block building is an open race where whoever submits the first valid block/batch wins and everyone else loses.
  • No provable finality at the sequencer level, only probabilistic finality (e.g. LMD-GHOST + Casper).
  • Finality is only reached after batches are written to the DA layer (bounded below by the base layer's block time, 15 seconds in Celestia's case).
  • Unbounded sets are "better" at censorship resistance than bounded sets.

Bounded sequencer set:

(A bounded, supermajority committee is also one of Ethereum's proposed routes to single-slot finality.)

  • There is a limit on the number of sequencers allowed at any given time.
  • Bounded sets are more complex than unbounded sets.
  • A single leader election can be implemented, providing strong finality guarantees at the sequencer layer.
  • The synchronization layer needs to know the sequencer set in order to determine which blocks are valid.
  • Writing the sequencer set (and changes to it) into settlement-layer blocks (e.g. as part of the fork-choice rule), which are in turn written to the DA layer, lets the synchronization layer independently determine the sequencer set. This is what Sovereign Labs' Rollup does, for example: set changes are written into a validity proof published to the DA layer.
  • Strong finality guarantees at the sequencer layer may be unnecessary if the DA layer is fast enough (however, most current non-optimized settlement-layer setups have block times of 10+ seconds).

There is considerable design space in how these sequencer sets are monitored and how members are added or removed. For example, should this happen through tokenholder governance (and then what happens when many different Tokens and Rollups use the set?)? Signaling changes off-chain through social consensus may also be possible (as with Ethereum). Remember, though, that the actual on-chain consensus is clearly defined, and penalties for violating the consensus rules already exist.

Economic Mechanisms for Shared Sequencers

The economics of a shared sequencer network allow for some interesting options. As we discussed earlier, validators in a shared sequencer network are not very different from typical L1 validators; the network they participate in is simply optimized for one task: receiving intents (as in PBS) and proposing and ordering transactions accordingly. Just like "regular" validators, they have revenue and cost components, and on both sides of the equation there is a lot of flexibility, similar to a regular L1.

Revenue comes from users, or from the Rollups they ultimately want to interact with, paying some fee for using the shared sequencer. This fee could be a percentage of extracted MEV (incoming flows can be hard to approximate), cross-chain value transfers, gas, or a flat fee per interaction. The most sensible revenue structure is probably for the Rollup to pay the shared sequencer less than the extra value it gains from using it, together with the benefits of shared security and liquidity. The downside is that the decentralization benefits of outsourcing another part of the stack are hard to quantify. However, as the shared sequencer network grows into an ecosystem of its own, its ability to extract fees may increase, in large part because of its inherent ability to aggregate, with certain economies of scale. As more Rollups and applications join the network, there is more and more cross-domain MEV to extract.

On the cost side, shared sequencer networks also have competitive options. They can easily subsidize usage of their network by covering the cost of publishing to the DA layer, or even the cost of interacting with applications on a Rollup. This resembles the Web 2.0 playbook: take an initial loss on user (or Rollup) acquisition, hoping long-term revenue outweighs the subsidies. Another, more crypto-native method is to let Rollups pay their DA fees in their native Token. Here the shared sequencer layer bears the pricing risk between the Token needed to publish data on the DA layer and the Rollup's native Token. In essence it is still an upfront cost for the shared sequencer, but it creates ecosystem alignment by holding the Token of the "supplier" (i.e. the Rollup). This is somewhat similar to the warehouse construction we described in our AppChain article, and different forms of DA can be used to reduce costs: different DA layers will price differently due to utilization, users' ability to verify easily via light clients, or simply different block-size choices.

Finally, the shared sequencer can batch transactions before publishing to the DA layer. For ZKRs this can cut costs by amortizing the proof over a number of transactions, and for ORUs various batch-level gas optimizations are possible, as we already see on various Rollups today. This reduces the amount of data that must be published to the DA layer, lowering the shared sequencer network's costs and increasing the profitability of the whole network, at the cost of limiting interoperability and changing block finality times (finality on L1, as discussed earlier).
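
A back-of-the-envelope sketch (all numbers hypothetical) of why batching before publishing to the DA layer lowers per-transaction costs:

```python
def per_tx_cost(fixed_submit_fee: float, bytes_per_tx: int,
                fee_per_byte: float, batch_size: int,
                compression_ratio: float = 1.0) -> float:
    """The fixed submission overhead is amortized across the batch, and
    compression shrinks the variable (per-byte) component."""
    variable = bytes_per_tx * fee_per_byte * compression_ratio
    return fixed_submit_fee / batch_size + variable

# One tx alone vs. a compressed batch of 500 txs:
print(per_tx_cost(1.0, 150, 0.001, batch_size=1))                           # 1.15
print(per_tx_cost(1.0, 150, 0.001, batch_size=500, compression_ratio=0.4))  # ~0.062
```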

Overall, the economics of a shared sequencer network allow for some interesting experimentation and bootstrapping strategies. We expect the key differentiator to be the size of the ecosystem, and thus the amount of cross-domain MEV, rather than the cost side. We also highly recommend the Espresso team's blog post on shared sequencers, which covers the economic trade-offs (and upsides) of these networks as well. To show why Rollups are motivated to use shared sequencers (beyond economics), we can look at it through the lens of aggregation theory.

Aggregation Theory and Shared Sequencers

Another way to describe the properties shared sequencers bring is through the lens of aggregation theory. Aggregation theory describes how a platform (an aggregator) systematically integrates other platforms and their users to capture significant user attention. You essentially move the game from distributing a scarce resource (e.g. block space) to controlling an abundant one (again block space, in this example). Aggregation theory in effect bundles suppliers and products (i.e. Rollups and block space) into a superior user experience serving an aggregated user base. As the aggregator's network effects grow, the relationship becomes increasingly exclusive, and user experience becomes a key differentiator between similar setups. With incentives to attract new users in place (such as good UX and better interoperability), Rollups are unlikely to move to their own networks or a different setup, since network effects keep drawing in new suppliers and new users. This creates a flywheel, not only on the supplier and user side but also in terms of aggregated censorship resistance.

Source: Aggregation Theory 2015, Ben Thompson

In the realm of shared sequencers, aggregation theory shows up as "combinations" and federations of various Rollups, all utilizing the same vertical of the stack, empowering themselves and each other while giving users a consistent experience.

Suppliers (i.e. Rollups) are in theory not exclusive to the shared sequencer set, but in practice the shared sequencer set, its Rollups, and its users benefit from a series of network-effect loops that drive increased usage of those Rollups. These benefits make it ever easier for Rollups and users to integrate into a shared stack, because they have more to lose by not participating. While the benefits are hard to see when only two Rollups share a sequencer set, they become clearer as more and more Rollups and users enter the equation. The shared sequencer set has a direct relationship with users, since it orders their transactions, even if users don't know they are interacting with it: from their perspective, they are just using whatever Rollup they have a reason to interact with (which, in effect, makes the ordering/sequencers exclusive). The real cost of these sequencers is just the hardware to run them, as long as the block space, and the Token securing it, is valuable to end users. Transaction fees are simply paid out of users' wallets, and in the future may even be abstracted away through advances such as paymasters in account abstraction (though someone still has to bear the costs of DA, ordering, and execution).

This makes even more sense considering where Josh and Jordan worked before Astria: Google. Since its inception, Google's products have embodied the ideas of aggregation theory, most notably Google Search, which modularized individual pages and articles, making them directly accessible through a global search window.

With a shared sequencer set, the cost of acquiring users who already use one of its Rollups keeps falling, because as the number of suppliers (Rollups) grows, users are increasingly likely to be drawn into the set. This means that, in most cases, an aggregator (or meta-aggregator) enjoys a compounding effect, as the aggregator's value grows with the number of suppliers (as long as the user experience stays good, of course). By contrast, on a single-sequencer network, customer acquisition is limited to that one network and its applications. If users want to use an application on a different Rollup, they have to (within current limitations) exit the network entirely. This means user stickiness and captured value are low, and at any moment, if another Rollup ecosystem is valued more highly (or offers more incentives), capital may leave.

Summary of Properties and Trade-offs

Properties

A shared sequencer set is a network that aggregates and orders transactions for multiple Rollups, all of which share the same sequencers. This pooling of resources means Rollups gain stronger economic security and censorship resistance, along with fast soft finality guarantees and conditional cross-Rollup transactions.

Right now there is a lot of noise around atomicity between Rollups that share a sequencer set, mostly around whether it is atomic by default (it is not). However, if the Rollups in question implement each other's state transition functions (STFs) as dependencies, with conditional transactions between them, then they can indeed achieve atomicity, as long as their slots/block times are aligned (which they are with a shared sequencer set). In that case, to get atomic interoperability you really only need to run a light client of chain B on chain A and vice versa (similar to how IBC works). To further harden this interoperability (beyond a light node trusting a single full node), you can add ZKPs (state proofs) to solve the "oracle problem" of ensuring state correctness. This makes it easier to check whether a conditional transaction (or the like) has landed in the canonical bridge between the two. Fraud proofs are also a possibility, but they obviously leave a challenge period (meaning a third party would have to step in to cover the cost of that risk). Also, with light clients (rather than full nodes), you will be at least one block behind, due to waiting for signed headers plus the fraud-proof window (if any).

We believe the "cross-chain bridge" problem is most likely to be solved with light clients and zero-knowledge proofs. The challenge of using a light client (rather than a smart contract) here is that hard forks (upgrades, etc.) on the Rollup node side must be coordinated to keep the bridge running (just as IBC requires both sides to enable the same state modules). If you want a deeper dive into this particular topic (and how to tackle it), we highly recommend this presentation.
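
Here is a toy sketch of the conditional-transaction flow under the article's assumptions (Rollup A embeds a light client of Rollup B); the header verification itself, i.e. checking B's sequencer signatures, is elided:

```python
import hashlib
from typing import Callable

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_verify(leaf: bytes, proof: list[tuple[bytes, str]], root: bytes) -> bool:
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root

def execute_conditional_tx(apply_on_a: Callable[[], None], remote_tx: bytes,
                           proof: list[tuple[bytes, str]],
                           accepted_b_root: bytes) -> bool:
    """Rollup A applies its side of the transaction only if `remote_tx`
    is proven included under a Rollup B root that A's light client has
    already accepted (signature/quorum checks elided here)."""
    if not merkle_verify(remote_tx, proof, accepted_b_root):
        return False   # condition not met: drop, or retry next block
    apply_on_a()       # condition met: apply the A-side state change
    return True
```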

The reason shared sequencers scale so well is that they do not execute or store any state (unlike today's centralized sequencers). The same goes for the Rollup nodes themselves (unless they want atomicity between one another, e.g. via light clients/state proofs): these nodes only execute the transactions valid for their own Rollup, plus any conditional cross-domain transactions relevant to them. If a Rollup node had to execute and store state for multiple Rollups, it would hinder scalability and reduce decentralization (and thus censorship resistance). This also reinforces the concept of proposer-builder separation (PBS), although we still need to separate builders and block producers completely: in the current setup, the sequencers are essentially both builder and block producer (even though they do not execute transactions). An ideal setup might be one where the sequencer exists only to blindly sign blocks from a highly optimized builder setup and to attest that blocks are built correctly (while providing strong economic finality and censorship resistance behind that attestation). That way, they can provide strong soft finality guarantees to Rollup nodes.

As for conditional cross-Rollup transactions, Rollup nodes (the executors) also provide intermediate state roots for them, enabling atomicity between Rollups.

Trade-offs

The aforementioned timeout parameter has some interesting effects on MEV and transaction inclusion, depending on the leader/consensus mechanism of the sequencer set. If the timeout parameter is relatively short, as described in our application-specific chain article, it becomes critical that block producers in a decentralized sequencer set publish data as fast as possible. In such a world, you could see competition among "validators" racing to be leader and bidding for DA-layer block space until it is no longer profitable.

As Evan covered in his original lazy sequencer post on the Celestia forums, waiting for transactions to be published to the base layer (Celestia, in this case) before executing them is very inefficient, since you are then bound by the base layer's block time, which is too long a wait for a good user experience. For a better UX, the shared sequencer gives Rollups soft finality commitments (as described earlier), which give users the experience they are used to from today's centralized Rollups (while preserving decentralization and strong censorship resistance). Soft commitments are essentially just commitments to the final ordering of transactions, backed by heavy economic guarantees, with fast finality once posted. Fraud proofs also cover this (as mentioned in the introduction).

Actual hard finality is reached once all transaction data has been published to the base layer (meaning the L1 actually finalizes first). What follows depends on whether the Rollup uses fraud proofs or zero-knowledge proofs for its sovereign verification, which happens on the Rollup itself. The reason for wanting this separation is to remove the huge bottleneck of state transitions from the sequencer. Instead, Rollup nodes only process the transactions valid for them (meaning we have to add conditional transactions, state proofs, or light-node verification for proper interoperability). Hard finality still depends on the base layer (but this can be about 15 seconds on Celestia, with finality under Tendermint), which gives Rollups relatively fast hard-finality guarantees.

Zero-knowledge proofs can also be used inside the network to optimize verification and transaction size (e.g. publishing only state diffs, though this adds a higher level of trust until the ZKP is posted). As mentioned earlier, such state proofs can give connected chains/Rollups easier and faster interoperability (no waiting for challenge windows).

A downside of these conditional transactions is that they are likely to be more expensive, requiring higher verification and publication costs (e.g. the cost of Tendermint block-header verification, which is subsidized on Cosmos chains), and adding some latency to the system (though still much faster than isolated Rollups). The atomicity gained through shared vertical integration can compensate for these problems.

During the bootstrapping phase of a new Rollup, using a shared sequencer set makes a lot of sense: the advantages you gain as a supplier will likely outweigh the trade-offs you might be "forced" to make at the moat level. For an already mature Rollup with lots of transactions and economic activity, however, giving up part of that moat probably doesn't make sense.

This raises the question of whether we need some economic/transaction-weighted (per Rollup) redistribution of extracted MEV to entice already-mature Rollups to join a shared set, or even to keep extremely mature Rollups from spinning up their own networks. This is all fairly theoretical, but it is undoubtedly an interesting thought experiment about how MEV in a shared sequencer world would be distributed among many Rollups with varying levels of activity. For example, if a uniquely value-generating Rollup shares some of its profits, via the sequencer set, with others that probably contribute little "value", then it has all the more reason to move into its own siloed system. Sreeram of EigenLayr has some thoughts on this as well, which we also recommend reading.

This becomes increasingly important considering the sizeable technical cost searchers face in supporting new chains, so standardizing things and giving Rollups some sovereignty over "their" MEV may be a good starting point. In MEV practice, the dominant interface (or software) may well win out, but actually monetizing that software by running critical pieces of infrastructure is very difficult (which leads to centralization). At the market level, what a shared sequencer provides is basically a common mempool for multiple suppliers, with a unified auction, which could lead to healthier competition.

One concern here: if two Rollups run sequencers in the shared set, a sequencer from the Rollup with less economic value (A) may be selected to propose a block containing a large amount of MEV and fees from Rollup B. From Rollup B's point of view, they essentially give away value that, in an isolated setup, they would have kept for themselves.

Addressing Interoperability Trade-offs

A further note on the trade-offs interoperability introduces, and another way of solving some of these problems, follows:

The purpose of a shared sequencer network is to provide a canonical ordering guarantee across multiple chains, which is obviously a big advantage here. It can be combined with a mechanism that guarantees valid state transitions across Rollups. This could be committee-based (e.g. PoS), bonded fraud proofs (the optimistic approach), or our preferred option: a ZKP backed by committee signatures. Because shared sequencers are "lazy", they only produce super-blocks ordering the transactions of multiple Rollups; the actual execution is left to each Rollup. State proofs (e.g. Lagrange, Axiom, or Herodotus) are all possible routes to proofs of finality across sovereign Rollups, and you can even add economically guaranteed finality through things like staking pools, EigenLayr, etc. The basic idea is that the shared sequencer provides an economic guarantee of ordering, and validity proofs generated on top of that ordering provide finality.

Rollups can then execute transactions synchronously with one another. For example, a network of two Rollups' nodes can conditionally know that both Rollups' histories are valid, via ZKPs and the available data (data published to an efficient DA layer). By publishing a single Rollup block prefix for both networks A and B, a Rollup node can settle both Rollups simultaneously. One thing to point out is that conditional cross-Rollup transactions consume resources in two separate systems through shared execution, so cross-Rollup atomic (or synchronous) transactions are likely to be more expensive than transactions within a single Rollup.

Succinct has also covered cross-chain "atomic" transactions between Rollups with shared sequencers (and shared fraud provers) within the Optimism Superchain ecosystem, which you can read here. And, as Polymer's Bo Du puts it: "Cross-chain atomic transactions are like acquiring locks between database shards on write".

The Future of Vertical Integration

The possible inner workings of SUAVE chains have already been explored in depth by Jon Charbonneau et al, so we won't go into too much detail. If you want a more detailed description, you can check out his article. Nonetheless, we think vertical integration does deserve a separate introduction, both to highlight how modular we can be (and why) and to introduce some of the issues and concerns associated with vertical integration.

While the current shared sequencer designs from Astria, Espresso, and Radius are very modular, the sequencers still act as builder and block producer in one (although in Astria's case they do not execute transactions). Astria has also been actively building PBS into its architecture from the start.

If PBS is not built into the protocol, there are several ways to implement it (albeit with varying degrees of decentralization): products like SUAVE, out-of-protocol models like MEV-Boost, or builder modules such as the Cosmos SDK modules built by Mekatek and Skip.

It's worth noting that none of these methods are mutually exclusive. You have the flexibility to use several different methods and let anyone express their preferences, with executors competing to fill them. Adding more options is always good (and consistent with our belief in modularity). Still, different implementations make different trade-offs. In a setup like SUAVE, for example, you can add privacy via SGX or cryptography, and add cryptoeconomic security to its guarantees, instead of relying on a fully trusted centralized PBS builder. (Thanks to Jon Charbonneau for his feedback here.)

A vertically integrated builder chain must ensure fairness without shortcuts that add latency or degrade performance. The builder chain therefore needs to be highly optimized and may require expensive, powerful hardware (a centralizing force). This means that for end-user verification we would probably need some kind of light nodes (though they would have to trust full nodes), or a state-proof-style setup so that chains and users have proof that bid preferences were filled and blocks were built correctly.

A chain like this may be very state-heavy (which we want to avoid), even though these state-heavy transactions would be prioritized via smart contracts. A preference bid either gets filled or it doesn't within a short window, since bids are usually only valid briefly, depending on the preference. This means we may be able to implement very efficient (and early) state expiry for bids, letting us prune the data and keep the chain "clean" (sketched below). The expiry needs to be long enough that bids can still be filled first, but setting it too low would make forward block-space futures nearly impossible. We don't need to update and retain expired bid contracts, as they don't need to exist forever (unlike applications); this can be handled either by providing state/storage proofs when bids are filled, or by DAS storage solutions (such as the one proposed by Joachim Neu) to make it more trust-minimized.
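
A sketch of what such aggressive bid expiry could look like (hypothetical structure, not SUAVE's actual design):

```python
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str
    amount: int
    expiry_slot: int  # bids are only valid for a short window

class BidRegistry:
    """State expiry for bids on a builder chain: filled bids leave state
    immediately, and expired, unfilled bids are pruned every slot."""

    def __init__(self) -> None:
        self.open: dict[int, Bid] = {}
        self.next_id = 0

    def place(self, bid: Bid) -> int:
        bid_id = self.next_id
        self.open[bid_id] = bid
        self.next_id += 1
        return bid_id

    def fill(self, bid_id: int, current_slot: int) -> Bid | None:
        bid = self.open.get(bid_id)
        if bid is None or current_slot > bid.expiry_slot:
            return None                 # unknown or expired: cannot fill
        return self.open.pop(bid_id)    # filled bids are removed from state

    def prune(self, current_slot: int) -> None:
        """Run each slot: drop everything past its expiry."""
        self.open = {i: b for i, b in self.open.items()
                     if b.expiry_slot >= current_slot}
```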

As mentioned earlier, the need to verify SUAVE's "honesty" may be limited to the platform's power users, because most SUAVE users and customers can extract high economic value from it. This might push us toward only having people who want to verify run full nodes, even though that excludes the vast majority of people (you could argue they don't need to verify). In our view this is the opposite of the crypto ethos, and we would rather see "trustless" verification of SUAVE via state proofs or light-client-friendly implementations.

The reason this is needed is that, when paying, you want to verify that your bid preference was filled correctly and that the block was built with the correct contents (to avoid re-bundling and other problems). This is essentially an oracle problem, and it can in fact be solved with state proofs (as with everything around SUAVE). But state proofs raise another problem across chains: how do you relay this information between chains in a way that guarantees it was neither tampered with nor withheld? This may require strong economic finality (such as what Lagrange proposes), in which case you can use EigenLayr's restaking validators to attest to the chain's finality and authenticity under very strong economic constraints. That, in turn, raises yet another problem: the bidding contract stipulates that the "oracle" (here, the restakers) have staked Tokens and posted an economic bond, but how do we slash them outside of consensus? You can write slashing criteria, but they are not part of consensus, which means social slashing would have to be exercised via smart contracts (which is almost never "fair" and can cause problems). This is currently one of the bigger open problems with slashing on EigenLayr.

So where does this leave us? Possibly, until we get on-chain "trustless" slashing beyond consensus, chains like SUAVE may need their own consensus algorithms and cryptoeconomic security to attest to bid preferences and block-building finality. This, however, adds more cryptoeconomic attack vectors, especially if the value of the Rollup blocks it builds is much higher than its own cryptoeconomic security.

Beyond that, there is still a lot of design space around SUAVE-type chains and cross-domain MEV. Some possible research directions:

  • Intent matching and intent-based systems
  • Convex optimization in multi-asset trading
  • Domain-specific languages (DSLs)
  • MEV redistribution
  • Latency wars
  • The scaling problem of a single set of actors needing to build for multiple Rollup state machines
  • Preference expression

Regarding preference expression: to interact with a smart contract in the EVM, you send a contract call (message) to a specific function at the address of the deployed code containing the execution instructions. While users provide the inputs, they may not control the outputs, because of intervening state changes.

In contrast, preference-expression systems (such as SUAVE and Anoma) only require users to sign preferences along with a bond, which is paid to builders and block producers if the searcher's preferences are met. For complex combinatorial logic, such as the transaction sequences of MEV searchers and builders, different languages and virtual machines may need to be implemented. This is a new design space that has received a lot of attention lately, especially with the Anoma architecture. We also highly recommend this short article.
