FRP-25: Quantifying Fairness Granularity For First-Come-First-Serve (FCFS) Chains

Title: Quantifying Fairness Granularity For FCFS Chains
Team: Sahar, Emmanuel Awosika
Flashbots contact: @ra
Created: 2022-09-18
Status: Completed
Github: https://github.com/flashbots/mev-research/blob/main/FRPs/active/FRP-25.md


Quantifying Fairness Granularity For First-Come-First-Serve (FCFS) Chains

Background and Problem Statement

L2 solutions define several roles in their systems: sequencer, executor, aggregator, and challenger [1]. Transaction ordering and execution can be separated in rollup protocols, although in practice they are often combined in a single node; either way, the sequencer's control over ordering and inclusion of transactions creates potential for MEV extraction and censorship [2]. We also believe a single sequencer introduces further problems: logically, it is controlled by a single authority that must be trusted not to frontrun, and technically, it is a single point of failure (as Arbitrum has already experienced). One proposed remedy is a decentralized, permissioned set of sequencer nodes, whose challenges we discuss and investigate in this research. It is worth mentioning that Arbitrum has stated its intention to move toward decentralized sequencing, meaning that transaction ordering will eventually be delegated to a distributed committee of sequencers that reaches consensus on the ordering [3].

Some rollups (e.g., Arbitrum and Optimism) use a form of first-come-first-serve (FCFS) ordering that guarantees transactions sent to the sequencer's private mempool are ordered as received, which significantly reduces MEV. However, FCFS has drawbacks of its own. It incentivizes spam, since searchers can send many transactions to guarantee early inclusion in a batch, wasting block space. It also makes network latency a critical issue: because whichever transaction enters the queue first is processed first, users with high latency or located far from the sequencer's node are disadvantaged, as their transactions arrive in the queue later than others'. These issues would persist even under decentralized sequencing with an FCFS ordering policy. Finally, to the best of our knowledge, FCFS by itself cannot prevent censorship, since a malicious sequencer can still exclude a transaction from a batch (Arbitrum and Optimism rely on force-inclusion mechanisms to bypass the sequencer in this case).

Our work focuses on the observation that FCFS ordering relies on a “fairness granularity”, i.e., the interval during which all received transactions are assumed to occur at the same time. As shown in the Themis paper [4], receive-order fairness (i.e., executing transactions strictly according to timestamps) requires the finest possible fairness granularity (i.e., a granularity of zero or close to zero).

However, we know a very fine fairness granularity has negative second-order effects, such as encouraging MEV searchers to collude and colocate with sequencers/validators to achieve optimal ordering (e.g., to execute frontrunning). We also know that the fairness granularity must not be arbitrarily large, or the likelihood that an adversary can frontrun a transaction increases.

This research project aims to quantify the “burst period” on FCFS chains as the basis for developing a better fairness granularity for fair ordering, one that achieves “batch-order fairness” (i.e., transactions in the same interval receive similar timestamps). We define the burst period as the average time between two consecutive trades.

Plan and Deliverables

The plan focuses on fair-ordering approaches in rollup chains to reduce MEV. We investigate and address several challenges of the FCFS policy, one of the most promising current ordering strategies. The main objectives are:

  1. Quantify the value of the “burst period” by evaluating the average time between two consecutive trades on AMM and order book DEXs on an FCFS chain (Arbitrum); a minimal sketch of this measurement follows the list below.
  2. As a second indicator for the granularity interval, the “average time period” of the maximum MEV load in each block can be quantified (using the same data from #1).
  3. Obtain the optimal fairness granularity for fair-ordering that incorporates elements of a frequent batch auction (i.e., FBA-FCFS). We aim to achieve this objective by applying the calculated “burst period” and the “average time period” to develop a reasonable interval for batch auctions in FCFS chains.
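
To make objective #1 concrete, the sketch below computes the burst period as the mean gap between consecutive trades, given a list of trade timestamps (e.g., swap events collected from AMM or order book DEXs on Arbitrum). The function name and the example timestamps are illustrative placeholders, not part of the FRP.

```python
# Illustrative sketch: the burst period as the mean interval between
# consecutive trades, given their timestamps in milliseconds.

def burst_period_ms(trade_timestamps_ms: list[int]) -> float:
    """Average time between consecutive trades, in milliseconds."""
    if len(trade_timestamps_ms) < 2:
        raise ValueError("need at least two trades to measure an interval")
    ts = sorted(trade_timestamps_ms)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical timestamps (ms): gaps of 450, 450, and 700 ms -> ~533.3 ms.
print(burst_period_ms([1_000, 1_450, 1_900, 2_600]))
```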

Project Deliverables:

  • A formal research paper discussing results and evaluations
  • Blog post targeted at non-technical audiences

References

  1. Validating Bridges as a Scaling Solution for Blockchains
  2. MEV and Me
  3. The Sequencer and Censorship Resistance
  4. Themis

Hi everyone, our work on FRP-25 is completed. This post provides a summary of our work on computing Arbitrum’s burst period and creating an ordering policy that mitigates the drawbacks of classic First-Come-First-Served (FCFS) ordering in rollups. The full paper, which discusses our results in more detail, will be published later and shared on the forum for feedback and review.

Contributions

We proposed an algorithm to mitigate the problems associated with MEV extraction on rollup chains that use the FCFS ordering policy, particularly adversarial attempts to manipulate sequencer orderings (via transaction spamming) and unfair ordering owing to variance in the network latency experienced by users. The proposed algorithm is based on estimating a fairness granularity through a statistical approach. We also conducted experiments to assess the proposed algorithm’s accuracy in fairly ordering transactions.

The burst period, defined as the average interval between two consecutive network transactions, has previously been identified by Sysxun et al. as a suitable indicator for the fairness granularity on FCFS chains like Arbitrum. Our main contribution is calculating Arbitrum’s burst period more formally (rather than via informal heuristics) and employing historical data to arrive at an estimate that reflects real-world network conditions.

The fairness granularity is a crucial aspect of our algorithm’s fairness property: it defines the interval during which transactions received by a sequencer are assumed to occur simultaneously. For context, the algorithm is implemented in a centralized (single-sequencer) setting with the following key details (a minimal sketch follows the list):

  • The sequencer divides a batch of transactions received within the sequencing timeline into granularity intervals (a.k.a. slots).
  • Transactions in the same interval are considered to happen at the same time.
  • A specific selection mechanism is adopted to determine the priority of transactions within each slot.
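
Below is a minimal Python sketch of this slot-based ordering, written for illustration. The 500 ms granularity in the usage example and the within-slot selection rule (a seeded random shuffle) are placeholders rather than the parameters and mechanism specified in the paper.

```python
# Hedged sketch of slot-based ordering: group transactions by receive-time
# into fixed-width granularity intervals (slots), treat transactions in the
# same slot as simultaneous, and apply a selection rule within each slot.
import random
from collections import defaultdict

def order_batch(txs, granularity_ms, seed=0):
    """txs: list of (tx_id, receive_time_ms) collected in one sequencing window.
    Returns tx ids ordered slot by slot."""
    slots = defaultdict(list)
    for tx_id, t in txs:
        slots[t // granularity_ms].append(tx_id)  # assign each tx to its slot
    rng = random.Random(seed)
    ordering = []
    for slot in sorted(slots):                    # slots processed in time order
        batch = slots[slot]
        rng.shuffle(batch)                        # placeholder selection mechanism
        ordering.extend(batch)
    return ordering

# Hypothetical usage: "a" and "b" share a slot, "c" lands in a later one.
print(order_batch([("a", 120), ("b", 130), ("c", 900)], granularity_ms=500))
```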

Determining the optimal granularity interval requires striking a balance between fairness and efficiency in transaction ordering: an extremely small fairness granularity resembles classic FCFS ordering (and displays similar drawbacks), while an excessively large granularity parameter leads to a time-consuming ordering process within intervals.

Experiments and results

Accurate estimation of Arbitrum’s burst period is hindered by the imprecise nature of block timestamps (which are represented as Unix seconds) and the centralized nature of the sequencer node. To address these limitations, we created a synthetic dataset with precise receive-times for transactions (in milliseconds) using a statistical distribution model.

The statistical distribution model’s parameters were calculated from Arbitrum’s average block-time data, and we used the model to generate one million transaction records with corresponding receive-times. We then calculated the average interval between transactions in the generated dataset to estimate Arbitrum’s burst period (in milliseconds).
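
For illustration, the sketch below mimics this step with an exponential inter-arrival model: it generates synthetic receive-times and then recovers the mean gap as the burst-period estimate. The exponential distribution and the 250 ms mean gap are assumptions made for this example, not the distribution and parameters fitted to Arbitrum data in the paper.

```python
# Minimal sketch of the synthetic-dataset step (illustrative model/parameters).
import random

def synth_receive_times(n_tx: int, mean_gap_ms: float, seed: int = 0) -> list[float]:
    """Generate n_tx receive-times (ms) with exponentially distributed gaps."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_tx):
        t += rng.expovariate(1.0 / mean_gap_ms)   # draw the next inter-arrival gap
        times.append(t)
    return times

times = synth_receive_times(1_000_000, mean_gap_ms=250.0)  # hypothetical mean gap
gaps = [b - a for a, b in zip(times, times[1:])]
# In this toy version the estimate simply recovers the assumed mean gap.
print("estimated burst period (ms):", sum(gaps) / len(gaps))
```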

Evaluation/conclusion

We adopted an evaluation methodology similar to the one used by Kelkar et al. in their work on the Themis fair-ordering protocol. Here, we focus on measuring how closely the final transaction ordering produced by our algorithm aligns with the ideal ordering, which is obtained by arranging transactions according to their (client-side) generation times rather than their (sequencer-side) receive-times.

More concretely, the evaluation procedure compares, for pairs of transactions, their order in the final sequence with their original order at generation time. This allows us to quantify the degree of deviation from the ideal ordering and assess how accurately the proposed algorithm maintains the chronological order of transactions.
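
A minimal sketch of this pairwise metric is given below: it computes the fraction of transaction pairs whose relative order in the final sequence matches their relative order by generation time. The names and example data are illustrative.

```python
# Sketch: pairwise agreement between the algorithm's output order and the
# ideal order implied by client-side generation times.
from itertools import combinations

def pairwise_accuracy(final_order: list[str], gen_time: dict[str, float]) -> float:
    """final_order: tx ids as output by the ordering algorithm.
    gen_time: tx id -> client-side generation timestamp."""
    pos = {tx: i for i, tx in enumerate(final_order)}
    agree = total = 0
    for a, b in combinations(final_order, 2):
        if gen_time[a] == gen_time[b]:
            continue                              # ties carry no ordering information
        total += 1
        # the pair agrees if the earlier-generated tx also appears earlier in the output
        if (gen_time[a] < gen_time[b]) == (pos[a] < pos[b]):
            agree += 1
    return agree / total if total else 1.0

# Hypothetical example: "b" was generated first but sequenced second -> 2/3 pairs agree.
print(pairwise_accuracy(["a", "b", "c"], {"a": 10.0, "b": 5.0, "c": 20.0}))
```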

In the evaluation, our algorithm achieved an accuracy of roughly 70% for the Arbitrum One network. This means that about 70% of transaction pairs are sequenced according to their generation times, i.e., they align with the ideal order, even in scenarios involving transactions with varying network delays.

The fair-ordering algorithm is adaptable to other blockchains but requires network-specific granularity calculations. While it does not entirely eliminate the issues associated with classic FCFS ordering, it mitigates them significantly and improves fairness in transaction ordering across different network-latency scenarios. The algorithm can also be applied in a distributed setting, where each sequencer creates its local ordering and sends it to a leader who aggregates the ordering preferences from replica nodes into a final ordering.


I’m pleased to share that our final paper has been published at the IEEE Blockchain Conference 2024.
