FRP-24: Quantifying MEV on L2s

Following up after reviewing the code: an earlier version of inspect relied on Uniswap for historical prices, but in the current version we first define the tokens we’re interested in here and then use CoinGecko to fetch the prices here.

Another service of ours internally relies on Uniswap (which is definitely the way to go, given the issues with daily prices that you highlighted, but also to support arbitrary tokens), reference code.

Will talk to the data team about a more robust price source (vs daily prices from CoinGecko); interested in what you end up doing too. Because we run

  SELECT usd_price FROM prices
  WHERE token_address = a.profit_token_address
    AND timestamp <= b.block_timestamp
  ORDER BY timestamp DESC LIMIT 1

when mev_summary is generated, we can append more granular prices and regenerate a more accurate summary without doing a full backfill.


Thank you very much. I worked on integrating the oracle prices into the profit analysis; it works fine on Polygon with Quickswap. More granularity, better precision, which will yield a better analysis.
An update on where we are:

  • @0xpanoramix has been running some benchmarks for scaling the application
  • we now use a better source of prices, I just need to work on the concurrency (with asyncio)
  • I have also added a few metric functions
  • then we can run the whole history from block 0 on Polygon
  • and we need to integrate the latest changes into the Optimism and Arbitrum versions
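For the concurrency point above, a sketch of one way to do it with asyncio: bound the number of in-flight price requests with a semaphore. `fetch_price` is a hypothetical stand-in for a real HTTP call to the price source.

```python
import asyncio

async def fetch_price(token: str) -> float:
    # Hypothetical stand-in for a real price request.
    await asyncio.sleep(0.01)  # simulate network latency
    return 1.0

async def fetch_all(tokens, max_concurrency=10):
    sem = asyncio.Semaphore(max_concurrency)  # cap in-flight requests

    async def bounded(token):
        async with sem:
            return token, await fetch_price(token)

    # gather runs the bounded fetches concurrently and preserves order
    return dict(await asyncio.gather(*(bounded(t) for t in tokens)))

prices = asyncio.run(fetch_all([f"0xT{i}" for i in range(50)]))
```

With the semaphore set to 10, the 50 fetches overlap in batches instead of running one by one, without overwhelming the upstream endpoint.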

Hello, have there been any updates with this research, specifically as it relates to Arbitrum and Optimism?

Then at a higher level, what is the rough scale of MEV on these L2s? For instance, could a searcher expect to be more or less profitable on an L2 as opposed to mainnet?


  • we did struggle with Polygon archival RPC calls for a while but it is now fixed.
  • we updated the way we get prices and decimals to make it faster and more seamless
  • 500k blocks for Polygon ran successfully
  • benchmarks on block batch size and concurrency are done
  • we will be launching the full Polygon historical analysis in the coming days if everything goes well

We fixed more edge-case errors, optimised the whole analysis even more for speed, and finally launched it from block 0.
Somehow mev-inspect-py got stuck and froze, with no error, so we are trying to collect the profit from the successfully analysed blocks and relaunch the analysis for the missing blocks.
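The recovery step described above can be sketched as follows (function and variable names are hypothetical): given the set of blocks that were successfully analysed, compute the contiguous gaps to relaunch.

```python
def missing_ranges(analysed, start, end):
    """Return (first, last) inclusive block ranges in [start, end]
    that are absent from the analysed set."""
    done = set(analysed)
    ranges, gap_start = [], None
    for b in range(start, end + 1):
        if b not in done and gap_start is None:
            gap_start = b  # a new gap begins
        elif b in done and gap_start is not None:
            ranges.append((gap_start, b - 1))  # the gap just ended
            gap_start = None
    if gap_start is not None:
        ranges.append((gap_start, end))  # gap runs to the end
    return ranges

# e.g. blocks 0-9 with 3, 4 and 8 missing:
missing_ranges([0, 1, 2, 5, 6, 7, 9], 0, 9)  # [(3, 4), (8, 8)]
```

Each returned range can then be fed back to the inspector as a batch, so only the missing blocks are re-analysed.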


We are done estimating MEV on Polygon and will post the results here sometime next week. Right now we are focused on getting Arbitrum and Optimism archival nodes running, as Pocket Network does not provide endpoints.

In the meantime, please give your estimate of MEV on Polygon:

What is your estimate of total MEV profit (USD) to date on Polygon?
(Only including arbs and liquidations)

  • 0 - 25M
  • 25M - 50M
  • 50M - 100M
  • 100M - 200M
  • 200M - 500M
  • 500M - 1Bn
  • >1Bn



It has been a long time without an update from us. We have been checking and rechecking that all our results and metrics make sense for Polygon; we will post them here later this week.
As for Arbitrum and Optimism, Pocket Network does not provide archive node endpoints, and the providers we have talked to are quite expensive (> $3k for an archive node at Chainstack, for example), so we figured it would be better to run our own nodes. This also improves latency a lot: we went from 250ms to 2ms per request, so way faster. The issue is that these archive nodes take a very long time to sync (days, really), and memory requirements are increasing faster than the docs get updated… We are now waiting for our OP node to sync before running the analysis.


Thanks for all the updates here :slight_smile: really appreciate them!

how are you deploying these nodes? your own on-prem hardware, managed public cloud, or something in between? asking given that’s a dramatic latency improvement

feel free to DM if sensitive!

Thanks for the update @Eru_Iluvatar :slight_smile:

This is very interesting to me. I’m more interested in highlighting the challenges and a first iteration of the code than in the final numbers. So let’s think about how to proceed here, since the grant scope was for a couple of months of work, and we don’t want to take advantage of the curiosity and commitment of our contributors.

My guess is that both of you would like to include the numeric analysis of the inspected blocks in the final report. I think it’s OK to continue if this phase is mostly idle waiting for nodes to sync. This would make the FRP super cool, of course, and would include a very valuable summary of your experience running the nodes. But if you feel this is already taking too long, a good result for me is to close here and report your findings so far. That would be cool enough, and would seed future work.

Intermediate alternatives that come to mind would be to partner with one of those data providers and give them credits on the report, or maybe reach out to organizations in those ecosystems that are already running archive nodes and would like to collaborate. Or Flashbots can pay for the access to the nodes to complete the task.

What do you think? What would be your preferred path to give closure to this FRP?

This is very interesting to me. I’m more interested in highlighting the challenges and a first iteration of the code, than on the final numbers.

We’re already dedicating a part of the paper to the scaling of the software: the limitations we faced, how we improved things, the benchmarks we ran, etc. It should be quite useful for future use of mev-inspect-py.

We are now only waiting for the OP node to sync before running the analysis there. Polygon and Arbitrum are done, so I think it’s fine to just wait for the OP node to finish syncing.


We did some analysis and tried to reproduce some of our results before publishing the Polygon numbers, and we could not… We spent a lot of time searching for the root cause and finally found a bug in the way we retry price fetches on failure, which was at the origin of the whole issue. It is now fixed. Sorry for the delay in giving an update; the root cause was not obvious at all, and tracking it down was long and frustrating.
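The bug class described above is easy to hit. A defensive sketch (the `fetch` callable is hypothetical): retry the flaky fetch with backoff, but raise rather than silently substitute a default or stale value, so a failed fetch can never corrupt the profit numbers.

```python
import time

class PriceUnavailable(Exception):
    """Raised when all retries are exhausted."""

def get_price_with_retry(fetch, token, retries=3, backoff=0.1):
    """Retry a flaky price fetch; `fetch` is a hypothetical
    callable(token) -> float that may raise on transient failures."""
    last_exc = None
    for attempt in range(retries):
        try:
            return fetch(token)
        except Exception as exc:
            last_exc = exc
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    # Failing loudly keeps bad prices out of the summary tables.
    raise PriceUnavailable(token) from last_exc
```

The key design choice is that the function has only two outcomes: a genuinely fetched price, or an exception the caller must handle; there is no third path that quietly returns a wrong value.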

Obviously the results we had come to before are wrong, so we will need to relaunch the whole analysis for the three chains.
It is currently running for Optimism. Then we will run it for Polygon and then Arbitrum.


Alright, after weeks of reviewing the whole methodology and results and re-running everything, here we are!
I am excited to announce that below are the results for Optimism and Arbitrum. Polygon will come a bit later.

NB 1: Note that this is just a summary, more details are provided in the paper (particularly distributions, analysis of the top tokens, and number of transactions).

NB 2: I am very hesitant about posting numbers here without any explanation of the methodology, limitations, etc., as they can easily be subject to vehement criticism, so I am going to start with that. Also note that we have a dump of the DB in case someone is interested in reproducing the results. We are also human and error-prone, so feel free to double-check, give feedback, etc. The code is public.

I. Methodology

We are using mev-inspect-py with Marlin’s modifications to analyze logs rather than traces, which is:
1/ way faster
2/ simpler, because we look for swap events and do not need to add the ABI of every DEX out there
We are focusing on classic atomic token arbitrages, so our results represent only one part of the extracted MEV; we do not include token sniping at launch or NFT sniping/arbitrage.
We are fetching prices from UniswapV2/V3/Quickswap/Sushiswap to get the USD profit at the time of the trade, which gives us block-precision prices.
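A sketch of the price computation (illustrative numbers; real code would read the pool’s reserves on-chain, e.g. via `getReserves` on a V2-style pair, at the block in question): for a token/USDC pool, the spot price is the reserve ratio adjusted for decimals.

```python
def spot_price_usd(token_reserve: int, usdc_reserve: int,
                   token_decimals: int, usdc_decimals: int = 6) -> float:
    """Spot USD price of a token from a V2-style token/USDC pool:
    (usdc_reserve / 10**usdc_decimals) / (token_reserve / 10**token_decimals)."""
    return (usdc_reserve / 10 ** usdc_decimals) / (token_reserve / 10 ** token_decimals)

# e.g. a pool holding 1,000 WETH (18 decimals) against 1,800,000 USDC:
spot_price_usd(1_000 * 10**18, 1_800_000 * 10**6, 18)  # 1800.0
```

Reading the reserves at the arbitrage’s block number is what makes the resulting USD profit block-precise rather than daily.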

II. Limitations:

Note that fetching the price at the time of the block means the computed profit does not always represent the realized profit, but rather the mark-to-market profit, as profit taken in non-stablecoin tokens may quickly evaporate as prices move. Also note that block granularity is less precise than tx granularity, but tx granularity would require replaying all the transactions within the block one by one, which is really long and cumbersome when looking at every block from block 0.
Also, we are getting the price of each token vs USDC, so if there is no token/USDC pool, we exclude that arbitrage tx from the analysis. (A great improvement would be to look at the token vs the native chain token, i.e. ETH, MATIC, ARB, OP.)
We haven’t made any modification to the classification heuristics of mev-inspect-py, as the goal is to get a good baseline for comparison with existing Ethereum results. Note, however, that the classification methodology is flawed and could be improved (looking at EigenPhi’s methodology, for example).

III. Results for Optimism

First let’s look at the number of MEV transactions over time:

And now the historical MEV profit over time:

Now, in terms of mean and median MEV profit (USD) by tx, day and block:
(we did the analysis both for all days/blocks and for only the days/blocks that have MEV in them; the reason is that most blocks at the beginning have no MEV, so they skew the data).
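The skew mentioned above is easy to see with a toy example (the numbers are illustrative, not our results):

```python
from statistics import mean, median

# Toy per-block MEV profits (USD): most early blocks have none.
profits = [0.0] * 900 + [50.0] * 100  # 900 empty blocks, 100 with MEV

all_blocks_mean = mean(profits)        # dragged down by the empty blocks
mev_blocks = [p for p in profits if p > 0]
mev_blocks_mean = mean(mev_blocks)     # what a block with MEV actually yields
all_blocks_median = median(profits)    # 0.0 -- dominated by empty blocks
```

Here the all-blocks mean is 5.0 while the MEV-blocks mean is 50.0, and the all-blocks median collapses to 0, which is why both views are reported.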

IV. Results for Arbitrum

First let’s look at the number of MEV transactions over time:

And now the historical MEV profit over time:

Now, in terms of mean and median MEV profit (USD) by tx, day and block:

(we did the analysis both for all days/blocks and for only the days/blocks that have MEV in them; the reason is that most blocks at the beginning have no MEV, so they skew the data).


Thanks for your great effort. I was inspired by your initial proposal and then started investigating MEV on L2s using Dune Analytics. Interestingly, I got results at almost the same time you posted this report. The result dashboard can be found here.

The methodology I used was much simpler and less exhaustive than what you implemented, but I guess it will help track MEV activity on L2s continuously.
Hope this helps, and feedback is welcome!


Did I miss the Polygon numbers, or are they still processing?
Appreciate all your efforts!