FRP-24: Quantifying MEV on L2s

Some people have mentioned spamming, so this report on a spamming strategy on Polygon may be interesting:

2 Likes

This is fascinating, thanks for sharing @Quintus. I wonder if there’s a cutoff point after which this type of strategy (more like spam & griefing) isn’t really economic anymore?

For some quick reference stats, here are the USD-denominated gas fees for a simple WETH<>USDC swap on Uni v3 across various chains at the time of writing this post:

  • Ethereum L1 = $1.94
  • Polygon PoS = <$0.01
  • Optimism = $0.09
  • Arbitrum One = $0.59

Solana DEX spot swaps are also effectively gas-free.

I suppose this does make the case for setting minimum (non-zero) base fees.

2 Likes

FYI, we have started working on the topic with @0xpanoramix (FRP 24).
We have a public repo, feel free to check/comment on our work.
We are now actively working on having a clean branch for Polygon while doing some trials on Arbitrum and researching a way to scale the historical chain analysis.

5 Likes

Very cool, thank you for sharing it here!

1 Like

Regarding the historical chain analysis, you might be interested in Substreams: https://substreams.streamingfast.io/

The TL;DR is that Substreams is a new data-streaming and indexing engine from The Graph that does a lot of the heavy lifting for historical chain processing. Here is a list of chains they have built support for.

Here is what you can get from Substreams:

2 Likes

Update on the FRP 24:

  • We are now saving all MEV txs to a newly added DB
  • We have added a module that takes care of the profit analysis (retrieving historical prices, computing the USD profit, and running some other analyses)
  • Improving on Marlin Protocol’s version, which gets the tx data from the logs rather than the traces, works great for Polygon (200k+ blocks analysed)
  • We started running mev-inspect-py for Arbitrum, but it is faster to go with the logs, so that’s what we are doing
  • We have a PoC for Optimism using the logs
  • We have also been working on scaling: with 38M blocks on Polygon, running the whole history is going to take quite some time. One lever is batching requests; another is potentially running local nodes to reduce request latency. Of course the hardware helps as well; we should have some interesting benchmarks to share here in the coming weeks.
  • One bottleneck is the RPC provider: we quickly hit rate limits unless we use Pocket Network, but it is often down. I think we will implement a rotation of RPC providers (a rough sketch of the idea follows below).
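
For illustration, here is a minimal sketch of what such a rotation could look like in Python; the endpoint URLs, retry counts and error handling are assumptions for the example, not what we actually run:

```python
import itertools
import requests

# Hypothetical list of archive-capable RPC endpoints for Polygon.
RPC_ENDPOINTS = [
    "https://polygon-rpc.example-provider-a.com",
    "https://polygon-rpc.example-provider-b.com",
    "https://polygon-rpc.example-provider-c.com",
]
_endpoints = itertools.cycle(RPC_ENDPOINTS)

def rpc_call(method: str, params: list, retries: int = 3):
    """JSON-RPC call that rotates to the next provider on rate limits or errors."""
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    for _ in range(retries * len(RPC_ENDPOINTS)):
        url = next(_endpoints)
        try:
            resp = requests.post(url, json=payload, timeout=10)
            if resp.status_code == 429:   # this provider is rate-limiting us
                continue                  # rotate to the next endpoint
            resp.raise_for_status()
            body = resp.json()
            if "error" not in body:
                return body["result"]
        except requests.RequestException:
            continue                      # endpoint down or erroring, rotate
    raise RuntimeError(f"All RPC endpoints failed for {method}")

# Example: fetch a block (with full txs) through whatever endpoint is available.
block = rpc_call("eth_getBlockByNumber", [hex(25_000_000), True])
```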

As always, please feel free to have a look at what we do on the repo, the branches of interest are master-polygon, master-arbitrum and master-optimism. Don’t hesitate to give us feedback and ideas.

Thank you @0xEvan, but I think they do not support any of the blockchains we are diving into here. It would be interesting to look at other chains though. Maybe a future FRP :slight_smile:

2 Likes

The current plan is:

  • Finding a reliable source of prices (Coingecko and CryptoCompare only provide hourly data for the last 90 days; we need historical hourly data since inception). If you have anything to recommend, that would help.
  • Running the analysis on Polygon for a long period (500k blocks) and seeing if we face any issues → fixing and iterating, plus measuring the time it takes and other benchmarking
  • Once the above is done for Polygon, we want to do it for Arbitrum and Optimism
  • Then running the whole history since inception to get the final data and write the paper
1 Like

Marlin and Fastlane are two projects that may be helpful and might want to collaborate with you on this research. Not sure if you’re involved with either of those, but figured I would mention them in case it’s helpful!

Seems like you’re already working somewhat with Marlin:

Here is a list of the metrics we are looking at for each chain:

  • USD profit by block: time series + average + median + distribution
  • USD profit by day: time series + average + median + distribution
  • number of txs by block: time series + average + median + distribution
  • number of txs by day: time series + average + median + distribution
  • what are the top 10 tokens profit is taken in, and how is it distributed across them?
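
For concreteness, here is a rough sketch of how these aggregates could be computed, assuming a (hypothetical) table of MEV txs with block number, block timestamp, profit token and USD profit per tx:

```python
import pandas as pd

# Hypothetical MEV tx table: one row per arb/liquidation.
# In practice this would be read from the DB the inspections write to.
mev_txs = pd.DataFrame({
    "block_number":    [20_000_000, 20_000_000, 20_000_123],
    "block_timestamp": ["2021-09-01 12:00:00", "2021-09-01 12:00:02", "2021-09-02 08:00:00"],
    "profit_token":    ["WMATIC", "USDC", "WETH"],
    "profit_usd":      [12.5, 3.0, 40.0],
})

# USD profit and number of txs by block.
by_block = mev_txs.groupby("block_number").agg(
    profit_usd=("profit_usd", "sum"),
    n_txs=("profit_usd", "size"),
)

# Same metrics aggregated by day.
mev_txs["day"] = pd.to_datetime(mev_txs["block_timestamp"]).dt.date
by_day = mev_txs.groupby("day").agg(
    profit_usd=("profit_usd", "sum"),
    n_txs=("profit_usd", "size"),
)

# Average, median and distribution of daily profit.
print(by_day["profit_usd"].describe())

# Top 10 tokens profit is taken in, and how profit distributes across them.
print(mev_txs.groupby("profit_token")["profit_usd"].sum().nlargest(10))
```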

What else would you guys love to see?

I come with some price information from the team…

@alex: I believe we use Uniswap oracle data for the MEV-Explore dashboard! (so, per block)
cc @taarushv

@Tina: I don’t know what our data team’s solution is currently, but in the past (not in Flashbots) Amberdata had provided me with great price feeds for my mining research.

@elainehu: yes, Amberdata will have this data. I use Chainlink Data Feeds (Data Feeds API Reference | Chainlink Documentation) since it’s an aggregated price, not just from a single DEX.

1 Like

Following up after reviewing the code: an earlier version of inspect used to rely on Uniswap for historical prices, but in the current version we first define the tokens we’re interested in here and then use Coingecko to fetch the prices here.

Another service of ours internally relies on Uniswap (which is definitely the way to go, given the issues with daily prices that you highlighted, but also to support arbitrary tokens), reference code.

Will talk to the data team about a more robust price source (vs. daily prices from Coingecko); interested in what you end up doing too. Because we run SELECT usd_price FROM prices WHERE token_address = a.profit_token_address AND timestamp <= b.block_timestamp ORDER BY timestamp DESC LIMIT 1 when mev_summary is generated, we can append more granular prices and regenerate a more accurate summary without doing a full backfill.
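
For illustration, the same “latest price at or before the block timestamp” lookup sketched in pandas with merge_asof; the tables and column values here are made up for the example:

```python
import pandas as pd

# Hypothetical price table: one row per (token, timestamp) observation.
prices = pd.DataFrame({
    "token_address": ["0xTokenA", "0xTokenA", "0xTokenB"],
    "timestamp": pd.to_datetime(["2021-09-01 00:00", "2021-09-01 01:00", "2021-09-01 00:30"]),
    "usd_price": [1.00, 1.02, 2500.0],
}).sort_values("timestamp")

# Hypothetical mev_summary rows to enrich with prices.
mev_summary = pd.DataFrame({
    "profit_token_address": ["0xTokenA", "0xTokenB"],
    "block_timestamp": pd.to_datetime(["2021-09-01 00:45", "2021-09-01 00:45"]),
}).sort_values("block_timestamp")

# For each row, take the most recent price at or before the block timestamp
# (the pandas equivalent of the ORDER BY timestamp DESC LIMIT 1 query above).
enriched = pd.merge_asof(
    mev_summary,
    prices,
    left_on="block_timestamp",
    right_on="timestamp",
    left_by="profit_token_address",
    right_by="token_address",
    direction="backward",
)
print(enriched[["profit_token_address", "block_timestamp", "usd_price"]])
```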

2 Likes

Hello,
Thank you very much. I worked on integrating the oracle prices into the profit analysis; it works fine on Polygon with Quickswap. More granularity and better precision, which will yield a better analysis.
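
For reference, a minimal sketch of the kind of on-chain price lookup this enables, assuming a UniswapV2-style Quickswap pair queried through web3.py against an archive node; the RPC URL, pair address and decimals below are placeholders, not the actual values we use:

```python
from web3 import Web3

# Placeholder archive-node endpoint and UniswapV2-style pair address (hypothetical).
w3 = Web3(Web3.HTTPProvider("https://polygon-archive.example.com"))
PAIR_ADDRESS = "0x0000000000000000000000000000000000000000"  # e.g. a Quickswap WMATIC/USDC pair

# Minimal ABI: we only need getReserves() from the UniswapV2 pair interface.
PAIR_ABI = [{
    "name": "getReserves", "type": "function", "stateMutability": "view",
    "inputs": [],
    "outputs": [
        {"name": "_reserve0", "type": "uint112"},
        {"name": "_reserve1", "type": "uint112"},
        {"name": "_blockTimestampLast", "type": "uint32"},
    ],
}]

pair = w3.eth.contract(address=Web3.to_checksum_address(PAIR_ADDRESS), abi=PAIR_ABI)

def price_at_block(block_number: int, decimals0: int = 18, decimals1: int = 6) -> float:
    """Spot price of token0 in token1 at a historical block (needs an archive node)."""
    r0, r1, _ = pair.functions.getReserves().call(block_identifier=block_number)
    return (r1 / 10**decimals1) / (r0 / 10**decimals0)

print(price_at_block(25_000_000))
```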
An update on where we are:

  • @0xpanoramix has been running some benchmarks for scaling the application
  • we now use a better source of prices; I just need to work on the concurrency (with asyncio; a sketch follows after this list)
  • I have also added a few metric functions
  • then we can run the whole history from block 0 on Polygon
  • and we need to integrate the latest changes into the Optimism and Arbitrum versions
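
A minimal sketch of what that asyncio concurrency could look like; fetch_price, the token list and the concurrency limit are hypothetical stand-ins for the real price-fetching code:

```python
import asyncio

# Hypothetical async price lookup (in practice this would hit the oracle / price source).
async def fetch_price(token: str, block_number: int) -> float:
    await asyncio.sleep(0.01)   # stand-in for the real I/O call
    return 1.0

async def fetch_all_prices(tokens: list, block_number: int, max_concurrency: int = 10):
    """Fetch prices for many tokens concurrently, bounded by a semaphore."""
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(token: str) -> float:
        async with sem:
            return await fetch_price(token, block_number)

    return await asyncio.gather(*(bounded(t) for t in tokens))

prices = asyncio.run(fetch_all_prices(["WMATIC", "USDC", "WETH"], 25_000_000))
print(prices)
```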
1 Like

Hello, have there been any updates on this research, specifically as it relates to Arbitrum and Optimism?

Then at a higher level, what is the rough scale of MEV on these L2s? For instance, could a searcher expect to be more or less profitable on an L2 as opposed to mainnet?

Update:

  • we did struggle with Polygon archival RPC calls for a while, but that is now fixed
  • we updated the way we get prices and decimals to make it faster and more seamless
  • 500k blocks for Polygon ran successfully
  • benchmarks on block batch size and concurrency are done
  • we will be launching the full Polygon historical analysis in the coming days if everything is alright
3 Likes

We fixed more edge-case errors, optimised the whole analysis even more for speed, and finally launched it from block 0.
Somehow mev-inspect-py got stuck and froze, with no error, so we’re trying to collect the profit from successfully analysed blocks and relaunch the analysis for the missing blocks.
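
To illustrate the relaunch step, a small sketch of how the missing block ranges could be identified from the blocks already analysed; the analysed_blocks list is a made-up example:

```python
# Hypothetical sorted list of block numbers that were analysed successfully.
analysed_blocks = [0, 1, 2, 5, 6, 10]
last_block = 12   # target head of the backfill

def missing_ranges(analysed, last):
    """Return (start, end) inclusive ranges of blocks still to be (re)analysed."""
    gaps, expected = [], 0
    for b in sorted(set(analysed)) + [last + 1]:
        if b > expected:
            gaps.append((expected, b - 1))
        expected = b + 1
    return gaps

print(missing_ranges(analysed_blocks, last_block))   # [(3, 4), (7, 9), (11, 12)]
```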

2 Likes

We are done estimating MEV on Polygon and will post the results here sometime next week. We are focused on trying to run Arbitrum and Optimism archival nodes right now, as Pocket Network does not provide archive endpoints for them.

In the meantime, please give your estimate of MEV on Polygon:

What is your estimate of total MEV profit (USD) to date on Polygon?
(Only including arbs and liquidations)

  • 0 - 25M
  • 25M - 50M
  • 50M - 100M
  • 100M - 200M
  • 200M - 500M
  • 500M - 1Bn
  • >1Bn

0 voters

5 Likes

Hello,
It has been a long time without an update from us. We have been checking and rechecking that all our results and metrics make sense for Polygon, and we will post them here later this week.
Also, for Arbitrum and Optimism, Pocket Network does not provide archive node endpoints, and we have been talking with different providers, but the pricing is quite high (> $3k for an archive node at Chainstack, for example). So we figured it would be better to run our own nodes, which also improves latency a lot: we went from 250ms to 2ms per request, so way faster. The issue is that it takes a very long time for these archive nodes to sync (days, really) and the memory requirements are increasing faster than the docs get updated… We are now waiting for our OP node to sync before running the analysis.

4 Likes

Thanks for all the updates here :slight_smile: really appreciate them!

How are you deploying these nodes? Your own on-prem hardware, a managed public cloud, or something in between? Asking because that is a dramatic latency improvement.

feel free to DM if sensitive!

Thanks for the update @Eru_Iluvatar :slight_smile:

This is very interesting to me. I’m more interested in highlighting the challenges and a first iteration of the code than in the final numbers. So let’s think about how to proceed here, since the grant scope was for a couple of months of work and we don’t want to take advantage of the curiosity and commitment of our contributors.

My guess is that both of you would like to include the numeric analysis of the inspected blocks in the final report. I think it’s OK to continue if this phase is mostly idle time waiting for nodes to sync. This would make the FRP super cool, of course, and would include a very valuable summary of your experience running the nodes. But if you feel this is already taking too long, a good result for me is to close here and report your findings so far. That would be cool enough, and would seed future work.

Intermediate alternatives that come to mind would be to partner with one of those data providers and give them credit in the report, or to reach out to organizations in those ecosystems that are already running archive nodes and would like to collaborate. Or Flashbots can pay for access to the nodes to complete the task.

What do you think? What would be your preferred path to give closure to this FRP?