The Blob Merger - Our ETH Global project

Hi everyone, I would like to introduce a project we worked on during ETH Global Istanbul: the Blob Merger, a solution designed to optimize blob data usage on Ethereum once EIP-4844 is implemented. Since blobs have a fixed size, not every user utilizes the entire capacity. Our solution processes submitted blob data and creates efficient blobs that maximize the available space.

What is the Blob Merger about?
We are building on the Suave blockchain, and our SUAPP consists of two components. First, a precompile responsible for merging the supplied data. Second, a regular app contract that manages the connection between users (typically rollups) and block builders. Users can submit their blob data to the app contract on the Suave blockchain, and builders can use the contract to receive optimized bundles.
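
To give a feel for the merging step, here is a minimal Go sketch of the packing idea behind the precompile: greedy first-fit-decreasing bin packing of whole submissions into fixed-size blobs. This is only an illustration, not the precompile's actual code; the names are hypothetical and we ignore EIP-4844 field-element encoding entirely.

```go
package main

import (
	"fmt"
	"sort"
)

// BlobSize is the fixed EIP-4844 blob size: 4096 field elements * 32 bytes.
// (Slightly less is usable for raw data in practice; we ignore that here.)
const BlobSize = 4096 * 32 // 131,072 bytes

// Submission is one rollup's payload waiting to be merged.
type Submission struct {
	Rollup string
	Data   []byte
}

// MergeFirstFitDecreasing packs submissions into as few full-size blobs as
// possible using the classic first-fit-decreasing heuristic. Submissions are
// kept whole here; splitting across blobs is discussed under "Future plans".
func MergeFirstFitDecreasing(subs []Submission) [][]Submission {
	// Placing the largest payloads first tends to pack tighter.
	sort.Slice(subs, func(i, j int) bool {
		return len(subs[i].Data) > len(subs[j].Data)
	})

	var blobs [][]Submission // submissions grouped per merged blob
	var used []int           // bytes already occupied in each merged blob

	for _, s := range subs {
		placed := false
		for i := range blobs {
			if used[i]+len(s.Data) <= BlobSize {
				blobs[i] = append(blobs[i], s)
				used[i] += len(s.Data)
				placed = true
				break
			}
		}
		if !placed { // no existing blob has room; open a new one
			blobs = append(blobs, []Submission{s})
			used = append(used, len(s.Data))
		}
	}
	return blobs
}

func main() {
	subs := []Submission{
		{"rollup-a", make([]byte, 90_000)},
		{"rollup-b", make([]byte, 40_000)},
		{"rollup-c", make([]byte, 30_000)},
	}
	for i, blob := range MergeFirstFitDecreasing(subs) {
		total := 0
		for _, s := range blob {
			total += len(s.Data)
		}
		fmt.Printf("blob %d: %d submissions, %d/%d bytes\n", i, len(blob), total, BlobSize)
	}
}
```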

You can find more information on the ETH Global showcase page (Blob Merger Showcase) or on our GitHub page.

Future plans
During the 36 hours of the hackathon, we only managed to build a proof of concept of the Blob Merger, which we intend to complete in the near future. To take the Blob Merger from a hackathon project to an app that rollups can actually use, the following steps need to be taken:

  • Polished SUAPP: During the hackathon, we managed to implement and write tests only for the precompile. Work remains on the SUAPP contract, tests for that contract, and end-to-end tests for the whole app.
  • Test on the testnet: So far, the Blob Merger has been tested only on a local blockchain. Once the SUAPP is polished (see the previous step), we need to test it on a testnet as well.
  • Fees: Sending a request to merge blobs requires paying a transaction fee. To ensure fairness, we propose that the fee be divided among the entities that have data in the merged blob, in proportion to the volume of data each contributed (see the first sketch after this list). When traffic on the network is higher, demand for space in a blob will grow; for such cases, we are considering an auction system, so that a rollup can choose whether to wait or pay a premium to expedite the processing of its data. The payment mechanism itself could be handled through a deposit, but this is still to be discussed for the production version.
  • Splitting the data: Currently, the merger takes submitted blobs and tries to fit them whole into one merged blob, without ever splitting a payload across several blobs. So when a merged blob reaches 117 kB, 11 kB of capacity remains, but if no rollup has submitted a payload of 11 kB or smaller, that space stays empty. For greater effectiveness, we suggest splitting data between merged blobs (see the second sketch after this list). For example, if there are merged blobs of 117 kB and 123 kB and a 13 kB rollup payload is still in the queue, 11 kB would go into the first blob, filling it to 128 kB, and the remaining 2 kB into the second, making it 125 kB.
  • Communication back to the user: A future version of the Blob Merger should give rollups a way to find out whether their data was included in a blob.
  • Data expiration: In a future version, we would also like to let submitted data expire, and to let rollups cancel a pending request for specific data to be included in a blob.
  • Explorer: We realize there will be a need for a blob explorer as well. We plan to check existing solutions or create our own.
  • Standardization: Write down a standard for how blob merging should function.
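
To make the pro-rata fee split from the Fees step concrete, here is a small Go sketch. The names are illustrative, not the SUAPP's API, and we use uint64 for readability where production code handling wei amounts would use big.Int.

```go
package main

import "fmt"

// SplitFee divides a merged blob's transaction fee among the rollups whose
// data it contains, in proportion to the bytes each contributed. Rounding
// dust from integer division goes to the largest contributor, so the shares
// always sum to exactly fee.
func SplitFee(fee uint64, sizes map[string]uint64) map[string]uint64 {
	var total, paid uint64
	var largest string
	for r, n := range sizes {
		total += n
		if largest == "" || n > sizes[largest] {
			largest = r
		}
	}
	shares := make(map[string]uint64, len(sizes))
	for r, n := range sizes {
		shares[r] = fee * n / total
		paid += shares[r]
	}
	shares[largest] += fee - paid // hand the rounding dust to someone
	return shares
}

func main() {
	// A 117 kB merged blob with two contributors of 90 kB and 27 kB.
	fmt.Println(SplitFee(1_000_000, map[string]uint64{
		"rollup-a": 90_000,
		"rollup-b": 27_000,
	}))
}
```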
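
And a sketch of the splitting step, using the same round 128 kB figure as the example above (the actual EIP-4844 blob holds 131,072 bytes):

```go
package main

import "fmt"

// blobCapacity uses the post's round 128 kB figure for readability.
const blobCapacity = 128_000

// SpreadData splits a payload of `size` bytes across merged blobs that still
// have spare room, filling each in turn. `used` holds the bytes already
// occupied in each merged blob; chunks[i] is how much of the payload lands in
// blob i, and leftover > 0 means a fresh blob is still needed.
func SpreadData(size int, used []int) (chunks []int, leftover int) {
	leftover = size
	for _, u := range used {
		take := blobCapacity - u // free space in this blob
		if take > leftover {
			take = leftover
		}
		if take < 0 {
			take = 0
		}
		chunks = append(chunks, take)
		leftover -= take
	}
	return chunks, leftover
}

func main() {
	// The example above: blobs at 117 kB and 123 kB, a 13 kB payload waiting.
	chunks, left := SpreadData(13_000, []int{117_000, 123_000})
	fmt.Println(chunks, left) // [11000 2000] 0
}
```

A real implementation would also need per-chunk metadata, so a rollup's data can be located and reassembled across merged blobs.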

If you have any thoughts or ideas on how to improve the current state or the next steps, we will be happy to discuss them with you. Finally, we would like to thank all the people who helped us during the hackathon, especially Dmarz and the Flashbots team!


This is really interesting, thanks for sharing.

I wonder what the best way to represent the “demand” from blob providers is in this model. If you have more data than space in a blob, it makes sense to spread that data across multiple blocks, as you outlined before. However, there might be blob providers who want to be included sooner rather than later even if they are using the merger.

One solution would be to represent blob demand as a curve over time, where the amount you are willing to pay escalates every block. So a blob provider would be willing to pay a fee of N to be included in block 1, then N+5 in block 2, N+10 in block 3, and so forth. Then the job of the blob merger is simple: you just choose the highest-paying blobs right now.
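
A rough Go sketch of what I mean, with hypothetical names and a linear curve for simplicity:

```go
package main

import (
	"fmt"
	"sort"
)

// Bid is a blob provider's willingness to pay as a curve over time: a base
// fee at submission that rises by Slope with every block that passes.
type Bid struct {
	Rollup    string
	Base      uint64 // fee offered at the submission block
	Slope     uint64 // extra fee per elapsed block
	Submitted uint64 // block number of submission
}

// Offer returns what this bid pays if included at block `now`.
func (b Bid) Offer(now uint64) uint64 {
	return b.Base + b.Slope*(now-b.Submitted)
}

// PickHighestPaying sorts pending bids by their current offer, so the merger
// can simply fill the next blob from the front of the slice.
func PickHighestPaying(bids []Bid, now uint64) []Bid {
	sort.Slice(bids, func(i, j int) bool {
		return bids[i].Offer(now) > bids[j].Offer(now)
	})
	return bids
}

func main() {
	bids := []Bid{
		{"patient", 100, 5, 10}, // starts high, escalates slowly
		{"urgent", 90, 20, 10},  // starts lower, escalates fast
	}
	for _, now := range []uint64{10, 11, 12} {
		top := PickHighestPaying(append([]Bid(nil), bids...), now)[0]
		fmt.Printf("block %d: %s pays %d\n", now, top.Rollup, top.Offer(now))
	}
}
```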

It does create a fee market for entry into the merged blobs, but I think it’s an efficient one, because blob providers get included at the earliest and cheapest opportunity possible.

This is very cool… Did you look into the savings this would create for major rollups, say based on historical data?

Thank you so much for your comment. I believe the curve you mentioned could indeed address the priority problem. In my view, we might want to start with “first come, first served” for the initial version, just to demonstrate functionality, but prioritize implementing the curve right after.

Thank you! :pray: We haven’t tried to calculate it, but we anticipate that smaller rollups will use the Blob Merger more, because major rollups can likely fill almost an entire blob on their own. That said, the Blob Merger will prove useful for major rollups as well once we can split data across multiple blobs.