MEV-Boost Community Call #1 - 9 Mar 2023

Pt 2.

  • Chris Hager: I just wanted to ask Potuz: you have been commenting on PRs with some strong opinions, and I would love to hear you outline your way of thinking about this.

  • Potuz: I have a personal opinion and an Arbitrum opinion on this, and they are different. So let me tell you what I think. If we didn’t have PBS at all, no Flashbots or MEV-Boost, and I submitted a block with a tx that someone didn’t want on chain, then unless that someone held a lot of stake they could not fork it out: either they are the next proposer with a lot of stake and try to fork it out, or they hold enough stake to fork out several blocks in a row. With Flashbots, the soft PBS we have at this moment, the relay can cheat you. The relay can tell you it is going to produce a block for you and thereby not trigger whatever fallback you have for going back to local execution. For example, we would go back to local execution if the local payload would pay us more. So if there is a censored tx that is paying a lot and the builder wants to censor it, it won’t be able to, because the relay checks that the bid has to be higher than the local block. But the relay can lie to us and not produce the block, so we are replacing our trust assumption with one on the relay. With optimistic relaying this trust assumption moves to the builder.

  • This is a killer for many reasons, because now any builder that wants to censor a tx just has to stake 1 ETH, trigger whatever safeguard we have in our clients for falling back to local execution, produce an invalid block, slash himself 1 ETH, and eclipse a block that it would not be possible to eclipse otherwise. This kills a lot of the statistical analysis we want to do on chain for an L1. Now, speaking as an employee at an L2, here is why we want this and why rollups need this. Yes, you could do that as a one-off, and let me tell you why a one-off is already bad enough. A simple statistical analysis: suppose you hold 60% or less of the stake, which is a lot, meaning 60% of people are willing to fork blocks and 90% of people are willing to censor. These are strong numbers. Even under those circumstances, with only 10% of people not censoring and 40% of people not willing to fork, you will see two blocks in a row missing within a period of less than half an hour. This is without optimistic relaying. With optimistic relaying this does not need to happen, because the builder will slash himself 1 ETH, then come back as another builder and slash himself 1 ETH again.

  • The problem with this is that if a rollup wants to reduce its challenge period to, say, half an hour, then the sequencer himself can act as a builder, slash himself 1 ETH, and take billions out of the rollup. It moves the trust assumption from the relay, where it sits at this moment, to the builder, and this is unacceptable.

  • Justin Drake: I don’t completely understand, because if a builder wants to censor one block, for the next block they just need to pay a bribe to be the top bid, and then they can make an empty block or whatever. Or even better, they create a block that does extract MEV but doesn’t include the one tx they want to censor. It just seems very sub-optimal to create a bad block.

  • Potuz: Right, but they need to avoid triggering the local-execution fallbacks that we want. For example, if we had inclusion lists on Flashbots, then we would want them to be enforced; that would be one thing. The other thing: if we see a local payload that is paying a lot, then the builder is going to have to pay more than whatever the local payload is paying.

  • Justin Drake: Okay, so you are assuming a world where there is some sort of inclusion list that is implemented.

  • Potuz: That would be optimal, but even without inclusion lists this will work. I want to fall back whenever the bid is not high enough. This forces the builder to pay at least as much as the censored tx is willing to pay.
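The fallback rule under discussion can be sketched as follows. This is a hypothetical illustration, not actual client code; the function name and the plain wei comparison are assumptions.

```python
# Hypothetical sketch of the fallback Potuz describes: the proposer builds
# a payload locally and only takes the relay's bid if it pays more, so a
# censoring builder must outbid whatever the censored tx contributes.

WEI = 10**18  # 1 ETH in wei

def choose_payload(relay_bid_wei: int, local_value_wei: int) -> str:
    """Return which payload source the proposer should use for this slot."""
    if relay_bid_wei > local_value_wei:
        return "relay"  # the builder outbid local block building
    return "local"      # fall back to local execution

# If a censored tx makes the local payload worth 0.5 ETH, a censoring
# builder has to bid more than 0.5 ETH to win the slot.
assert choose_payload(int(0.4 * WEI), int(0.5 * WEI)) == "local"
assert choose_payload(int(0.6 * WEI), int(0.5 * WEI)) == "relay"
```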

  • Justin Drake: That is already the case no?

  • Potuz: Now if the builder doesn’t want to pay this, he cannot get away with it. But with your system the builder only has to pay up to 1 ETH, because he can just lie about the bid and not produce any block, and since the block has not been checked, this puts a bound on what he has to pay.

  • Justin Drake: I don’t completely understand. Even with optimistic relaying, if the censored tx is willing to pay x ETH to be included, then the bid needs to be at least x ETH; that doesn’t change with optimistic relaying.

  • Potuz: With optimistic relaying the builder can send you a bid saying “I will pay 32 ETH” and never pay it.

  • Justin Drake: No, that doesn’t work. The reason is that if x > 1 ETH, 1 ETH being the amount of collateral we are capping at at the moment, then we fall back to the full simulation.

  • Potuz: This is better; we are putting a cap on the builder’s bid.

  • Justin Drake: This is necessary: if someone bids 1 million ETH, we need a guarantee they can pay 1 million ETH.
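The collateral gate Justin describes, skipping simulation only when a bad block could be fully compensated out of posted collateral, might look roughly like this. The 1 ETH figure comes from the call; everything else is an assumed sketch.

```python
# Assumed sketch of the optimistic-relay gate: skip block simulation only
# when the bid is covered by the builder's posted collateral (capped at
# 1 ETH in the discussion); otherwise take the full-simulation path.

WEI = 10**18
COLLATERAL_CAP_WEI = 1 * WEI  # the 1 ETH cap mentioned on the call

def should_relay_optimistically(bid_value_wei: int,
                                builder_collateral_wei: int,
                                builder_in_good_standing: bool) -> bool:
    effective_collateral = min(builder_collateral_wei, COLLATERAL_CAP_WEI)
    return builder_in_good_standing and bid_value_wei <= effective_collateral

# A 0.5 ETH bid from a collateralized builder can skip simulation;
# a 2 ETH bid cannot, because it exceeds the collateral cap.
assert should_relay_optimistically(WEI // 2, WEI, True) is True
assert should_relay_optimistically(2 * WEI, WEI, True) is False
```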

  • Potuz: At least that kills that, and it leaves me with the point that it’s impossible to implement inclusion lists, but at least it kills one of the griefing vectors. But you see my endpoint: with this system a builder can eclipse a block, and I’m hoping we eventually have systems where a builder cannot arbitrarily prevent a block from being produced.

  • Justin Drake: Right, I understand your endgame, and I understand your use case as a Layer 2 optimistic rollup. My understanding is Arbitrum has a 7-day challenge period, and even if 10% of validators don’t use mev-boost you will get inclusion of the fraud proof extremely quickly. I do understand your goal now and will think about it.

  • Potuz: It’s not even for this; this is about creating a censorship oracle on chain. There is a big difference between 700–800 blocks missing, assuming arbitrary forking, and 130 blocks missing, assuming no arbitrary forking. But also for concepts that will be very useful on chain: the notion of safe head is also much better without the trust assumption that builders, as opposed to validators, are able to eclipse arbitrary blocks. It’s easier to prove the last head was not workable without optimistic relaying. These kinds of things should be thought about more, and there should be a public discussion about optimistic relaying, not in channels with 24 people like this one; the full community should be involved in this. It may decrease the censoring of OFAC txs, and we might see Tornado Cash txs getting in earlier, but I think it doesn’t help with censorship where censoring particular txs may be more expensive.

  • Justin Drake: I don’t think it changes anything. If we were to have it tomorrow I don’t think it changes anything but happy to hear a counter argument for sure.

  • Stokes: This is a public call and anyone can attend, and we are recording these calls to get them as widely distributed as possible. So I think this is a good forum for having these conversations. If there are other people you think should be involved or should see this, feel free to direct them here.

  • Stokes: So we have about 30 min left. It sounds like there has been much thought on design, a rollout on Goerli, and a gradual scale-up, and we’ll see how it goes. It’s been a great conversation, and there are other things to consider; that’s why we are here, to find those points and make that happen.

Relay Sustainability

  • Stokes: One other point to bring up is essentially around relay sustainability. Relays run this as a pure cost; there is no funding or sustainability model, and many relay teams have reached out to me wanting to discuss this and brainstorm ways to make it sustainable in the long run. While we are working towards ePBS, it could very much be the case that we have mev-boost in place for some time, so it is definitely important. Are any relays on the call who want to chime in on this point?

  • Justin Drake: My POV is that I try to make an assessment of the cost. I agree that it is a public good that needs to be funded by public goods funding, and then the question is how much it costs. My rough estimate is $100k per year per relay. Say we had 10 relay operators, which we don’t, and let’s be extra conservative and say it takes 5 years to get to ePBS: $100,000/year * 10 relays * 5 years ~ $5 million, which in the grand scheme of things is not a large amount of public goods funding, especially as a cost that is spread out over 5 years.
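Justin’s back-of-the-envelope estimate, spelled out:

```python
# The rough figures from the call, multiplied out.
cost_per_relay_per_year = 100_000   # USD, his per-relay estimate
num_relays = 10                     # conservative upper bound from the call
years_to_epbs = 5                   # extra-conservative timeline

total_cost = cost_per_relay_per_year * num_relays * years_to_epbs
assert total_cost == 5_000_000      # ~$5M of public goods funding overall
```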

  • I think what we have seen is that we already have funders: Flashbots funding its own relay, effectively providing PGF for it, and the same for Ultra Sound, Aestus, and Agnostic. These could be different entities: for Agnostic, maybe it’s funded by Gnosis, which has a large treasury. In some cases it’s individuals funding this, and in some cases it’s institutions. I don’t think we have a public goods funding problem, partly because the amounts are relatively small for our space, and because empirically we have seen for several months now that these relays are being funded.

  • Stokes: I think that makes a lot of sense. Justin, do you have any thoughts on relays charging for their services? At that point we move out of the domain of public goods, and this becomes more of a typical business: charge for API access or something like that.

  • Justin Drake: Yeah, I think it’s very difficult, because there is this race to zero in terms of fees, and if you start charging then you have fewer builders and you start losing. And I think some of the relay operators don’t want to charge: we don’t want to charge, Flashbots doesn’t want to charge, so it becomes very difficult to compete. I also think going through the work of being a financial institution, to the previous point about AML/KYC, because you start receiving funds from these unknown builders, is a much more relevant point than the collateral, which is just meant to sit there and be returned to the sender after a period of time, not go to third parties. Also, just the engineering work of setting everything up sounds not worth the effort. And from our perspective, the ultrasound project is about building public goods; it’s not really in our ethos to be charging for this.

  • Stokes: Yeah, I think that all makes sense. To the extent that different organizations want to provide this as a public good, there will be relay options that do not charge, and that enforces a ceiling of almost zero on what a relay could charge. That being said, I think we want to keep an eye on how things evolve, because there are some relays today that operate for profit as another arm of their business, and we could get to a place where they say this is not profitable. That means fewer relays, and that might not be a world we want. I guess it’s just this tradeoff we will keep exploring.

  • Phil: I both agree and disagree with Justin. It’s not necessarily a burning issue, and I agree with the market-dynamics analysis that charging fees is difficult. The reason we’ve avoided going in this direction ourselves is that my meta-game-theory is that it’s easier to upstream a public good than a business, just from an evolutionary point of view, if we are stuck with this market structure for a long time. That being said, I don’t think it’s totally outside the scope of relays, and if people want to experiment with that, it’s whatever. That’s my Flashbots-native analysis so far. I do think it’s still worth asking whether we can do better, even though there are parties that are interested in and aligned with running this infrastructure. What if a new entrant comes in and is super aligned but doesn’t have the means? It seems to me there is no obvious path right now, and maybe that’s low-hanging fruit we can fix if we do want to view and promote this as a public good.

  • I’m even thinking of you, Justin, or Agnostic: it took some time for Agnostic to run a relay, and we were trying to prod them into that for a while. It’s understandable; there’s a lot of devops work, it’s distracting, it’s infrastructure-heavy work that not a lot of teams are set up to do even if they want to or are aligned with it, plus it’s expensive and politically complex, in the limit having to come and present positions to other relays and things like that. I think there are barriers, and reducing them would be good, so that teams like Ultra Sound don’t rely on a big external treasury, or Agnostic doesn’t have to make as much of a business decision about how distracting this is to their core focus. Kind of lessening that burden to get started is maybe worth doing.

  • Max Birge: Justin, I think $100k per year is probably right if you take people out of that. The Aestus perspective is that Austin and I are directly funding this, with no outside funding at the moment, and our time is quite dedicated as well. We have looked for public goods funding for this from a variety of sources and have been turned down at every ask, because I think there is a real fear that funding might impact the market. The role of the relay is ideally to be as neutral as possible. If we start accepting money from builders, who are sometimes keen to pay via side channels, then we compromise on neutrality. I think if you look through the list of relays at the moment, most of them are funded by a builder, a side-channel deal, or a company that’s making a profit through its own ERC-20 token. If we want to get past that, we have to have this conversation at some point. At the same time, I understand the engineering is tricky, and you could just incentivize some kind of centralization, which might be difficult. So I don’t know what the answer is, but it’s a struggle for Aestus. I know when Austin was in Denver, both bloXroute and Blocknative were talking about this being a burning issue as well. I think we should explore it further.

  • Phil: I think another way around this is to have alternative paths. So you could provide funding, and maybe it comes with strings or requires a certain organizational structure to accept that kind of funding, so new entrants have a choice: do we want to immediately get into this market, take this funding, and run a public good, or do we not want to run a public good and have to find another path to make this work? But I do agree that the public good option seems very hard to make work today.

  • Justin Drake: One side note here on public goods funding. I have been invited to be a so-called badge holder for Optimism, and they basically have this retroPGF round number 2, which is roughly $25M. Maybe some of those $25M could be routed to relays; that’s like five lifetimes’ worth of funding for all the relays across five years. I acknowledge that getting funding so far has been difficult, but this could change very quickly with things like the Optimism retroPGF.

  • Phil: It is also an option to be more of an activist about this on the mev-boost side. Not to derail the conversation completely, but it relates to the earlier precursor conversations of this call: should there be a foundation specifically for MEV public goods, how do we govern mev-boost, etc.? I think these are intertwined questions, because there are some universes where the mev-boost community leaders play more of an activist role in building these funding rails and distributing capital, and there are worlds where governance and that side of things are more separate.

  • Max Birge: Relays are in a privileged position, right, and it would be ideal if all the relays were transparent about how they are funded, because if they are making side channels with builders, the rest of the Ethereum community and validators may suspect that comes with some compromises, and it would be ideal to get that out there. Having a public channel for funding these things seems far preferable to putting relays in a position where they need to go and ask.

  • Austonst: Yeah, building on that just a bit more, I know from talking with Blocknative that this is one of their concerns. The lack of a universal charge on builders, or equally of a public funding model, means that relays are pushed to create these side-channel deals, and Blocknative shared a couple of those with me: they made deals with certain builders to get prioritized access. This is an incentive that arises because of the lack of a more universal funding method. I guess I’m just reiterating that it’s worth considering the impact that may have on neutrality.

  • Stokes: These are all really interesting points. One vision Phil was kind of painting is some entity going around to Optimism, to Gitcoin for quadratic funding rounds, and to these different PGF services, saying “hey, relays need x per year to run, and here is why this is important”, and doing that fundraising. Yeah, it’s a really interesting point. It’s probably preferable to keep it more decentralized, so if relays could figure out how to do this independently on their own, that might be preferable. That being said, if the mev-boost community would greatly benefit from more precise coordination there, I think that’s very important to explore.

  • Phil: I agree. I will asterisk that with: I don’t think just creating a sustainable funding model for relays is enough to side-step the side-channel incentives Blocknative mentioned. If the side channels offer you way more than the fees, there is no incentive not to take the side channels, especially if the current status quo, where a lot of these are housed in for-profit or ad hoc entities, continues; there is no obligation not to make those deals. I think it would help a little, but I don’t think it completely solves the problem.

  • Stokes: Yeah, it certainly doesn’t solve it, for the reasons you gave, but maybe it can make it easier for new relays to enter, or for existing relays to keep covering their base costs.

Call End, Open Discussion

  • Stokes: Anything else? This has been a great call; there has been a lot of good conversation. Unless there is more on the relay funding point, we can open it up to discussion, and if there’s nothing there, we can go ahead and wrap up early.

  • Justin Drake: I do have one question around the validation node. Right now the ultrasound relay only runs a fork of geth as its validation node, i.e. the simulation node. I think, though I’m not sure, that the Flashbots codebase has only been built with geth in mind. So at the relay level, if there is some consensus bug in geth, we don’t benefit from the client diversity we have established at the validator level. Is it correct that the Flashbots codebase only supports geth, and if so, are there steps to have more diversity?

  • Chris Hager: The Flashbots infra is built on geth, but the validation logic itself is rather straightforward: it’s a single additional JSON-RPC method that would conceivably be easy to port to other clients like Nethermind. I think any of the teams could do that with fairly low effort, and it would be great to have more diversity here.
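As a rough sketch of what such a port involves, the relay’s validation call is a single JSON-RPC request to the simulation node. The method name below is the one exposed by the Flashbots geth fork around this time; treat the exact name, params shape, and URL as assumptions when porting to another client.

```python
import json
from urllib import request

# Hedged sketch of the single validation call Chris mentions. The method
# name and payload shape are assumptions based on the Flashbots geth fork.
VALIDATION_METHOD = "flashbots_validateBuilderSubmissionV1"

def build_validation_request(block_submission: dict) -> bytes:
    """Encode a JSON-RPC request asking the EL node to simulate a block."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": VALIDATION_METHOD,
        "params": [block_submission],
    }
    return json.dumps(payload).encode()

def validate_submission(node_url: str, block_submission: dict) -> dict:
    """POST the request to the simulation node and return its JSON reply."""
    req = request.Request(
        node_url,
        data=build_validation_request(block_submission),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Porting to another client would then mean implementing this one method server-side, with the same simulate-and-verify-payment semantics.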

  • Phil: Just to give some additional color on why that is the case and why it’s built that way: as we looked at the merge landscape, it looked like geth would by far be the dominant execution layer client, so the reasoning for the ad hoc logic was that a geth-specific bug would be less severe than the resulting network break would be, and you would probably want to conform to the dominant client in that chain-split scenario anyway. I do think, given that a lot of people have written execution clients, it’s probably time to multiplex that. But that’s historically why it was built that way.

  • Justin Drake: One reason relays may be especially interested in having other clients is performance. Geth is one of the slower execution clients for simulation, and whenever you are not in optimistic mode, because the value is greater than the collateral or the builder doesn’t have collateral, having a very fast EL client is very important. I know Reth is quite a bit faster than geth, as are Nethermind and Erigon.

  • Phil: I do believe Reth is being purposefully designed for that, speed. I agree it would be useful to swap in, with the asterisk that consensus failures are possible if some of the optimizations shouldn’t really be there.

  • Justin Drake: One of the things we have been thinking about is a multi-execution-client system, whereby when a bid comes in you immediately simulate it with your ultrafast simulation engine, which could be Reth, and then a few hundred milliseconds later you simulate it on geth, outside of the critical path. If for whatever reason there is a mismatch between Reth and geth, then for the next slots going forward you disable Reth and just use geth. So in the worst case there will be one slot with a bad block. It’s kind of like optimistic relaying, but for the execution clients.
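Justin’s scheme can be sketched roughly as follows. This is a simplified, synchronous version; in the scheme as described, the geth cross-check would actually run a few hundred milliseconds later, off the critical path.

```python
from typing import Callable

# Simplified sketch of the multi-client idea: simulate each bid on a fast
# client (e.g. Reth) in the critical path, cross-check on geth, and
# permanently prefer geth after any mismatch.
class DualSimulator:
    def __init__(self,
                 fast: Callable[[bytes], bool],
                 slow: Callable[[bytes], bool]):
        self.fast = fast            # hypothetical Reth-backed simulator
        self.slow = slow            # hypothetical geth-backed simulator
        self.fast_enabled = True

    def simulate(self, block: bytes) -> bool:
        if not self.fast_enabled:
            return self.slow(block)
        fast_ok = self.fast(block)
        # Cross-check: on any disagreement, distrust the fast client for
        # all future slots (worst case: one slot with a bad block).
        if self.slow(block) != fast_ok:
            self.fast_enabled = False
        return fast_ok

# A mismatch on the first block disables the fast path for later slots.
sim = DualSimulator(fast=lambda b: True, slow=lambda b: False)
assert sim.simulate(b"blk1") is True    # fast answer used, mismatch noted
assert sim.fast_enabled is False
assert sim.simulate(b"blk2") is False   # geth-only from now on
```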

  • Phil: Yeah, that’s a good idea. To make this concrete: is the action item here to port this RPC to various clients? Does anyone want to do that work / who is going to do that work?

  • Stokes: I can definitely take charge here. Yeah, there are a couple of things: Chris has been pushing for block validation, and there is this SSZ endpoint for building the block. I agree with everything we said; it is important to have client diversity at other parts of the stack. I can definitely see what’s there. I have also been talking with @gakonst about this use case with Reth, and I think it would be really cool to see it through. This sort of multi-client, multi-proof thing Justin just described all sounds like good ideas. Anything else from anyone?

  • Phil: Maybe I want to pump my post about geographic decentralization. I want to spread the meme; there is probably a broad conversation to have there, but I’m curious to hear people’s feedback on the post if you have some time to read it.

  • Stokes: There is a forum post in the chat, please take a look. I will go ahead and call it; thank you everyone for participating. I think this was a really interesting call; we covered all sorts of things. As we have seen, there is a lot here to work through and discuss, and thanks to everyone for being orderly. Here’s hoping Goerli goes well; I feel hopeful for it. I will keep having these calls as we need them. I will see some of you around.
