Notes from the PBS Researcher Roundtable Discussion @ Flashbots Workshop @ Devcon VI
Format: three 1-hour sessions, each with a discussion leader
Goal: To bring together the PBS researcher community for an engaged discussion about three areas of exploration.
Participants: 35 invited participants from both industry and academia
*A note on these notes: I took these notes while also managing event organizer responsibilities, so they are unfortunately partial. If you participated in the discussion and notice I missed topics, please add a reply with additional notes and I will update this document :)!
Session 1: PBS Seen from the Protocol
Leader: Barnabé Monnot
Slides (Google Drive)
Questions during the talk:
(Slide) The future of PBS
- The block is validated in the delivery step (slot 2)
- If a builder releases an invalid block, you follow consensus rules
(Slide) Proposer-builder separation
- If you have a slot auction, people can’t predict how much MEV will be in the next slot
- A winner’s curse will ensue: without the ability to predict, the winning bid will have overpaid on average
- This could be centralizing
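The winner’s curse point can be illustrated with a toy Monte Carlo simulation (my own illustrative sketch, not from the talk): each builder sees only a noisy, unbiased estimate of the slot’s future MEV and naively bids that estimate; the highest estimate wins, so the winner systematically overpays.

```python
import random

# Toy winner's-curse model for a slot auction (illustrative assumptions):
# the slot's true MEV is unknown at bid time; each builder only sees a
# noisy unbiased estimate of it and naively bids that estimate.
random.seed(0)

N_BIDDERS = 10
NOISE = 0.5       # std dev of each builder's estimation error
TRIALS = 20_000

total_profit = 0.0
for _ in range(TRIALS):
    true_mev = random.uniform(0.5, 1.5)      # unknown future MEV in the slot
    bids = [true_mev + random.gauss(0, NOISE) for _ in range(N_BIDDERS)]
    winning_bid = max(bids)                  # highest estimate wins...
    total_profit += true_mev - winning_bid   # ...and on average overpays

avg = total_profit / TRIALS
print(f"average winner profit: {avg:.3f}")   # negative: the winner's curse
```

The average is negative even though each individual estimate is unbiased, which is the centralization concern: only bidders who can predict MEV better (or absorb losses) survive repeated slot auctions.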
Discussion Notes:
- Is enshrining this auction in the protocol the best option?
- Is true PBS impossible?
- The signature is a central part of making the block; proposers play a central role, so what happens if there is collusion?
- Vertical integration does not always increase efficiency
- If we recognize and respect the proposer’s role (without them fully building blocks) that is a better long term plan
- The block requires a signature, a partial builder is a better alternative than full enshrined PBS
- See PEPC
- Caveat: not yet fully implementable, and needs further consideration for mechanism design
- The bid selected by MEV smoothing is the highest
- The smoothing is an API that forces you into the smoothing union
- If you distribute this, you concoct an attack strategy where you never actually have to spend money
- What research items would help?
- Understanding if in protocol PBS elicits the true value of the block
- There is a difference between a trustless builder and a distributed one
- Is trustlessness even possible? Trust minimization is more realistic
- A trust-minimized merging mechanism is needed here
- Is it possible to do this trustlessly outside of the protocol and receive the best bid or will the API need to be different to realize this?
- If the proposer reorgs the bundle, they can get more money
- The timestamp that was present in PoW is no longer reliable in PoS (there was disagreement about this statement from the crowd)
- Where does the proposer receive their compensation?
- With in-protocol PBS, the builder would have to put the capital up front; the builder’s revenue is the MEV, and the cost is the bid
- If we have distributed builders, can we still have a dark pool and is there leakage?
- It depends on the design: if everything is encrypted under an SGX key, there is less leakage
- With encryption, you can still run computations to calculate MEV and set your bid
- What is the liveness risk?
- With either MEV boost or in protocol PBS, you rely on the parties with txs being motivated to turn them over and get them included
- With in protocol PBS, anyone can spin up a builder, so there may be less risk
- In the ideal world, users get the value from the positive externalities of their tx
- In permissionless systems, users deploy behaviors, but then they cannot control what happens next
- Analogy: a loaded gun is put on the table, and the ecosystem does what comes next; the user has no control over that next step
- Is the goal to have non proprietary transaction flow?
- Encrypted transactions would reduce the incentive to send transactions privately
- Is this the protocol’s goal? If the responsibility to avoid centralization lies with the protocol, it must be considered up front, before implementing protocol changes
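One way to picture the MEV smoothing point above (a hypothetical toy model with made-up function names, not the actual proposal spec): attesters only accept a block built from the highest observed bid, and the winning payment is split across the committee rather than kept by the proposer alone.

```python
# Toy sketch of MEV smoothing (hypothetical model, not the real spec):
# attesters only accept a block built from the highest bid they observed,
# and the winning bid is split evenly across the committee instead of
# going entirely to the proposer.

def select_and_smooth(bids, committee_size):
    """Pick the highest bid and split it across the committee."""
    if not bids:
        raise ValueError("no bids")
    winning = max(bids)
    share = winning / committee_size   # each member's smoothed reward
    return winning, share

def attesters_accept(proposed_bid, observed_bids):
    """Attesters reject any block not built from the max bid they saw."""
    return proposed_bid >= max(observed_bids)

bids = [1.2, 3.4, 2.8]
winning, share = select_and_smooth(bids, committee_size=4)
print(winning, share)
print(attesters_accept(2.8, bids))   # False: proposer tried a lower bid
```

This is the sense in which smoothing "forces you into the union": a proposer who takes a side payment for a lower bid produces a block the committee will not attest to.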
Session 2: Builder Innovation and MEV Mechanism Expressivity
Leader: Xinyuan Sun
Slides: (Google Drive)
Questions during the talk:
(Slide) Mechanism Design Difference
- Is there a way of changing the MEV game so that it is not discrete?
- By discrete, we mean that the outcome depends on private information
- Look at the statistical properties of different types of transactions: some types will result in more or less MEV
- What utility function do you want your coordinator to use?
- Social welfare is a multidimensional measurement that is not collapsible, but we often oversimplify the utility to something like capital returned to the user
- The coordinator is information rich and can find optimal routing
- For instance, money lost to slippage depends on the block’s other contents, so if you are trying to optimize value for the user, you need to look at the statistical properties of the transaction type
(Slide) You are moving credible commitment to users
- The reason we are using crypto is to make credible commitments beyond what is available outside of crypto, so max MEV in crypto should equal MEV in the outside world
- So if we solve the MEV problem in crypto, do we solve real world MEV?
- MEV in crypto is often just capital
- What happens if we add SBTs? Can they be collateral, and is this collateral an off-chain thing that truly represents the world at large?
Discussion Notes:
- More expressive bid languages change the dynamic
- Compared to the status quo of the whole block being auctioned off at once, is there a better way?
- Concretely, CFM routing or Cowswap moving the bidding location changes this
- CFM- it doesn’t have to send you back a transaction or message. You can encode in the original transaction what will happen
- Should slippage tolerance change?
- The utility is linear: if you increase slippage tolerance, you get more kickback
- But you also get more routing, which increases the total sum of money, so when it is transferred back, everyone gets more money
- Concretely if everyone uses the same pool, you should use another one
- If someone creates an arbitrage opportunity with a swap, and both traders get the arbitrage quote and submit transactions at the same time, the arbitrageur gets the extra value, but some value transfers back to the other trader
- MEV is discontinuous
- Sheaf theory: you have a space, and at each point of it you attach a set of the things that can happen. As you move through the space, the model is built from what could happen as you pass.
- The space is constrained by the parameters you set at the beginning. The web created as you move through the space would help you understand the whole situation of MEV.
- If you have fees calculated with a neural net and you commit every N blocks with a new neural net, would this solve the discrete/non discrete problem?
- But it is not possible to optimize all routes using path theory, because it is computationally infeasible
- The CoW Swap solution addresses this problem
- If you find as a builder a way to improve the path, you can offer this path
- Is there a reason why this should be done at the application or protocol level?
- It can be done at any level
- But value will accrue at the level it is executed
- Having a richer bidding language
- If you have slippage tolerance, is that a way of expressing your bid to the network?
- Yes, a constant function market maker is an expressive way of bidding for a trader
- Social welfare
- Social refers to the sum of the welfare of all agents, but agents are not identified: how do you deduplicate agents?
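The slippage/arbitrage thread above can be made concrete with a toy constant-product AMM example (hypothetical numbers, written for illustration only): one trader’s swap moves the pool price away from the external market price, creating exactly the arbitrage opportunity the next transaction captures.

```python
# Toy constant-product AMM (x * y = k), illustrating how a swap moves the
# pool price away from an external reference price, creating an arbitrage
# opportunity for whoever lands next (hypothetical numbers).

def swap_x_for_y(x_res, y_res, dx):
    """Sell dx of X into the pool; return (new_x, new_y, dy received)."""
    k = x_res * y_res
    new_x = x_res + dx
    new_y = k / new_x
    return new_x, new_y, y_res - new_y

x, y = 1000.0, 1000.0          # pool starts at price 1.0 (Y per X)
ext_price = 1.0                # external market price of X in Y

# Trader's swap pushes the pool price below the external price.
x, y, dy = swap_x_for_y(x, y, 100.0)
pool_price = y / x             # marginal price after the trade
print(f"pool price after swap: {pool_price:.4f}")   # below 1.0: arb exists

# The arbitrageur sells Y back for X until the pool price matches the
# external price; for x*y=k, that means restoring x = sqrt(k / ext_price).
k = x * y
target_x = (k / ext_price) ** 0.5
x_bought = x - target_x        # X the arbitrageur extracts from the pool
print(f"arb closes the gap, buying {x_bought:.2f} X from the pool")
```

Note how the arbitrage value is a function of the first trader’s slippage: a larger swap moves the price further and leaves more on the table, which is the "kickback" tradeoff discussed above.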
Session 3: Private Block Building
Leader: Alex Obadia
Slides: none, but see below until “Discussion Notes” for Alex’s notes on context:
PBS markets are centralized today; the system needs better guarantees
topic 1: private searching
- what are important requirements?
- latency
- tradeoff of economic efficiency & privacy
- programmable privacy - what kind of expressivity do you want there?
- would want to properly define what expressivity means, what’s the definition of the space
- from the searcher perspective, what does that look like, and how is this space matched?
- technical solutions
- MPC might not be great for centralization reasons; the committee is usually small for performance reasons and might not have the desired properties that we want
- sooo FHE?
topic 2: distributed merging
topic 3: say we achieve privacy; does the economic efficiency cost incurred there make the race between builders one of algorithmic efficiency & latency?
Discussion Notes:
Private searching
- Builders can see bundles and flow
- Searchers can also see flow to simulate against it
- Why this matters: MEV centralization comes from information asymmetry
- The solution? Some suggest a private mempool
- What would it take to do building in a private way?
- What is the goal of considering private building?
- Privacy is a requirement for trustless collaboration between searchers who have different trading strategy specializations, so they can collaborate without sharing more information
- There is an informational advantage to building even privately
- We are only robust in a model where we can trust more than K of N
- But just because someone has information access, does not mean that they can specially exploit the MEV
- For instance if the bid is required before the information (like future block markets)
- You can buy access to future block space, which includes MEV but only back running
- But collusion of K of N without the ability to slash for data availability is a risk
- If K=N-1 then you have the last holdout problem
- What are we trying to do here?
- No one gets an informational advantage?
- In the current system, there is a huge informational advantage and asymmetry
- Does this really just mean we exploit all possible advantages?
- Should we encode arbitrage in consensus and redistribute the rewards? This is pure Monarch extractable value
- But surely there is a spectrum of what to express
- Full privacy with SGX but also full coordination
- What do you observe from the SGX?
- Whatever you program it to do
- If you use a threshold assumption (SGX creates bids, the committee signs off on them, and the full block can only be produced if signed), this would be full bundle merging in SGX
- How do we ensure that the algorithms in the SGX only do one or the other type of MEV?
- Can we assume that the algorithm is welfare maximizing? How could we possibly ensure that it is a trustworthy and goal-oriented algorithm? Is the only reason to put it in SGX that it is a private algorithm?
- There would need to be a return to user that maximizes the user utility regardless of what happens in the SGX
- The auction mechanism is an open problem
- Cowswap and Penumbra tech is relevant here
- There is possibly an impossibility result that a general algorithm can verifiably solve MEV: there is always a force pushing you to not do the right thing for the system for your own benefit, and that is the reason information has value
- You can construct a mechanism that allows users to express their preferences for privacy and reward for lack of privacy
- Why not just have a fully encrypted pool?
- If you are a sophisticated searcher, you will use statistical analysis of past trends to predict transactions
- Note: see Dankrad Feist’s research on encrypted transactions
- Note: if the algorithm only has access to past and not future information, you will limit MEV other than backrunning
- It is more interesting to first consider a perfect SGX when designing solutions, and then move on to the systems issues with SGX (like side channel attacks)
- Prove SGX use with remote attestation
- But SGX attacks are a very real threat, as is the injection of mafia-MEV-extracting code into the enclave
- Also, we cannot necessarily stop searchers from printing txs to the command line unless they must run in a VM inside the SGX
- Wherever the SGX is running, attackers will use side channel attacks
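The K-of-N trust discussion above can be made concrete with a toy Shamir secret sharing sketch (illustrative only; a real committee scheme would use a distributed key generation and authenticated shares): any K shares reconstruct the key, so privacy holds only while fewer than K members collude, and slashing for collusion is hard precisely because collusion leaves no on-chain trace.

```python
# Toy Shamir secret sharing over a prime field (illustrative only):
# a secret is split into N shares such that any K reconstruct it, so
# privacy holds only while fewer than K committee members collude.
import random

random.seed(1)
P = 2**61 - 1          # a Mersenne prime, used as the field modulus

def split(secret, k, n):
    """Split secret into n shares with threshold k (degree k-1 polynomial)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse, since P is prime
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

secret = 123456789
shares = split(secret, k=3, n=5)
print(reconstruct(shares[:3]) == secret)   # True: any 3 shares suffice
print(reconstruct(shares[:2]) == secret)   # fewer than k shares reveal nothing
```

The last-holdout problem mentioned above falls out of the same structure: with K = N-1, a single abstaining member can block reconstruction entirely.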
What about non-SGX solutions?
- MPC: Bound to communication network, so it suffers from latency
- Simplified things could run here: simplified bundles could be simply ordered if only touched once
- Collusion is a problem though: need to assume ⅔ honesty
- FHE: problem with who has the key, so need threshold FHE which is security equivalent to MPC
- DARPA, with DPRIVE, thinks they can get a 100x improvement in the speed of FHE
- But cost is an important problem
- And even with this improvement, the speed is too slow to run the EVM in FHE
- Running so much in SGX, including geth, seems like a bad idea to many in the room