Multi-kettle communication

I am putting some thoughts down after a conversation with @fiiiu and @socrates1024, in which @fiiiu refused to accept my opinions without elaboration. My basic point is that we should implement two types of kettle communication - no-guarantees/best-effort and consensus - before we consider anything else. I haven’t dug too deeply into this.

By "no guarantees" I mean that we build something that just enables basic message passing. Perhaps this can be done in multiple ways if needed. The most basic use case I have in mind is a kettle running a solver sending an Eth transaction to an Eth block builder. RPC calls as they work today are probably fine. The things we may want over and above RPC transaction submission are a protocol for discovering kettles of a certain type and a protocol for proving a kettle's identity and what application it is running.
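To make the best-effort case concrete, here is a minimal sketch of what "RPC submission plus an identity/application proof" could look like. Everything here is hypothetical - the `Attestation` fields, the extra `attestation` key, and the skipped quote verification are illustrative assumptions, not an existing SUAVE API:

```python
# Hypothetical sketch of best-effort kettle messaging: a plain JSON-RPC
# transaction submission with an attestation attached so the receiver can
# check the sender's identity and the application (measurement) it runs.
from dataclasses import dataclass

@dataclass
class Attestation:
    kettle_pubkey: str  # identity of the sending kettle
    mrenclave: str      # measurement of the application being run
    quote: bytes        # TEE quote binding the two (verification omitted here)

def build_submission(raw_tx: str, att: Attestation) -> dict:
    """Best-effort: no delivery, ordering, or consistency guarantees."""
    return {
        "jsonrpc": "2.0",
        "method": "eth_sendRawTransaction",
        "params": [raw_tx],
        "id": 1,
        # attestation piggybacked alongside the standard RPC fields
        "attestation": {"pubkey": att.kettle_pubkey, "mrenclave": att.mrenclave},
    }
```

A receiving builder would verify the quote against a list of accepted measurements before trusting the sender; discovery of builder kettles is the separate protocol mentioned above and is not sketched here.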

Consensus I think is relatively well defined.

AIUI, in communication protocols there is usually a performance/security/guarantees tradeoff, where having more of one usually means you get less of the others. Consistency in a database is (I think) the strongest guarantee a SUAPP use case could need. The “no guarantees” case is the best-performing we could get. Of course, per CAP, you can’t have both consistency and availability under partition, but since Ethereum every new protocol has favoured consistency, and I would expect this trend to continue for us.


My argument consists of two points:

  • we cover all use cases relatively well by implementing consensus (n=2f+1 BFT) and a best-effort protocol. Every use case likely has some protocol that makes the best tradeoffs for it, and ideally that protocol would be available for each one.
    • Since consistency is the strongest thing we can provide, it always satisfies the “communication guarantee” part.
    • Consensus to get consistency might be costly on latency. We can compensate for this by reducing the security parameter, n (and maybe moving nodes closer together). One motivation for doing this is that requiring consensus replicas to run in TEEs makes byzantine safety violations unlikely.
    • For the many use cases where we just need simple message passing, we don’t force the overhead of consensus so the worst case is avoided.
  • in the space of things we could implement, these two types of protocols seem to have the highest bang for their buck. We have mature off-the-shelf consensus protocols that we can probably run mostly out of the box (Tendermint being the top example). We know that consistency is a useful property and covers many applications. We also don’t have a lot of evidence that suggests there are many use cases that could benefit significantly from us putting effort into developing something weaker than a consensus protocol.
    • We know from our ongoing research grant on censorship resistant DA that there are options we could pursue outside of consensus, but it seems these may require a lot of work.
    • The one example we have of a decentralised orderbook (for which we speculate consensus is too expensive) is something dYdX has spent over a year trying to figure out, and it feels far from low-hanging fruit.
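To make the security-parameter point above concrete: classical BFT needs n = 3f+1 replicas to tolerate f byzantine faults, while crash-fault tolerance needs only n = 2f+1, and TEEs that prevent equivocation let BFT protocols reach the 2f+1 bound as well (MinBFT is the canonical example of this style). A rough sketch of that arithmetic:

```python
def min_replicas(f: int, model: str) -> int:
    """Minimum replica count n to tolerate f faulty nodes under each fault model."""
    if model == "crash":      # crash-fault tolerant, e.g. Raft
        return 2 * f + 1
    if model == "byzantine":  # classical BFT, e.g. PBFT / Tendermint
        return 3 * f + 1
    if model == "tee_bft":    # BFT with TEE-enforced non-equivocation, MinBFT-style
        return 2 * f + 1
    raise ValueError(f"unknown fault model: {model}")

# Tolerating f=2 faults:
print(min_replicas(2, "byzantine"))  # 7 replicas without TEEs
print(min_replicas(2, "tee_bft"))    # 5 replicas with TEE non-equivocation
```

This is the sense in which "reducing n" can buy back latency: fewer replicas on the critical path for the same f, at the cost of leaning on the TEE assumption.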

Of course there is a complex space of possible protocols and the jury is still out on what the CR-DA grant with Common Prefix will return, but at least this is my position for now. It might make sense to give grants to do work on other communication protocols without distracting our internal resources.


thanks for humoring me :wink:

what I understood you claimed in the call is that for “intermediate” use cases we could always use consensus, which provides stronger guarantees. my claim is that the same argument is also valid for the use cases that fall into the “best effort” category, so essentially what I was asking is: do we have compelling use cases for not just doing consensus?

a bit more context for this question is that we are now thinking (or at least I am!) of inter-kettle comms as separate from the “SUAVE Chain”, i.e. where the code of the smart contracts lives, which does require consensus. so this is two separate consensus protocols, which raises the question of whether we can do just one. if we just have consensus for inter-kettle comms, this seems more feasible than if we have both consensus and best-effort.

I’d love to learn if you/others are also thinking of inter-kettle comms as separate from the chain because maybe I’m missing something here and you’re indeed thinking about the same consensus (but in that case I don’t get how we can have a part that’s separate and that’s “best-effort”).

finally, another point is that I wonder how much potential “TEE-enhanced richness” we’re leaving on the table by just considering these two models. as you point out, running (whichever model we end up choosing) inside the TEE gets us some properties for free, like no byzantine safety violations. but as @socrates1024 pointed out, TEEs still leave open byzantine selective messaging (which is also not equivalent to a crash fault). altogether this tells me that the threat model being different from the usual one might make us want to think more deeply about what properties we want and how the tradeoff space is deformed by the use of TEEs. (I do get your point on “we have consensus protocols that work” though :slight_smile: ).

Yes. Consider

  • a kettle-based solver needing to send transactions to a block builder.
  • an OFA auctioneer kettle querying solver kettles

I wasn’t trying to be opinionated about this, but here is how I see it. Consensus provides us consistency over a database. We can segment databases into three:

  • local - only at one node, doesn’t need consensus
  • global - all nodes track this, this is the SUAVE chain idea
  • specific - these are databases for a use case or set of use cases that only some subset of kettles track. Bulletin chains fall into this category.

For all of these we can always leverage an existing chain to post our data, but we also have an option of running our own instance of whatever protocol.
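As a toy illustration of this segmentation (all names below are hypothetical, chosen to match the examples in this thread):

```python
from enum import Enum

class DbScope(Enum):
    LOCAL = "local"        # one node only; no consensus needed
    GLOBAL = "global"      # tracked by all kettles; the SUAVE chain idea
    SPECIFIC = "specific"  # tracked by a subset of kettles, e.g. bulletin chains

# Hypothetical examples mapped to scopes:
examples = {
    "solver_internal_state": DbScope.LOCAL,
    "accepted_mrenclave_list": DbScope.GLOBAL,
    "orderbook_bulletin_chain": DbScope.SPECIFIC,
}
```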

Aside from maintaining a set of accepted TCB measurements (MRENCLAVEs), or perhaps a PKI, it’s not obvious we need a global database. @ferranbt has been working on implementations that don’t require it at all.

I believe we do need specific databases to meet the scaling needs and tradeoffs of different use cases.

AFAIU, TEEs give us something “extra” which we can spend by:

  • having the ~same performance with extra “defense-in-depth” security (e.g. because replicas can’t equivocate)
  • having improved performance and relying on TEEs to allow us to use a weaker adversarial model

We have considered the second point, but its two main critiques are:

  • we actually don’t know of any protocols we could apply here, and we would likely need to put in a lot of work to implement something. We are still waiting to hear what the grant returns on this.
  • this makes a TEE failure more impactful. This may or may not be a big deal depending on the use case.