SUAVE Ensuring Output Validity and Heterogeneous DA

Edit: Note that this discussion precedes the current specs, which should be referred to for concrete details; this post is an artifact of initial brainstorming. The post also makes use of the term “kettle”, which should be thought of as a TEE running the MEVM, and “brewer”, the operator of the kettle infrastructure.

While kettles provide integrity guarantees, brewers (kettle operators) still have degrees of freedom over inputs to the kettle. For instance, a brewer may bid in an OFA and attempt to censor competing bids. One way we have considered combating this is a DA module which allows contracts to encode conditions like “this function will only execute if all relevant inputs in the DA layer have been processed.” Enforcing this would require some way of ensuring that kettles are executing against an up-to-date version of the MEVM and DA module.
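A minimal sketch of the kind of condition such a DA module could let contracts enforce. All names here (`DALayer`, `OFAContract`, etc.) are hypothetical illustrations, not real SUAVE APIs; the point is only the shape of the guard: execution is refused until every input published to the DA layer has been ingested.

```python
# Hypothetical sketch: a contract-side guard that only allows execution
# once every input published to the DA layer has been processed, so a
# brewer cannot win an OFA by withholding competing bids.

class DALayer:
    def __init__(self):
        self.inputs = []          # e.g. all bids posted to an OFA

    def publish(self, item):
        self.inputs.append(item)


class OFAContract:
    def __init__(self, da):
        self.da = da
        self.processed = set()

    def process(self, item):
        self.processed.add(item)

    def select_winner(self):
        # The guard: refuse to execute while any DA input is unprocessed.
        missing = [i for i in self.da.inputs if i not in self.processed]
        if missing:
            raise RuntimeError(f"unprocessed DA inputs: {missing}")
        return max(self.processed)  # e.g. highest bid wins


da = DALayer()
ofa = OFAContract(da)
for bid in (5, 9, 7):
    da.publish(bid)
    ofa.process(bid)

da.publish(12)                    # a bid this kettle has not yet ingested
try:
    ofa.select_winner()
except RuntimeError as err:
    print(err)                    # execution blocked until bid 12 is processed
```

The sketch makes the open problem concrete: the guard is only meaningful if the kettle's view of `da.inputs` is itself complete and current, which is exactly the up-to-date-state question the rest of this post is about.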

Questions:

  • How do we ensure that up-to-date DA/MEVM contracts were used to form an export block? Verifying a TEE certificate?

  • How do we support a contract on SUAVE allowing for a certain transaction to be included in an ETH block at SUAVE block height n, but then the transaction is no longer allowed at n+1 (e.g. because of an oracle update)?

  • We may want heterogeneous sources of DA, i.e. multiple DA committees doing DA for different use cases: for example, a UniX committee and a committee for CoWswap-on-Solana. This could be useful for scaling, but also for satisfying the varying preferences and trust assumptions of apps, which may want cheaper DA, lower latency, or to trust a certain set of nodes. How do we support multiple sources of DA? We could possibly do this by encoding DA sources in contracts.

  • If we can do the above, can we allow contracts to live anywhere, not just on SUAVE consensus? This would have SUAVE resemble Anoma a lot more closely.

  • One solution would be to have the proposer pass in the ETH parent block and the latest SUAVE block hash in the getHeader call. However, it isn’t clear whether it’s in the proposer’s incentive to do so (not doing so might enable them to decrypt and exploit some transactions, especially when colluding with a kettle). The nice part of this approach is that the target chain could own more of the stack: builder contracts and DA, while kettles just handle execution.
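The second question above (a transaction allowed at SUAVE height n but not at n+1) amounts to attaching an expiry condition to the inclusion permission. A hedged sketch, assuming the contract can read its own height and oracle state; the class names and the permission-token design are illustrative, not part of any SUAVE spec:

```python
# Illustrative sketch, not SUAVE code: a contract grants inclusion
# permission against a snapshot of its oracle state, so an oracle
# update at height n+1 revokes a transaction that was allowed at n.

class InclusionPermit:
    def __init__(self, tx, issued_at_height, oracle_snapshot):
        self.tx = tx
        self.issued_at_height = issued_at_height
        self.oracle_snapshot = oracle_snapshot


class SuaveContract:
    def __init__(self):
        self.height = 0
        self.oracle_value = 100        # e.g. a price feed

    def advance_block(self, new_oracle_value=None):
        self.height += 1
        if new_oracle_value is not None:
            self.oracle_value = new_oracle_value

    def permit(self, tx):
        return InclusionPermit(tx, self.height, self.oracle_value)

    def may_include(self, permit):
        # Permission lapses as soon as the oracle moves past the snapshot.
        return permit.oracle_snapshot == self.oracle_value


c = SuaveContract()
p = c.permit("tx1")
print(c.may_include(p))                # True at height n
c.advance_block(new_oracle_value=95)   # oracle update at height n+1
print(c.may_include(p))                # False: tx1 is no longer allowed
```

Note this only answers the easy half of the question: deciding inside the contract that the permission has lapsed. Making the target chain actually refuse the transaction after the revocation is the harder part, and depends on the freshness mechanisms discussed in the replies below.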


One of the most important SUAVE open questions IMO! Do you have a link to how this is reflected in Anoma architecture? I’m admittedly behind on my Anoma resources.

I think this would tie back to security guarantees: smart contract code being run on SUAVE is trusted, so any source of code has to be as well.
I think smart contract code can live anywhere it wants, and the question is what code we could trust inside TEEs; relying on SUAVE consensus seems like a good option.

I’m not sure I fully understand. Is this what you were saying “TEEs give integrity guarantees so it doesn’t matter where the code lives as long as we can verify that the TEE is running the code. SUAVE chain is a fine place for hosting and updating code, but really the code could go anywhere”?

How do we ensure that up-to-date DA/MEVM contracts were used to form an export block? Verifying a TEE certificate?

I’ve been thinking about this a bit more and have come up with a few options, which could potentially be used in conjunction:

  • Attestations: attestations provide a checkpoint, but this requires two rounds of communication (update the SUAVE chain, then post an attestation in the next state update), so it’s useful but misses out on really time-sensitive things.

  • User checking: a user can encode in their transaction the SUAVE block height (or similar) which a TEE must have ingested for the transaction to be processed. Users could even stipulate a future block to account for propagation delays. The downside is that users must follow the SUAVE chain closely and can be bamboozled without sufficiently strong light-client guarantees.

  • Proposer checking: the proposer/target chain could ensure that sufficiently up-to-date SUAVE state was used to produce an export block by attaching a requirement for a TEE proof to the target chain’s validity rules (PEPC style). The validators of that chain would have to be running a SUAVE light client to make sure that the proof is sufficiently up to date.
    A weaker version of this is the proposer doing this out of protocol, but I’m not sure what the incentive for that would be.
    You could maybe get out of the requirement to run a light client on the proposer chain by taking export blocks from a relay contract that compares signatures over blocks and disallows anything that is too old compared to the newest blocks received. However, the operator of the relay contract would need not to censor, which brings us back to where we started.
    The other way out of the light-client requirement is having the builder contracts/DA live on the target chain. This is what Anoma is doing, but it has the problem that your MEV block time ends up being the same as the target chain’s (which really removes the value of the DA guarantees). This is certainly appealing for blockchains whose block time we can’t improve on (if such a setting exists).
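The user-checking option above can be sketched as a freshness predicate the TEE evaluates before processing: the user pins a minimum SUAVE height, and the kettle refuses the transaction unless it has ingested at least that block. A hedged sketch with hypothetical names (`UserTx`, `Kettle`, and the `min_suave_height` field are all inventions for illustration):

```python
# Hypothetical sketch of the "user checking" option: a user pins the
# minimum SUAVE block height a kettle must have ingested, and the kettle
# rejects the transaction otherwise. Pinning a future height accounts
# for propagation delays.

class UserTx:
    def __init__(self, payload, min_suave_height):
        self.payload = payload
        self.min_suave_height = min_suave_height


class Kettle:
    def __init__(self):
        self.ingested_height = 0   # latest SUAVE block this TEE has processed

    def ingest_block(self, height):
        self.ingested_height = max(self.ingested_height, height)

    def process(self, tx):
        # The freshness predicate: refuse to run against stale SUAVE state.
        if self.ingested_height < tx.min_suave_height:
            return "rejected: stale SUAVE state"
        return f"processed {tx.payload}"


k = Kettle()
k.ingest_block(41)
tx = UserTx("swap", min_suave_height=42)
print(k.process(tx))               # rejected until block 42 is ingested
k.ingest_block(42)
print(k.process(tx))               # now processed
```

The sketch also exposes the weakness noted above: the check is only as good as the user's view of the SUAVE chain, since a user with a bad light client can be convinced to pin an already-stale height.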

I don’t have a link immediately available. The way you should think about Anoma is that they have builder contracts defined anywhere (but presumably on the target chain), and then their off-chain executors (solvers in Taiga) prove in ZK that they have done the solving correctly. Thus, the solvers aren’t really tied to any specific set of builder contracts. The solving aspect in their current model kind of requires users to send their txs to solvers in plaintext, though.