Demystifying remote attestation by taking it on-chain

Remote attestation is a fundamental concept when designing secure protocols with TEEs. One of the best ways to make clear how it works is with an on-chain demo.

This demo is mainly made possible by recent projects like Puffer Finance's "RAVE", a Solidity smart contract implementation of a verifier for SGX remote attestation. We'll walk through an end-to-end example, starting from the verifier and working backward.

To follow along with the post:

Verification of SGX remote attestations using Solidity.

Let's start with the on-chain instance of the attestation checker, based on RAVE. It's a verification contract written in Solidity, deployed on the Sepolia testnet.

The contract comes with a utility that decodes an encoded attestation. Essentially this is the Solidity equivalent of the "gramine-sgx-quote-view" utility from Gramine. You can query it through the "Read Contract" tab on Etherscan. Here's an example response from decodeAttestation.

[ decodeAttestation(bytes) method Response ]
  id   string :  3342293282022782940082890185166001797
  timestamp   string :  2023-11-14T18:22:53.362222
  version   string :  4
  epidPseudonym   string :  +CUyIi74LPqS6M0NF7YrSxLqPdX3...WiBdaL+KmarY0Je459Px/FqGLWLsAF7egPAJRd1Xn88Znrs=
  advisoryURL   string :
  advisoryIDs   string :  ["INTEL-SA-00161","INTEL-SA-00219","INTEL-SA-00289","INTEL-SA-00334","INTEL-SA-00615"]
  isvEnclaveQuoteStatus   string :  CONFIGURATION_AND_SW_HARDENING_NEEDED
  platformInfoBlob   string :  15020065000008000014140204018...58D2D8F75EAA
  isvEnclaveQuoteBody   string :  0200010...40679b71321ccdd5405e4d54a6820000000000000000000000000000000000000000000000000000000000000000
  userReportData   string :  9113b0be77ed5d0...00000000
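The isvEnclaveQuoteBody above is a base64-encoded SGX quote, and the fields the verifier cares about live at fixed byte offsets inside it. As a rough off-chain sketch of decoding it (offsets per the standard SGX EPID quote layout; the helper name is my own, not part of the contract):

```python
import base64

# Byte offsets into the raw EPID quote (48-byte quote header + report body).
MRENCLAVE_OFFSET = 112    # 32 bytes: measurement of the enclave program
MRSIGNER_OFFSET = 176     # 32 bytes: hash of the signer's key
REPORT_DATA_OFFSET = 368  # 64 bytes: user-chosen report data

def parse_quote_body(quote_b64: str) -> dict:
    """Decode an isvEnclaveQuoteBody and pull out the fields we verify."""
    quote = base64.b64decode(quote_b64)
    return {
        "mrenclave": quote[MRENCLAVE_OFFSET:MRENCLAVE_OFFSET + 32].hex(),
        "mrsigner": quote[MRSIGNER_OFFSET:MRSIGNER_OFFSET + 32].hex(),
        "report_data": quote[REPORT_DATA_OFFSET:REPORT_DATA_OFFSET + 64].hex(),
    }
```

The on-chain decoder does essentially the same slicing, just over ABI-encoded bytes.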

Here's a sample attestation you can copy and paste into the decodeAttestation field on Etherscan:


The “verify_epid” function is the main function in this demonstration.

function verify_epid(string memory userReportData, bytes memory attestation) public view returns(bool);

It can be found in src/MyRave.sol of the amiller/gramine-forge repo on GitHub.

The high level pseudocode is this:

  • The root public key from Intel is hardcoded (but it can be independently found in many other places, e.g. the Gramine source code or a Web Archive snapshot).
  • The attestation is parsed into a report and a signature from ABI-encoded bytes.
  • The signature is verified against the root public key as a signature over the report data.
  • The MRENCLAVE value from the report, which is a hash over the enclave program, is matched against a reference we pass in.
  • The userReportData, which is set by the enclave at runtime, is also matched against a reference.
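The steps above can be sketched off-chain as follows. This is illustrative pseudocode in Python, not the contract's actual API: verify_sig stands in for the RSA check against Intel's root key, and the report is assumed already parsed into a dict.

```python
def verify_epid(report: dict, signature: bytes, verify_sig,
                expected_mrenclave: str, expected_report_data: str) -> bool:
    """Mirror of the contract's checks, with the signature step injected.

    report: {"raw": bytes, "mrenclave": str, "report_data": str}
    verify_sig: callable(raw_report, signature) -> bool
    """
    # 1. The signature must verify against Intel's root key over the report.
    if not verify_sig(report["raw"], signature):
        return False
    # 2. The measured enclave must be the reference we expect.
    if report["mrenclave"] != expected_mrenclave:
        return False
    # 3. The userReportData must match the caller-supplied reference.
    return report["report_data"] == expected_report_data
```

All four rejection paths matter: a verifier that skips the MRENCLAVE or report-data comparison accepts attestations from the wrong enclave entirely.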

Note that the root of trust here starts with the manufacturer; getting around that is out of scope for this article (although hopefully you can see how smart contracts would be useful for coordinating multiple enclaves from different manufacturers…).

The MRENCLAVE is the hash of the enclave program binary, analogous to how a contract's codehash commits to its bytecode. In this case the MRENCLAVE refers to the dummy attestation enclave explained in a moment, which is e3c2f2a5b840d89e069acaffcadb6510ef866a73d3a9ee57100ed5f8646ee4bb.

Checking the signature reduces primarily to verifying an RSA signature. RSA verification has already been implemented in Solidity, so the contract imports it from the dnssec-oracle library.
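The underlying RSA check is just modular exponentiation: the verifier raises the signature to the public exponent mod the modulus and compares the result to the (padded) message hash. A toy illustration with deliberately tiny numbers, for intuition only (real keys are 2048+ bits and use PKCS#1 v1.5 padding):

```python
# Toy RSA signature check -- tiny primes, no padding, not secure.
p, q = 61, 53
n = p * q                  # public modulus
e = 17                     # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # private exponent (Python 3.8+ modular inverse)

msg = 65                   # stand-in for the hashed report
sig = pow(msg, d, n)       # "sign": s = m^d mod n

# Verify using only the public key (n, e): s^e mod n should recover m.
assert pow(sig, e, n) == msg
```

The dnssec-oracle code performs this same exponentiation in Solidity, plus the padding checks omitted here.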

Dummy Attestation service in SGX/Gramine/Python:

Let’s look at the enclave that generated these attestations. I defined a dummy enclave that generates remote attestations for any userReportData chosen by the user. It attests to anything, so it’s not useful for any secure application. However, an attestation can only be generated from a valid SGX machine, so it’s not a trivial function to satisfy either.

To make it as conceptually simple as possible, the dummy attester enclave is implemented in python using Gramine:

  • It reads a 64-byte string from stdin.
  • It writes this to the attestation interface /dev/attestation/user_report_data.
  • It reads a result from the interface /dev/attestation/quote.
  • It returns this quote.
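In code, the whole enclave is only a few lines. A minimal sketch of that loop follows; the /dev/attestation paths are Gramine's attestation pseudo-files, and dev_dir is parameterized here only so the function can be exercised outside an enclave:

```python
def attest(report_data: bytes, dev_dir: str = "/dev/attestation") -> bytes:
    """Write 64 bytes of user report data, read back a fresh SGX quote.

    Inside the enclave, report_data is read from stdin and dev_dir is
    the real /dev/attestation mounted by Gramine.
    """
    assert len(report_data) == 64, "user_report_data must be exactly 64 bytes"
    # Gramine embeds whatever we write here into the quote's report_data.
    with open(f"{dev_dir}/user_report_data", "wb") as f:
        f.write(report_data)
    # Reading this pseudo-file triggers quote generation by the platform.
    with open(f"{dev_dir}/quote", "rb") as f:
        return f.read()
```

Outside SGX these pseudo-files don't exist, which is exactly why the attestation can't be forged on an ordinary machine.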

Note that you can try reproducing the enclave binary without having an SGX machine at all. The main limitation of this Python/Gramine approach is that it seems more likely to suffer reproducibility rot: the particular Python libraries packaged here may not remain identical as package maintainers update them. But maybe this can be mitigated. The README lists a particular MRENCLAVE, so you can check whether it remains reproducible using Docker.

As a best-effort service, for a little while after this post I'll provide a service that generates fresh attestations to play with on demand.
(If the service goes down, it will remain possible to run your own enclave using the docker compose instructions.)


The on-chain Solidity attestation checker in this example is just a starting point: it's necessary but not sufficient. Any application or framework using remote attestation would need to add more constraints to the verification. Most enclaves don't get a straightforward "OK" but come with warnings - which configurations should we tolerate? Do we only care about a single program, or a range of acceptable programs? The structure of the userReportData itself is also up to the application to define. And what about DCAP, since this demo appears to be EPID-specific?

Answering any of these questions requires a deeper understanding of SGX… but it may only require writing more Solidity code. The main takeaway from this demonstration is that verifying remote attestations is a naturally on-chain activity. The availability of Solidity libraries for verifying remote attestations now makes Solidity a very plausible way to define remote attestation policies.


Follow Up: Now we have a Solidity-based verifier for DCAP as well!

In the last post I focused on smart contract verification of SGX attestations using RAVE. If you're following SGX, you may have noticed that we covered EPID attestations, but these will soon be deprecated in favor of DCAP. Fortunately, Automata have just open-sourced automata-dcap-v3-attestation, their counterpart to RAVE: a suite of Solidity contracts that verify DCAP attestations.

Exploring this library marked a milestone for me, since it was the first time I've gotten an end-to-end verification demo of DCAP to work on my local SGX, despite trying several times before. Because I used the same gramine-dummy-attester enclave from the last post (just switched from epid to dcap), this demonstration also shows off some modularity of the web3 TEE stack: the Automata developers and I used very different enclave frameworks for development (I used Gramine, they use Teaclave). Regardless, the Solidity code verifies the attestations produced by my enclave just as well (with small changes, documented here). Instead of a demonstration on Sepolia, I'm settling for checking in the attestation I generated as a passing test vector, and skipping straight to the insights.

Moving to DCAP reduces reliance on Intel.

The main reason to prefer DCAP over EPID is that it cuts out an unnecessary round trip of interaction with Intel's IAS service. Besides adding latency, that round trip would allow Intel to censor individual applications. With DCAP, this interaction is removed. The only remaining interactions with Intel's services, specifically provisioning a device and fetching TCB Infos, don't involve application identifiers at all, so they don't have this problem.

A Solidity verifier can make switching to DCAP easier.

Almost every prototype demonstration of SGX, if it includes remote attestation at all, stops at EPID rather than DCAP. Why is this? One reason is that unlike EPID, where there are somewhat generic tools like gramine-sgx-ias-verify, there aren't readily available tools for verifying DCAP. So, having a Solidity verifier available serves as a portable reference.

DCAP benefits even more from on-chain accountability.

A second reason DCAP is tricky is that verification involves an extra step: dealing with "TCB info" packets. These are signed messages from Intel, indexed by processor family, that are updated during TCB recoveries (e.g. after vulnerabilities are announced). This process would benefit from the TCB infos being stored on-chain (as Automata's contract does), or at least referenced on-chain, in order to keep Intel accountable for these messages (discussed here).

Note that in Automata's demo, the TCBInfos are posted on-chain, but they must be provided by the contract owner, and the signature from Intel is not checked on-chain. The best way to describe this arrangement is that the contract provides accountability for the Automata developers, but skips an opportunity for proactive checking. The contract owner might be expected to keep Intel's signatures around for off-chain inspection, but whether anyone is paying close enough attention to hold them to it is unclear!

Conclusion: what next?

Support for TEEs in web3 has come a long way, and Solidity verifiers for DCAP remote attestation are a great milestone. What's next? The following stood out after finishing this DCAP experiment:

PCCS still needs simplifying.

The most frustrating part of setting up DCAP was the need to sign up with my email address for an Intel API key, this time for the "PCS" service. It's frustrating because it looks like another Intel bottleneck, which would defeat the point of DCAP. But this step is actually entirely unnecessary.

What's going on? Basically, DCAP relies on a certificate chain. The certificates are accessed during attestation, so a high-trust component (the quoting enclave) is responsible for fetching them. Like TCBInfos, they are indexed by processor type and updated periodically. Since they can be authenticated, and they don't depend on the application or the particular physical processor, they could be mirrored and provided by anyone. However, Intel only hands them out directly if you use an API key, and the default implementation (pccs) only guides you through that path. A lightweight alternative to this caching, bypassing the need for an API key, would remove another barrier to entry for DCAP demos.
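Because the collaterals are signed by Intel and re-verified by every client, such a mirror needs no trust and no API key of its own. A minimal sketch of the caching idea (not the real pccs API; fetch stands in for whatever upstream source holds an API key):

```python
import hashlib
import os

def cached_fetch(url: str, fetch, cache_dir: str = "./pcs-cache") -> bytes:
    """Serve a DCAP collateral from a local mirror, fetching it at most once.

    fetch: callable(url) -> bytes, the only component that ever talks to
    Intel. Clients re-verify Intel's signatures on the returned collateral,
    so the cache itself is untrusted.
    """
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, hashlib.sha256(url.encode()).hexdigest())
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    body = fetch(url)
    with open(path, "wb") as f:
        f.write(body)
    return body
```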

Need for testing under diverse system configurations.

The only reason I needed to modify Automata's code at all was that my enclave behaved differently from theirs; both were within the spec, but the implementation didn't handle the general case (this has since been patched). There seems to be no good replacement for verifying compatibility under a variety of different system configurations. This is an opportunity to coordinate: different teams are likely to have different configurations, and maybe we can exchange continuous integration for coverage.


Unfortunately, SGX EPID remote attestation has been deprecated and will be offline in early 2025.

So, DCAP (aka ECDSA) is the future.

(Off-topic: recently there's been a critical vulnerability in AMD SEV, so I believe it is not reliable. I'm interested in ARM CCA, but it is at an early stage and I can't find a product for sale yet, so SGX is still the best TEE tech IMO.)

I'm a Phala Network co-founder and worked on its EPID support. I recently implemented DCAP support as well.
Here’s our implementation (in Rust):

The theory looks simple:

  • get the quote and a bundle of collateral for the quote
  • validate the integrity
  • parse the data
  • extract fields from the parsed data (TCB levels and enclave info)
  • run the matching algorithm to find the best suitable TCB level
  • sum up the verification result

In this process, you need to:

  • examine x509 certs, including validating cert chains and CRLs
  • parse DER and get fields from DER objects
  • parse JSON and get fields
  • loop over the data for iteration and calculation

However, I think these parts are challenging to implement in Solidity.
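The TCB-level matching step is the most algorithmic of these. A rough Python sketch of the usual rule, as I read it from the DCAP collateral format (field names follow Intel's TCB Info JSON; this is an illustration, not Phala's actual implementation): walk the TCB levels in the order Intel publishes them, newest first, and return the status of the first level the platform's SVNs meet or exceed.

```python
def match_tcb_level(cpu_svn: list, pce_svn: int, tcb_levels: list) -> str:
    """Return the tcbStatus of the first TCB level the platform satisfies.

    tcb_levels: as in Intel's tcbInfo, sorted newest/strictest first.
    Each entry: {"sgxtcbcomponents": [16 ints], "pcesvn": int,
                 "tcbStatus": str}.
    """
    for level in tcb_levels:
        # Every CPU SVN component must meet or exceed the level's minimum.
        components_ok = all(
            have >= need
            for have, need in zip(cpu_svn, level["sgxtcbcomponents"])
        )
        if components_ok and pce_svn >= level["pcesvn"]:
            return level["tcbStatus"]
    return "TCB_UNRECOGNIZED"  # platform matched no published level
```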

Although I have something of a conflict of interest, I am still glad to see TEEs work great with EVM chains.
I wonder if we can use ZK technology to move the validation off the EVM, transforming DCAP verification into a ZK proof, which would be easy to validate on-chain.
I know the current ZK VMs are still slow, but the good news is we don’t need to verify DCAP quotes frequently, so the disadvantage is acceptable.

I may have time to do a PoC soon over my holiday.


This is a great summary and breakdown, thanks for sharing this rust impl too.
Don't forget to see the follow-up post in the thread: Automata did indeed do a pure Solidity implementation, automata-dcap-v3-attestation, though there's still a bunch to improve on.

Thanks Andrew for the insightful writeup. I guess the next steps are clear: improve the usability of the existing Solidity DCAP verifier, and provide a more developer-friendly breakdown of the actual attestation workflow for DCAP.

For the former, we are currently working on a design for a Web3-based PCCS service, so look out for it and let's have a conversation about it. We have also pushed an upgrade where we replaced our naively implemented secp256r1 signature verification library with a much more optimised version (GitHub - daimo-eth/p256-verifier: P256 signature verification solidity contract). Would highly recommend reading the paper that describes the optimisations used.


Maybe a noob question but why RSA?

As far as I understood, remote attestation on SGX relies on the P256 curve.

In SGX, the about-to-be-deprecated EPID uses RSA signatures, while the current standard DCAP uses P256, so P256 is the more relevant one. This post just started out using RAVE, which was developed a little earlier, so it made sense at the time to target EPID.


After a long time of silence,
my teammate @tolak_eth has finished the on-chain verification.

See the doc here: zk-dcap-verifier/ at main · tolak/zk-dcap-verifier · GitHub

The whole workflow has worked now!


I have a simple question - when it comes to remote attestation, who sends the request and when does this happen? Or has this not been decided yet?

Hi banr1 :smiley: feel free to rephrase if I didn't pick up that question exactly. In this workflow the attestation is carried out by a kettle when it joins the network. offchain_Register is the first function a new kettle runs to join the network; this runs in the coprocessor and generates the attestation. onchain_Register puts it on-chain.


Sorry, my question was not clear.
In the SUAVE Chain, I believe TEEs like Intel SGX will be introduced in the future. In that case, will the end user be the one sending the remote attestation request to Intel SGX? And if so, when will they send it? Will it be in a form where the user can send the request at any time they want to perform verification?

@banr1 you can already experiment with this through GitHub - flashbots/gramine-andromeda-revm :slight_smile:
All the kettle operations can be found in this repo GitHub - flashbots/andromeda-sirrah-contracts: forge development env for SUAVE key management.

In that case, will the end user be the one sending the remote attestation request to Intel SGX?

Right now we imagine it's the kettle operator (whoever is running the TEE) performing the remote attestation (with Intel or otherwise); currently this is done via Gramine's attestations. The end user can then check that the quote was verified on-chain. The end user could request remote verification too, but that's rarely needed and more of an edge case.