Using remote attestation as part of a confidential computing environment, you can verify that a remote server is running an exact set of trusted code. Attestation identifies an application by "something it is" rather than "something it has," greatly enhancing the granularity of and trust in an application’s identity.
Authentication at the application level is challenging. Today, services authenticate themselves by demonstrating ownership of something like an X.509 certificate, API token, private key, or IP address. In other words, most application-level identity relies on "something you have," and if that "something" is leaked or stolen, it can be used to impersonate your application.
Sensitive data, such as API keys, cloud credentials, and certificates, can be stolen and misused. Even networking-related resources are vulnerable to theft. Cloud IP addresses and DNS subdomains can be hijacked, and attackers can manipulate DNS servers to compromise the resources used to prove your application’s identity.
By leveraging confidential computing, you can significantly enhance your identity management by using "something you are" as a form of application identity.
Creating Truly Verifiable Services from Source Code to Running Applications
In confidential computing, applications run in hardware-based trusted execution environments, also known as secure enclaves. If you are new to secure enclaves, we recommend reading the introductory guide What is Confidential Computing?
Secure enclaves can generate an attestation document, which is essentially a verifiable "fingerprint" (a set of measurements, or hashes) of the code running inside the enclave. Conceptually, attestation extends a Trusted Platform Module's (TPM) measured-boot integrity check beyond the firmware and kernel levels to the application level.
Attestation documents are signed at the hardware level, allowing anyone communicating with an enclave to verify the document's authenticity. Clients can verify the attestation document directly, similar to the verification process for a JSON Web Token (JWT), or they can delegate the verification to a trusted third party, just as OpenID Connect or SAML interact with an identity provider. Nonces or timestamps ensure freshness to prevent replay attacks. For the gory technical details, refer to the IETF Remote Attestation Procedures (RATS) Architecture RFC.
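To make the verification flow concrete, here is a minimal sketch in Python. It is illustrative only: real attestation documents are signed by an asymmetric key rooted in the hardware and verified against the vendor's certificate chain, whereas this sketch substitutes a symmetric HMAC so it stays self-contained. All names and document fields are hypothetical.

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical stand-in for the hardware signing key. Real enclaves sign with
# an asymmetric key rooted in silicon, verified via a vendor cert chain.
_HW_KEY = b"simulated-hardware-key"

def issue_attestation(measurement: str, nonce: str) -> dict:
    """Simulate the enclave producing a signed attestation document."""
    payload = json.dumps({"measurement": measurement, "nonce": nonce},
                         sort_keys=True).encode()
    signature = hmac.new(_HW_KEY, payload, hashlib.sha256).hexdigest()
    return {"measurement": measurement, "nonce": nonce, "signature": signature}

def verify_attestation(doc: dict, expected_nonce: str) -> bool:
    """Check the document's signature first, then its freshness."""
    payload = json.dumps({"measurement": doc["measurement"],
                          "nonce": doc["nonce"]}, sort_keys=True).encode()
    expected_sig = hmac.new(_HW_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, doc["signature"]):
        return False  # forged or tampered document
    return doc["nonce"] == expected_nonce  # replay protection

# Usage: the client supplies a fresh nonce and checks it comes back signed.
nonce = secrets.token_hex(16)
doc = issue_attestation("app-measurement-hash", nonce)
assert verify_attestation(doc, nonce)  # signed, fresh, untampered
```

The nonce plays the same role as the freshness claims mentioned above: a stale document replayed by an attacker fails verification even though its signature is valid.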
With an attestation document, we possess an unforgeable and unstealable fingerprint of an application, ensuring the code and data run in a secure environment and have not been tampered with. By matching the fingerprint to a trusted application, the attestation document can authenticate the application's identity, guaranteeing that the running instance is unmodified and trustworthy.
So how can we use attestation to match this “fingerprint” to an application? There are two scenarios.
Scenario 1: Trusting the Application Author
When we trust the application author, attestation can be employed to verify that the online service has not been compromised (e.g. serving malware with a stolen TLS certificate). The application author can build the application and publish the expected attestation measurements for each release to a public transparency log. This is similar to Sigstore, but Sigstore is limited to verifying software that we download and run locally. With confidential computing, we can also verify online services at runtime, even after they are already running.
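As a sketch of how a client might consume such a log: assume the author publishes a mapping from release version to expected enclave measurement. The structure, names, and hash values below are hypothetical; in practice this would be an append-only, publicly auditable transparency log rather than a dictionary.

```python
# Hypothetical published log: release version -> expected enclave measurement.
PUBLISHED_MEASUREMENTS = {
    "v1.2.0": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    "v1.3.0": "60303ae22b998861bce3b28f33eec1be758a213c86c93c076dbe9f558c11c752",
}

def is_trusted(version: str, reported_measurement: str) -> bool:
    """Accept the service only if its attested measurement matches the log."""
    expected = PUBLISHED_MEASUREMENTS.get(version)
    return expected is not None and expected == reported_measurement
```

A client that receives an attestation document from a service claiming to run release `v1.2.0` simply checks the reported measurement against the published entry and aborts on any mismatch or unknown version.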
Scenario 2: Verifying Source Code and Dependencies Too
In the world of local software, reproducible builds enable us to verify that no vulnerabilities or backdoors were introduced during the compilation of source code. Projects like NixOS invest considerable effort to enable the creation of bit-for-bit identical artifacts from source code. Traditionally, reproducible builds were useful only for software run locally, since it was impossible to verify the code running on a remote server. With confidential computing, attestation closes this gap.
If the source code is available, you can audit it. Then, you can build it yourself as an enclave in your own trusted environment. The resulting measurements become your known-good reference measurement. Subsequently, you can retrieve an attestation document from a deployed service, verify it, and compare the service's measurement to your known-good measurement. If the measurements match, it proves that the deployed service runs the intended code and not a malicious version.
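A minimal sketch of this comparison, using a SHA-256 over the built artifact as a stand-in for the hardware-computed enclave measurement (in real systems the measurement covers the code and initial state actually loaded into the enclave, and the artifact bytes below are placeholders):

```python
import hashlib
import hmac

def measure(artifact: bytes) -> str:
    # Stand-in for the enclave measurement; real hardware computes this over
    # the code and initial state loaded into the enclave.
    return hashlib.sha256(artifact).hexdigest()

# You build the audited source yourself; this becomes the known-good reference.
local_build = b"\x7fELF reproducible-build-output"
known_good = measure(local_build)

def matches_known_good(reported_measurement: str) -> bool:
    """Constant-time comparison of the service's attested measurement."""
    return hmac.compare_digest(known_good, reported_measurement)
```

Because the build is reproducible, anyone who audits the same source arrives at the same `known_good` value independently, so no one has to trust anyone else's reference measurement.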
For the first time, we can extend trust from the source code and its supply chain not only to its software binary but also to an actual instance of remotely running software, an achievement previously impossible without confidential computing.
Implications for Security and Identity
In both scenarios, attestation protects against malicious actors. If an attacker hijacks the service's IP address or DNS hostname, they cannot generate valid attestation documents, so clients can abort the session. Even in a supply chain attack where malware is injected into the service, the attestation document will yield different measurements, signaling clients to abort, despite the compromised service presenting a valid API key or credential.
A Cookbook for Truly Verifiable Services
Here’s an overview of implementing attestation:

1. Audit the application's source code (or decide to trust the author's published measurements).
2. Build the application as an enclave in your own trusted environment, reproducibly.
3. Record the resulting measurements as your known-good reference.

Now that you have known-good measurements for a running application, you or your clients can attempt verification:

1. Request an attestation document from the deployed service, supplying a fresh nonce.
2. Verify the document's hardware signature and its freshness.
3. Compare the reported measurement to your known-good measurement, and abort the session on any mismatch.
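The client-side verification flow might look like the following sketch. Everything here is hypothetical: the document fields, helper names, and the symmetric HMAC standing in for a hardware-rooted asymmetric signature are chosen only to keep the example self-contained and runnable.

```python
import hashlib
import hmac
import json
import secrets

_HW_KEY = b"simulated-hardware-key"  # stand-in for a hardware-rooted key
KNOWN_GOOD = hashlib.sha256(b"audited-build-artifact").hexdigest()

def enclave_attest(nonce: str) -> dict:
    """Simulated service endpoint returning a signed attestation document."""
    payload = json.dumps({"measurement": KNOWN_GOOD, "nonce": nonce},
                         sort_keys=True).encode()
    sig = hmac.new(_HW_KEY, payload, hashlib.sha256).hexdigest()
    return {"measurement": KNOWN_GOOD, "nonce": nonce, "signature": sig}

def client_verify(get_attestation) -> bool:
    """Client flow: fresh nonce -> signature check -> measurement match."""
    nonce = secrets.token_hex(16)
    doc = get_attestation(nonce)
    payload = json.dumps({"measurement": doc["measurement"],
                          "nonce": doc["nonce"]}, sort_keys=True).encode()
    sig = hmac.new(_HW_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, doc["signature"]):
        return False  # forged or tampered document: abort
    if doc["nonce"] != nonce:
        return False  # stale document replayed: abort
    return hmac.compare_digest(doc["measurement"], KNOWN_GOOD)
```

A compromised service fails at one of the three gates: it cannot produce a valid signature without the hardware key, cannot replay an old document past the nonce check, and cannot match the known-good measurement while running modified code.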
Remote attestation in confidential computing enables unforgeable and unstealable software identity based on "something you are" rather than "something you have." Ensuring the exactness of a remotely running service, down to the last bit, becomes possible. While this process may seem complex, Anjuna's Confidential Computing Platform provides abstraction and automation that simplifies the entire process.
If you want to learn more and see Anjuna in action, sign up for our next live demo.