Frequently Asked Questions
Confidential Computing is a hardware-based technology that protects code and data in ways software alone cannot. It uses capabilities in CPUs that already exist in a company’s data center or in the compute instances used in the cloud. It is widely supported infrastructure, but it must be turned on and enabled, which is why Anjuna Seaglass exists as a runtime that makes it easy. Take an application, run it isolated from risk, done.
Confidential Computing has two pillars:
- Privacy over data and code during processing: This fills the security gap between data at rest and data in transit. Confidential Computing uses powerful hardware features on the CPU (Trusted Execution Environments, or TEEs) to ensure confidentiality and integrity, to prevent access to the memory of the running application during processing, and to protect against hypervisor threats, kernel vulnerabilities, and root users.
The application running inside a TEE is the only one that can access its memory in the clear, which enables secure operations like decrypting sensitive datasets or running AI models on private information without exposing code or data. Think of it as digital battle armor for applications: software protections alone are vulnerable when data is in use, but hardware isolation ensures trusted, private execution that the owner, not the operator or cloud, controls.
- Enabling proof of trust and identity for a workload: Confidential Computing enables identification of code in a way that cannot be spoofed. When software runs in a TEE, the hardware can provide cryptographic evidence that a) the software is running in a TEE, and b) only specific, authorized software is running, with no unauthorized code. This evidence cannot be spoofed by software. In that way, clients connected to that software can trust it and share sensitive data with it.
The result: simpler security, faster development without the effort of isolating risks or engineering around data security concerns, reduced risk, and easier compliance.
Anjuna Seaglass is a multi-cloud and on-premises software platform that allows running any application in a TEE without requiring any modification to the software. It provides out-of-the-box, transparent solutions that leverage the power of remote attestation, and it supports orchestration platforms like Kubernetes.
Confidential Computing can be used with most applications. Anjuna Seaglass doesn’t require code changes or rewrites—it works with AI/ML workloads, databases, APIs, analytics pipelines, and microservices. If it runs in a container or VM, it can run inside a confidential environment, only more securely.
Typical applications include:
- Cryptographic services or applications where keys, key caches, and code operations need fundamental integrity and trust.
- Key and secret managers that handle, share, or cache sensitive key material.
- AI agents and MCP servers handling agentic workflows with sensitive data and credentials.
- Key cache layers for credentials, tokens, and keys, such as Redis databases operating in memory.
- 3rd party container applications that require high levels of isolation from Admins, insider risk, and operators.
- Blockchain applications operating with keys assembled from components, validator nodes, blockchain-to-real-world data interfaces, and so on. These are ideal candidates because trust is critical and separation from operators is required for trustworthy blockchain operations.
- Enterprise applications processing PII, PCI, or other regulated data.
- Applications operating in partner clouds, on 1st party data.
- Data processing and preparation applications.
- Kubernetes application services requiring isolation, e.g. those managing PCI DSS-regulated data or PII in confidential pods.
- Data collaboration use cases, enabling collaboration across trust boundaries, e.g. two or more banks that need the results of an analysis (e.g. using AI) but cannot permit any participant to see their data or model. This AI Clean Room approach can enable new partnerships and cooperation that were previously impossible.
- Applications that need to protect their code from malicious readers when the code is intellectual property.
For example, I might have a database with transparent encryption for data storage, TLS keys in memory processing data in transit to and from an app, or keys used locally in an SDK to encrypt fields in an application?
When an application runs inside a TEE, the boundary of trust becomes the hardware isolation mechanism plus the attestation mechanism used to prove trust before injecting secrets, e.g. directly from an HSM. Confidential Computing, and specifically Anjuna Seaglass, can eliminate exposure of keys, credentials, and configs to humans, from before boot through runtime and use.
In traditional applications, keys may be protected by an HSM or, as is often the case, placed at boot time into environment variables to be picked up by an application. For example, a default nginx application will boot up and pull a TLS private key from a file. Per the nginx documentation, “The server certificate is a public entity. It is sent to every client that connects to the server. The private key is a secure entity and should be stored in a file with restricted access, however, it must be readable by nginx’s master process. The private key may alternately be stored in the same file as the certificate”. This is a terribly insecure way to handle a private key: in a file, in the clear, relying on access controls set by admins. Theft of this key will allow decryption of TLS traffic or impersonation of an nginx server to capture decrypted traffic for an attacker. Even if the key is protected by an HSM, the token used to access the HSM to retrieve the key or initiate its decryption into memory is itself vulnerable and a vector of attack. This is the fundamental “secret zero” problem: how do you protect secrets that must be available at boot without exposing them in the clear?
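As a concrete illustration of the pattern described above, a typical TLS server block looks like the following. The directives are standard nginx; the paths are illustrative:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # The public certificate, sent to every client that connects.
    ssl_certificate     /etc/nginx/ssl/example.com.crt;

    # The private key: sitting in the clear on disk, protected only
    # by file permissions that an administrator sets, yet readable
    # by the nginx master process.
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```

Anyone with root or disk access to this host can read the key file; the TEE approach described below removes that exposure.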
With Anjuna and Confidential Computing, this problem is comprehensively solved as follows, all handled automatically and with strong auditing by Anjuna Seaglass.
- When the application is built, strong cryptographic hashes (SHA-384 measurements) are taken of the software components, including the BIOS, GUID partition table, bootloader, application code (the full container), initial memory state, and configuration. This is what we expect to run, and nothing else: no unauthorized code, no untrusted components, nothing but what we built. This is the enclave image container build phase. With the Anjuna CLI, this is “Anjuna build <target>”.
- When the image is taken to the runtime environment, e.g. a cloud or datacenter, to run (using the Anjuna CLI: “Anjuna run <image>”), the TEE is instructed to re-measure the enclave image file before execution. An immutable hardware opcode essentially re-measures the code presented to it, outputting PCR registers describing what was presented. The result is digitally signed with a strong private key, which enables the attestation report to be verified against roots of trust independent of the infrastructure operator.
- If anything has changed, these measurements will fail to match, and boot can be aborted because we do not trust the software. If the opcodes also do not produce what we expect for the infrastructure measurements, e.g. CPU firmware/microcode patch level, boot can be aborted.
- If everything matches, we know a) we are running on a trustworthy TEE with strong isolation, and b) the code we are booting is trusted: what we built, and only what we built, has been verified by the hardware itself.
- The combination of the signed report and the measurements presents a unique and trusted identity for the workload as booted. This identity and report can be verified by attestation-aware systems. Anjuna’s Policy Manager (APM) is an attestation-aware endpoint that bridges to cryptographic infrastructure (e.g. KMS, IAM) to retrieve encrypted keys or secrets and present them to the TEE directly, end to end. APM runs in a TEE itself. The result is that once trust is verified, keys and secrets can be securely delivered into the TEE, where Anjuna’s runtime presents them decrypted for the application to pick up.
To nginx running in the TEE, its private key appears in the file system as normal, but it has been placed there after trust was established and is end-to-end secured from APM to the runtime. A powerful key exchange mechanism enables this to be automated.
The result is trust verification of the workload, secure isolation of it at runtime, and secure injection of secrets without exposing them to CI/CD systems, humans, or attackers.
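The flow above can be sketched in a few lines. This is a hypothetical, simplified model, not Anjuna’s implementation: the HMAC key stands in for the TEE’s hardware-protected signing key, and all names are illustrative.

```python
import hashlib
import hmac

# Stand-in for the hardware-protected key a real TEE uses to sign reports.
HW_SIGNING_KEY = b"stand-in-for-hardware-protected-key"

def measure(image: bytes) -> str:
    """Build-time and boot-time measurement: a SHA-384 hash of the image."""
    return hashlib.sha384(image).hexdigest()

def sign_report(measurement: str) -> str:
    """Model of the TEE hardware signing the measurement observed at boot."""
    return hmac.new(HW_SIGNING_KEY, measurement.encode(), hashlib.sha384).hexdigest()

def verify_and_release(expected: str, reported: str, signature: str, secret: bytes) -> bytes:
    """Release a secret only if the signed measurement matches what we built."""
    if not hmac.compare_digest(sign_report(reported), signature):
        raise RuntimeError("attestation report signature invalid")
    if not hmac.compare_digest(expected, reported):
        raise RuntimeError("measurement mismatch: aborting boot")
    return secret  # safe to deliver into the TEE

# Build phase: record the expected measurement of the enclave image.
image = b"enclave image contents"
expected = measure(image)

# Boot phase: hardware re-measures and signs; verifier checks, then releases.
booted = measure(image)
report_sig = sign_report(booted)
tls_key = verify_and_release(expected, booted, report_sig, b"PRIVATE-KEY")
```

A tampered image would produce a different measurement, so `verify_and_release` would refuse to hand over the key and boot would be aborted.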
Confidential Computing has trust and isolation properties that uniquely enable:
- Data collaboration across trust domains. Imagine a bank that wants to share data with another bank to analyze market risk. Privacy and breach risks would be a showstopper in many cases. However, with Confidential Computing, a secure environment can be created that a) allows data to be encrypted to it exclusively, b) allows AI models to be presented to it, and c) lets results emerge without human access or unauthorized software observing the process. This “AI Clean Room” approach can enable new partnerships and cooperation that were previously impossible, without resorting to synthetic data, anonymization, legal clearing houses, and so on.
- Digital rights management for code and data shared with 3rd parties. You no longer have to trust a partner to limit use of code and data: it can be placed in a Trusted Execution Environment. Sensitive trading insights that have value when private and no value when public can be utilized in new ways, monetizing their value while preserving privacy.
- Applications where “no human access” needs to be guaranteed, not just assumed. For example, highly sensitive data feeds during a data transfer from one sovereign domain to another for processing, without exposing the data to IT, CSPs, operations, or admins. A strong privacy regulation may mandate “equivalence of processing” to the originator of the data, with strong evidence for compliance. This often requires a guarantee of no human access and no unauthorized code or data access while still enabling results to be derived, e.g. for fraud detection. Confidential Computing creates the foundations to enable this.
- Solving “secret zero”. When applications are first booted, credentials traditionally need to be sent to them in the clear, for example an initial decryption key or an access token to a secrets manager used to retrieve more secrets or unlock files. Secret zero is often placed in environment variables, in files, or even sometimes in code and containers (a very risky practice!). In Confidential Computing, secrets are instead released only after attestation proves trust.
Yes. Anjuna Seaglass integrates with Kubernetes and container platforms, so you can run pods securely inside Confidential Containers. The main implication is stronger isolation: workloads are protected from operators, infrastructure, and cloud providers. Day-to-day Kubernetes workflows remain the same.
Low to none for most applications, especially memory-bound processes. Typically, processing overhead is in the low single-digit percentage range and can be zero for memory-based applications; for most workloads, the cost/performance impact is negligible compared to the security benefit. Unlike classical security technologies that intercept applications to, e.g., encrypt data in software or via API layers, Confidential Computing uses dedicated on-chip hardware for memory protection. This means low impact and high security. Confidential Computing can also scale horizontally, e.g. with Kubernetes, to meet a performance goal. The AWS Nitro Enclaves VSOCK connection can add some overhead for heavily I/O-bound applications; Anjuna can work with customers on specific cases to advise.
There are cases where using Confidential Computing will improve application performance. Some examples:
- Bringing the key caches closer to the clients in the cloud, instead of having to reach to an on-prem KMS (in a mainframe, for example) for each transaction.
- Applications that do extra activity to obfuscate their memory. Running the application in a TEE removes that need.
- Applications that need keys, prime numbers, or other sensitive data for operation can pre-generate them and keep them ready for use. Instead of limiting pre-generation to reduce exposure, a TEE keeps the pre-generated material protected in memory.
Confidential Computing with Anjuna Seaglass scales just like current orchestration infrastructure, across Kubernetes clusters and clouds. It is multi-cloud and hybrid by design, so you can add Confidential Computing capacity on demand without re-architecting.
Deploy workloads through Anjuna Seaglass into Confidential Containers. No application changes are required. Existing CI/CD, container images, and orchestration tools continue to work; Anjuna Seaglass handles the confidential runtime, trust, and scale with Kubernetes. It requires just one line of change to a spec, or a couple of additional script lines in a CI/CD pipeline.
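In Kubernetes terms, a one-line spec change could plausibly look like selecting a confidential runtime class on the pod. This is a hypothetical sketch: `runtimeClassName` is a standard Kubernetes field, but the class name shown is illustrative, not necessarily Seaglass’s actual syntax.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments-service
spec:
  # Hypothetical class name; the actual value depends on the installation.
  runtimeClassName: anjuna-confidential
  containers:
    - name: app
      image: registry.example.com/payments:1.4
```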
For the most part, none. For maximum benefit, we recommend using the trust mechanisms to reduce dependence on secrets being present in traditional CI/CD processes, such as late-binding secret inserts into files pre-boot or the use of environment variables. Secret management for initial secrets can be simplified by using attestation to prove trust before secrets are presented. This may be a small change to an app launch process, with a modification to the CI/CD or Terraform, etc. We provide templates for this to make it very easy.
- Anjuna Seaglass on average reduces time to market for products requiring strong security by over 90%.
- With regard to MITRE ATT&CK, 77 of the top 185 attack techniques are eliminated and no longer possible using Confidential Computing, due to its hardware isolation and trust properties.
- Compliance controls that must be implemented to isolate workloads, for example the gamut of controls for PCI DSS requirements 3 and 4, can be simplified by delegating risk controls to Confidential Computing hardware. This enables isolation and segmentation at the compute layer rather than the network segmentation layer, which can reduce regulatory complexity and streamline compliance, especially for scaled Kubernetes applications processing cardholder data.
- Specifically in PCI DSS, Confidential Computing can ensure the entity (the organization under compliance) is confined to the TEE itself, which is a) the only entity with access to ephemeral keys protecting workloads in operation, b) the only entity capable of possessing and using cleartext keys to decrypt data during operation, and c) an entity whose access to data is limited and confined by the hardware isolation boundary. This can be evidenced by the attestation report.
- Control requirements for SOC 2, ISO 27001, HIPAA, GDPR, etc. can be simplified for controls focused on privacy of processing, by virtue of proof of isolation and attested code with a limited scope of data access and use.
- Replace software tools used to limit root access and insider threat, enable zero trust, and reduce data exposure, shrinking the stack of tooling.
- Instead of relying on synthetic data for AI training, Confidential Computing makes the whole computation private, allowing live data to be used in a secure and isolated fashion. This can reduce the need for data-minimization tools (anonymization, k-anonymization, synthetic data creation, masking, etc.), especially for AI processes and analytics over large semi-structured and unstructured data.
Agentic workflows involve autonomous AI agents that make decisions, call external APIs, and often handle sensitive data. The challenge is that these agents have dynamic behavior and may run untrusted or evolving code. Traditional software controls (ACLs, IAM, encryption at rest/in transit) can’t guarantee protection once data or code is actively being used. Worse still, agents may operate with credentials to interact with human interfaces in the real world, where theft or abuse is a substantial liability and a likely point of attack. Most “vibe coding” tools today ignore security best practices, stuffing credentials, private keys, API keys, and tokens into variables and files, or storing them in an unknown back end. Confidential Computing can mitigate these exposures with strong hardware-assisted guardrails, acting as an “Agent Shield”.
Confidential Computing fills this gap by:
- Isolating agents at runtime – Each agent (or set of cooperating agents) runs inside a Trusted Execution Environment (TEE), preventing the host, hypervisor, or cloud operator from tampering with or spying on them.
- Protecting sensitive memory state – Data, prompts, intermediate results, and model weights remain encrypted and isolated while the agent processes them.
- Providing attestation – Agents can prove they are running trusted code before they’re given access to keys, APIs, or private data. This prevents rogue or mutated agents from being onboarded.
- Enabling multi-party trust – Different organizations can contribute data or models into the same workflow without exposing them, since processing happens inside a neutral hardware-enforced enclave.
In practice, this means agentic workflows gain a hardware trust anchor: sensitive decisions, data exchanges, and even self-modifying logic can operate safely without exposing vulnerabilities to the infrastructure layer.
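A minimal sketch of attestation-gated credential release for an agent, assuming hypothetical names throughout; a real broker would verify hardware-signed evidence rather than the HMAC stand-in used here.

```python
import hashlib
import hmac

# Stand-in for the TEE's hardware signing key (illustrative only).
TEE_KEY = b"stand-in-for-tee-signing-key"

# Allowlist of measurements for agent code we are willing to trust.
TRUSTED_AGENTS = {hashlib.sha384(b"approved agent code").hexdigest()}

# Credentials the broker guards (hypothetical names).
VAULT = {"payments-api": "tok-123"}

def evidence_for(agent_code: bytes) -> tuple[str, str]:
    """What the TEE would emit: a measurement plus a hardware signature."""
    m = hashlib.sha384(agent_code).hexdigest()
    return m, hmac.new(TEE_KEY, m.encode(), hashlib.sha384).hexdigest()

def release_credential(name: str, measurement: str, sig: str) -> str:
    """Hand out a credential only to attested, allowlisted agent code."""
    expected = hmac.new(TEE_KEY, measurement.encode(), hashlib.sha384).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise PermissionError("invalid attestation evidence")
    if measurement not in TRUSTED_AGENTS:
        raise PermissionError("unknown or mutated agent code")
    return VAULT[name]

# A trusted agent proves its identity before receiving an API token.
m, s = evidence_for(b"approved agent code")
token = release_credential("payments-api", m, s)
```

A rogue or mutated agent would produce a different measurement and be refused, which is the onboarding gate described in the attestation bullet above.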
Without Anjuna, substantial effort is required to design, port, and migrate applications to AWS Nitro Enclaves. The application needs to be split into trusted and untrusted components; integration with EKS and KMS or HSM stacks is required, as is the trust infrastructure. Storage, networking, and protocol management must be built from the ground up. Add in scale, monitoring, and debugging, and there is a lot of undifferentiated heavy lifting using low-level SDKs and kernel-level engineering.
See the following blog for details. While written from a SaaS context, this blog shows the required activity related to Nitro Enclaves in more detail and explains how Anjuna Seaglass simplifies the process and reduces effort.
AWS Nitro Enclaves are available on almost all instance types and are proprietary to AWS. Anjuna Seaglass supports AWS Nitro Enclaves, Azure AMD SEV-SNP, GCP AMD SEV-SNP, and on-premises AMD SEV-SNP. TEEs are already present both in the cloud and in many data centers.
At an industry level, AMD EPYC (SEV-SNP), Intel Xeon (TDX) CPUs, and NVIDIA H100 GPUs support confidential computing. Any server acquired in the last 3-4 years will likely have the capability. A software stack like Anjuna is required to use it without complex development and supports major clouds and on-premise deployments.
NVIDIA H100 GPU and Intel TDX support at present requires professional services assistance from Anjuna for bare metal implementation. Please contact Anjuna for details.
Confidential Computing shifts trust from software controls, which can be bypassed, to hardware-enforced isolation. This means your code and data are only accessible to your application, not accessible by cloud operators, admins, or attackers. This can be proved with cryptographic attestation, enabling secure collaboration and compliance with confidence.
Anjuna Northstar is a software solution, deployed into a customer’s cloud, that enables end users across the customer and its partners to create collaborative confidential AI clean rooms and agree on policy-driven collaboration workflows to train models without exposing live data, then quickly move a model to production. As a powerful GUI-driven AIOps tool, it can be used by AI analysts and data scientists who want a clean, powerful sandbox environment with hardware isolation to drive rapid iteration of models and enable the use of rich data. Northstar quickly meets data-sharing demands, reducing setup time by up to 90% compared to traditional clean rooms. A bank looking to optimize a model for analysis of fraud or AML data can quickly partner with a provider, overcome the friction and delays caused by concerns over model or data sharing, and collaborate on rich data with privacy-of-processing guarantees.
Anjuna’s official product documentation can be found here.
Running with Anjuna has the following benefits:
- No need to recode the application.
- Native support for all major cloud providers and K8s architectures.
- 24/7 Confidential Computing expert support.
Yes. The Anjuna Seaglass Platform allows customers to run any application (third party or homegrown) and protect it on Confidential Computing, without any source code modifications, and without the need to rebuild the application.
The Anjuna platform is not limited to specific applications. Customers run applications ranging from 3rd-party container apps, powerful secrets managers, cryptographic systems, AI models including LLMs, Kubernetes applications, microservices, ML platforms (e.g. NVIDIA Triton), data processors, API scanners, and ETLs: anything where sensitive data and code need stronger protection and isolation from insiders, attackers, privileged users, and malware.
Yes. Any monitoring agent (Datadog, CloudWatch, Fluentd, etc.) that you can run inside your container will work the same way from inside the Confidential Container.
Yes. Application logs and other debugging options remain available while running an application inside a Confidential Computing enclave. The Anjuna team can assist with any enclave configuration needed for specific debugging or logging options.
Yes. Licenses can be purchased from Anjuna, its partners or from Cloud marketplaces where available. Free Trials are also available from Anjuna, and can be requested from the Anjuna Security website.
Yes. NVIDIA currently offers a mechanism for Intel and AMD processors to front-end confidential GPU functions in the H100/H200 and later series, and cross-device attestation helps establish a channel from host to GPU for confidential operations. Future implementations will utilize a hardware memory-system connection called TDISP; this is expected in 2026-2027 for on-premises deployments, with clouds to follow.
Anjuna has NVIDIA support in Seaglass - contact us for more details.