We’ve all seen the media coverage of futuristic cyberattacks - stealing passwords from the sound of your keystrokes, or inducing bit flips at the hardware level by repeatedly writing to neighboring memory rows.
Memory dumping sounds like one of these futuristic attacks: an attacker exfiltrates the memory of a running process, finds plaintext encryption keys or credentials for lateral movement, and it’s game over. Is it realistic?
Unfortunately, yes: CircleCI, a leading CI/CD platform, recently suffered a breach in which attackers gained access to customer data. Their incident report shows that they followed many security best practices: two-factor authentication (2FA), single sign-on (SSO), and encryption at rest for customer data. But to quote the report:
Though all the data exfiltrated was encrypted at rest, the third-party extracted encryption keys from a running process, enabling them to potentially access the encrypted data.
Of course it is: if data is encrypted at rest, a running process needs the keys in memory to read and write that data. With that knowledge, an attacker can exfiltrate the keys to the kingdom and steal customer data. The following is a simple representation of the steps the attackers took to breach CircleCI’s security systems.
How the Attack Unfolds:
These attacks are not unique to any particular application, operating system, or cloud service provider. Any system can be targeted, which is why we see memory dump attacks on every platform: LSASS on Windows, Keychain on macOS, ssh-agent on Linux, and so on.
- Find the PID (process ID) of the target application
- Run `gcore -o memdump $pid` to dump its memory (gcore writes the dump to `memdump.$pid`)
- Run `strings memdump.$pid` to extract the printable text
- Pipe the output to `grep` to find interesting data
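The `strings | grep` stage of those steps can be sketched in a few lines of Python - a single scan over the dump file for printable runs that look like credentials. The file name and the patterns below are illustrative, not taken from the CircleCI report:

```python
import re

# Printable ASCII runs of 8+ bytes, like `strings -n 8`
ASCII_RUN = re.compile(rb"[\x20-\x7e]{8,}")
# Illustrative credential-like patterns an attacker might grep for
INTERESTING = re.compile(rb"PASSWORD|SECRET|TOKEN|PRIVATE KEY|AKIA[0-9A-Z]{16}")

def scan_dump(path):
    """Yield printable strings in a memory dump that match credential-like patterns."""
    with open(path, "rb") as f:
        data = f.read()
    for run in ASCII_RUN.finditer(data):
        if INTERESTING.search(run.group()):
            yield run.group().decode("ascii")
```

This is the same effect as `strings -n 8 memdump.$pid | grep -E 'PASSWORD|SECRET|TOKEN'` - no exploit development required, just standard tooling pointed at unprotected memory.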
Encryption keys, credentials for lateral movement, or even valuable customer data are often just sitting in plaintext in memory. Once attackers gain access to the server, this unprotected memory is the natural target.
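To see just how exposed that data is, here is a minimal CPython-specific sketch: in CPython, `id()` returns an object’s memory address, so a process can peek at the same raw bytes that a debugger or a `gcore` dump would capture. The credential below is made up for illustration:

```python
import ctypes
import sys

# Hypothetical credential held by the application at runtime
secret = b"DB_PASSWORD=hunter2"

# In CPython, id() is the object's address; ctypes.string_at() reads
# raw process memory, just as a memory dump would capture it.
raw = ctypes.string_at(id(secret), sys.getsizeof(secret))

print(b"hunter2" in raw)  # prints True - the plaintext credential is right there
```

No decryption, no privilege escalation inside the process: the secret is simply there in the clear, waiting to be read.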
Security teams at companies like CircleCI face a tough challenge. You can do everything right: 2FA, SSO, RBAC/TBAC/ABAC, MDM/AV, IDS, and SIEMs - a whole alphabet soup of best practices that improve security at the access points. But at the end of the day, if an attacker gets access to a server, it’s all over: they’ll dump the memory and get your data. And as we saw, memory dumping can compromise encryption at rest and in transit, since the application needs those keys at runtime.
So how can we truly protect sensitive data, like an encryption key, while the application is using it? Today, we can use confidential computing to encrypt data-in-use using hardware-backed secure enclaves. In a secure enclave, memory is encrypted and cannot be accessed from the outside - even by an attacker with root access or with control of the VM hypervisor. If an application runs in a secure enclave, an attacker cannot dump its memory to get sensitive data.
Confidential computing and secure enclaves already protect features like your mobile phone’s biometric authentication, but the technology has not been commonly used in the data center because of the arcane knowledge and engineering complexity required. However, solutions like the Anjuna Confidential Computing Platform make it easy to run applications in secure enclaves - without rewriting them.
When the memory of a running application is encrypted, an attacker can no longer dump it for easy access to secrets. The entire lifecycle for data is now protected - at rest, in transit, and in use - and for the first time, data is never exposed in plaintext for easy exploitation.
So let’s get away from those castle-and-moat, hard-shell-soft-center security models. It’s now possible to bring the ideas of zero-trust architecture down to the application level: just because an attacker has made it into your server doesn’t mean they should be able to read your application’s memory. With confidential computing, it’s possible to eliminate memory dumping threats entirely - and with the Anjuna Confidential Computing Platform, it’s realistic to do it today.
To learn more: