Building Secure Systems using Trusted Execution Environments

Anitha Gollamudi


In the first part, I’ll talk about information flow control for distributed Trusted Execution Environments (TEEs). Distributed applications cannot assume that their security policies will be enforced on untrusted hosts. TEEs, combined with cryptographic mechanisms, enable execution of known code on an untrusted host and the exchange of confidential and authenticated messages with it. TEEs do not, however, establish the trustworthiness of code executing in a TEE. Thus, developing secure applications using TEEs requires specialized expertise and careful auditing. This talk presents DFLATE, a core security calculus for distributed applications with TEEs. DFLATE offers high-level abstractions that reflect both the guarantees and limitations of the underlying security mechanisms. The accuracy of these abstractions is exhibited by an asymmetry between confidentiality and integrity in our formal results: DFLATE enforces a strong form of noninterference for confidentiality, but only a weak form for integrity. This reflects the asymmetry of the security guarantees of a TEE: a malicious host cannot access secrets in the TEE or modify its contents, but it can suppress or manipulate the sequence of the TEE's inputs and outputs. Therefore DFLATE cannot protect against the suppression of high-integrity messages, but when such messages are delivered, their contents cannot have been influenced by an attacker. Joint work with Stephen Chong and Owen Arden.
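The asymmetry above can be illustrated with a toy model (a minimal sketch, not DFLATE itself, with a hypothetical shared key standing in for keys established by attestation): a host that relays authenticated messages can drop them, but cannot alter a delivered message without detection.

```python
# Toy model of a TEE's authenticated channel: the untrusted host may suppress
# messages (an availability/integrity loss DFLATE cannot prevent), but any
# message that is delivered and verifies has unmodified, high-integrity contents.
import hmac
import hashlib

KEY = b"shared-tee-key"  # hypothetical; real keys come from remote attestation

def seal(seq, payload):
    """Enclave side: tag a message with a MAC over its sequence number and body."""
    tag = hmac.new(KEY, f"{seq}:{payload}".encode(), hashlib.sha256).digest()
    return (seq, payload, tag)

def open_sealed(msg):
    """Receiver side: return the payload only if the MAC verifies, else None."""
    seq, payload, tag = msg
    expected = hmac.new(KEY, f"{seq}:{payload}".encode(), hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

sent = [seal(i, f"m{i}") for i in range(3)]

# The malicious host suppresses message 1: delivery is not guaranteed...
delivered = [sent[0], sent[2]]
assert all(open_sealed(m) is not None for m in delivered)

# ...but a forged message (reusing a valid tag with altered contents) is rejected.
forged = (1, "evil", sent[1][2])
assert open_sealed(forged) is None
```

This mirrors the formal results: confidentiality and content integrity hold for delivered messages, while suppression and reordering remain in the attacker's power.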

In the second part, I’ll present a mechanism to safely compose the verified and unverified components of an Intel SGX application (SGX being one instance of a TEE). Because SGX applications carry relatively little legacy code, they are attractive targets for verification. We use F* for developing specifications, code, and proofs, and then safely compile the F* code to standalone C code. However, this does not account for all code running within the enclave, which also includes trusted C and assembly code for bootstrapping and for core libraries. Moreover, we cannot expect all enclave applications to be rewritten in F*, so we also compile legacy C++ defensively, using variants of /guard instrumentation that enforce safety dynamically at runtime.
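To give a flavor of such defensive compilation (a minimal sketch, not actual /guard code generation; the function names here are hypothetical): a compiler can instrument every indirect call so that, at runtime, control may only transfer to targets it has approved, causing legacy code to fail safely instead of being hijacked.

```python
# Sketch of a compiler-inserted indirect-call check: before an indirect call,
# verify the target is in a whitelist the compiler computed; otherwise abort.

def ok_target():
    return "valid indirect call"

# Set of call targets the (hypothetical) defensive compiler has approved.
VALID_TARGETS = {ok_target}

def guarded_call(fp):
    """Invoke fp only if it is a known-legitimate target; otherwise fail safely."""
    if fp not in VALID_TARGETS:
        raise RuntimeError("invalid indirect call target")
    return fp()

assert guarded_call(ok_target) == "valid indirect call"
```

The point is that safety not established statically (as it is for the F* components) is instead enforced by checks executed inside the enclave at runtime.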

To reason about enclave security, we thus compose different sorts of code and verification styles, from fine-grained, statically verified F* to dynamically monitored C++ and custom SGX instructions. This involves two related program semantics: most of the verification is conducted within F* using the target semantics of KreMLin, a fragment of C with a structured memory, whereas SGX features and the dynamic checks embedded by defensive C++ compilers require lower-level x64 code, for which we use the verified assembly language for Everest (Vale) and its embedding in F*. Joint work in progress with Cédric Fournet.


Anitha Gollamudi is a 5th-year graduate student at Harvard University. Her research involves enforcing strong security guarantees against powerful attackers using Trusted Execution Environments. She has interned at Intel (Chandler, AZ) and Microsoft Research (Cambridge, UK). Prior to grad school, she worked as a compiler developer for the GCC and LLVM compiler toolchains. As a new mom, she has lately realized that raising an infant can be much harder and more tiring than verifying software.

Time and Place

Tuesday, February 25, 4:15pm
Gates 463A