On Extractability (aka Differing-Inputs) Obfuscation

Elette Boyle

Abstract:

Program obfuscation serves to "scramble" a computer program, hiding its implementation details while preserving its functionality. Unfortunately, the "dream" notion of security, which guarantees that obfuscated code reveals no information beyond black-box access to the original program, has run into strong impossibility results and is known to be unachievable for general programs.

Recently, the first plausible candidate for general-purpose obfuscation was presented (Garg et al., FOCS 2013) for a relaxed notion of security, known as {\em indistinguishability} obfuscation, which requires only that obfuscations of functionally equivalent programs be indistinguishable.
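
Slightly more formally, a sketch of this requirement (eliding the security parameter \lambda, padding, and other conventions that vary across formalizations): for every pair of functionally equivalent, equal-size programs C_0, C_1 and every efficient distinguisher D,
\[
\left| \Pr[D(i\mathcal{O}(C_0)) = 1] - \Pr[D(i\mathcal{O}(C_1)) = 1] \right| \le \mathrm{negl}(\lambda).
\]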

We initiate the study of the stronger notion of {\em extractability}, or "differing-inputs," obfuscation: an extractability obfuscator for a class of algorithms M guarantees that if an efficient attacker A can distinguish between obfuscations of two algorithms M_1, M_2 in M, then A can efficiently recover (given M_1 and M_2) an input on which M_1 and M_2 produce different outputs (Barak et al., JACM 2012).
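
Sketched slightly more formally (the quantifier ordering and the treatment of auxiliary input vary across formalizations, including the differing-inputs definition of Barak et al.): for every efficient adversary A there is an efficient extractor E such that, for all equal-size M_1, M_2 in M, if
\[
\left| \Pr[A(e\mathcal{O}(M_1)) = 1] - \Pr[A(e\mathcal{O}(M_2)) = 1] \right|
\]
is non-negligible, then so is
\[
\Pr\big[ x \leftarrow E(M_1, M_2) : M_1(x) \ne M_2(x) \big].
\]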

We demonstrate that extractability obfuscation provides several new applications, including obfuscation of Turing machines, indistinguishability-secure functional encryption for an unbounded number of key queries and unbounded message spaces, and a new notion of {\em functional witness encryption}.

We also explore the relationship between extractability obfuscation and other cryptographic notions. We show that in special cases, extractability obfuscation is in fact implied by indistinguishability obfuscation. On the other hand, we demonstrate a standoff between certain cryptographic extractability assumptions when security is required even against {\em fixed distributions} of auxiliary input. In particular, we exhibit efficiently computable distributions of auxiliary input Z, Z' such that, assuming collision-resistant hash functions, either (1) extractability obfuscation for NC^1 w.r.t. Z does not exist, or (2) Succinct Non-Interactive Arguments of Knowledge (SNARKs) w.r.t. Z' do not exist.

Based on joint works with Kai-Min Chung and Rafael Pass.

Time and Place

Tuesday, February 18, 4:15pm
Gates 463A