Oblivious Multi-Party Machine Learning on Trusted Processors
Privacy-preserving multi-party machine learning allows multiple organizations to perform collaborative data analytics while guaranteeing the privacy of their individual datasets. Using trusted SGX processors for this task yields high performance, but requires careful selection, adaptation, and implementation of machine-learning algorithms to provably prevent the exploitation of side channels induced by data-dependent access patterns.
In this talk, I will present our data-oblivious counterparts of several machine learning algorithms, including support vector machines, matrix factorization, neural networks, and decision trees. These algorithms are designed to access memory without revealing secret information about their input. We use algorithmic techniques as well as platform-specific hardware features to ensure that only public information, such as dataset size, is revealed.
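To give a flavor of what "data-oblivious" means here, the sketch below shows one standard technique for hiding a secret array index: a linear scan with a branch-free select, so the sequence of memory locations touched is independent of the secret. This is an illustrative example of the general idea only, not the authors' implementation; a production version would use constant-time CMOV-style primitives rather than Python arithmetic.

```python
def oblivious_read(arr, secret_index):
    """Read arr[secret_index] without an index-dependent access pattern.

    Every element is touched exactly once, and selection is done with
    arithmetic instead of a data-dependent branch, so the memory trace
    reveals only the (public) array length, not the secret index.
    """
    result = 0
    for i, value in enumerate(arr):
        match = int(i == secret_index)   # 1 for the target slot, 0 otherwise
        result += match * value          # branch-free select
    return result
```

The cost is a full scan per access, which is why oblivious algorithms must be designed carefully rather than obtained by mechanically rewriting every memory access.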
I will show that our efficient implementation on Intel Skylake processors scales up to large, realistic datasets, with overheads several orders of magnitude lower than with previous approaches based on advanced cryptographic multi-party computation schemes.
Joint work with Felix Schuster, Cédric Fournet, Sebastian Nowozin, Kapil Vaswani and Manuel Costa from MSR Cambridge and Aastha Mehta from MPI-SWS.
Olya Ohrimenko is a researcher in the Constructive Security Group at Microsoft Research, Cambridge, and a research fellow at Darwin College, University of Cambridge. Her research interests include privacy, integrity, and security issues that emerge in the cloud computing environment. Olya received her Ph.D. degree from Brown University in 2013 and a B.CS. (Hons) degree from The University of Melbourne in 2007.