Nicolas Kourtellis, Ph.D., Research Scientist, Telefonica Research
End-users, with their IoT and other personal computing devices, produce (big) data that is continuously collected and analyzed by millions of companies in data centers around the world. This decades-old paradigm of the data lifecycle, including the production, collection, and modeling of data, as well as data-driven business decisions, puts increasing pressure on the infrastructure currently at hand. However, recent privacy regulations in the EU, the USA and other parts of the world are forcing the tech industry, including IT platform designers and operators, as well as data-dependent companies, to renegotiate this paradigm and take user privacy into consideration at the design level.
How do we reconcile the need to preserve user privacy with allowing data-driven companies to remain relevant in the future and continue extracting value from these data?
In this talk, I will focus on this seemingly unsolvable conundrum of conflicting goals. I will:
- Dive into the realm of privacy-preserving machine learning (PPML)
- Report on recent advances in this area of Artificial Intelligence (AI)
- Discuss the major tradeoffs between privacy and model utility in PPML, and the potential adversarial attacks that can be mounted on such modeling.
- Cover recent, peer-reviewed and award-winning technology that I have been building with my colleagues at Telefonica Research Labs in Barcelona. This technology is based on Federated Learning (FL), Differential Privacy (DP), Trusted Execution Environments (TEE), Edge Computing (EC) and detailed system design, engineering and experimentation.
- Explain how the privacy-preserving designs of FLaaS and PPFL can mitigate such adversarial data modeling attacks without losing model utility.
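To give a flavor of the FL + DP combination the talk touches on, here is a minimal toy sketch (not the FLaaS/PPFL implementation itself, and with illustrative clip/noise parameters chosen arbitrarily): each client fits a one-parameter linear model on its own data, and only a clipped, noised weight update ever leaves the device before the server averages it.

```python
import random

def local_update(w, data, lr=0.1):
    # One gradient-descent step on a client's local data
    # for a toy 1-D linear model y = w * x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def dp_federated_average(global_w, client_datasets, clip=1.0, noise_std=0.01, seed=0):
    # Each client trains locally; only a clipped, noised weight
    # delta is sent to the server -- raw data never leaves the device.
    rng = random.Random(seed)
    deltas = []
    for data in client_datasets:
        delta = local_update(global_w, data) - global_w
        delta = max(-clip, min(clip, delta))  # bound each client's influence
        delta += rng.gauss(0, noise_std)      # Gaussian noise for DP
        deltas.append(delta)
    return global_w + sum(deltas) / len(deltas)  # server-side averaging

# Toy run: three clients whose data all follow y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)], [(0.5, 1.0), (1.5, 3.0)]]
w = 0.0
for _ in range(50):
    w = dp_federated_average(w, clients)
print(round(w, 1))  # converges close to the true slope 2.0
```

The tradeoff the talk discusses is visible even here: raising `noise_std` strengthens the privacy guarantee but makes the averaged model drift further from the true slope.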