

Invited Talk in Workshop: Workshop on Distributed and Private Machine Learning

A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via f-Divergences

Lalitha Sankar


Abstract:

Differential privacy (DP) has come to be accepted as a strong definition and approach for designing privacy mechanisms and assuring privacy guarantees. In practice, variants of differential privacy such as (ε, δ)-DP and Rényi DP are used for better utility. In machine learning applications where differentially private noise is added at each iteration of stochastic gradient descent (SGD), for a fixed choice of overall DP parameters, the privacy guarantee deteriorates with each iteration. In this talk, we present a novel way of using information-theoretic methods to tighten the conversion between (ε, δ)-DP and Rényi DP, where the latter is used in the intermediate steps to add noise to SGD iterations. Compared to the state of the art, we show that our bounds can permit 100 or more additional SGD iterations for training deep learning models under the same privacy budget.
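For context, the following is a minimal sketch of the standard Rényi-DP accounting pipeline that this line of work tightens, assuming the Gaussian mechanism with unit L2 sensitivity and ignoring subsampling amplification; the noise multiplier, step count, delta, and order grid are illustrative choices, and the final step uses the standard Rényi-to-(ε, δ) conversion rather than the improved f-divergence-based conversion presented in the talk.

import numpy as np

def gaussian_rdp(alpha, sigma):
    # Rényi DP of the Gaussian mechanism with unit sensitivity at order alpha:
    # epsilon(alpha) = alpha / (2 * sigma^2).
    return alpha / (2.0 * sigma ** 2)

def rdp_to_dp(rdp_eps, alpha, delta):
    # Standard conversion: (alpha, eps)-RDP implies
    # (eps + log(1/delta) / (alpha - 1), delta)-DP.
    return rdp_eps + np.log(1.0 / delta) / (alpha - 1.0)

def dp_sgd_epsilon(sigma, num_steps, delta, orders=np.arange(2, 128)):
    # RDP composes additively over SGD iterations; minimize the converted
    # (eps, delta) guarantee over a grid of candidate Rényi orders.
    eps_per_order = [rdp_to_dp(num_steps * gaussian_rdp(a, sigma), a, delta)
                     for a in orders]
    return min(eps_per_order)

# Illustrative example: 10,000 noisy SGD steps with noise multiplier 2.0, delta = 1e-5.
print(dp_sgd_epsilon(sigma=2.0, num_steps=10_000, delta=1e-5))

Since the per-iteration Rényi guarantees compose additively, any tightening of the final conversion step lowers the reported ε for a fixed number of iterations, or equivalently buys more iterations under a fixed privacy budget.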

Bio: Lalitha Sankar is an Associate Professor in the School of Electrical, Computer, and Energy Engineering at Arizona State University. She received her doctorate from Rutgers University, her master's from the University of Maryland, and her bachelor's degree from the Indian Institute of Technology Bombay. Her research is at the intersection of information theory and learning theory and their applications to identifying meaningful metrics for information privacy and algorithmic fairness. She received the NSF CAREER award in 2014 and currently leads an NSF- and Google-funded effort on using learning techniques to assess COVID-19 exposure risk in a secure and privacy-preserving manner.