In this talk, I will discuss the problem of privacy-preserving statistical analysis. I will start with an introduction to _differential privacy_, a key framework in this area. Then, I will present _pointwise maximal leakage_ (PML), a privacy measure that I developed during my PhD studies. PML quantifies the amount of information leaking about a secret when releasing the outcome of a randomized function calculated on the secret. I will draw connections between PML and differential privacy while also highlighting their differences. Additionally, I will discuss an application where private information is sanitized while guaranteeing privacy in the sense of PML. Finally, I will explore open questions as well as current and future research directions.
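As a rough illustration of the quantity involved (the notation below is mine, not taken from the abstract): for a discrete secret $X$ and a randomized mechanism $P_{Y|X}$ that releases an outcome $y$, pointwise maximal leakage is commonly written as

$$
\ell(X \to y) \;=\; \log \frac{\max_{x} P_{Y|X}(y \mid x)}{P_Y(y)},
$$

that is, it measures on a per-outcome basis how much observing $y$ can increase an adversary's probability of correctly guessing the secret, or any randomized function of it.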
Differential privacy (DP) is a formal model of privacy protection that has received sustained attention from the research community. This body of work has shown that it is possible to reveal accurate information about a population while rigorously protecting the privacy of its constituents. While DP offers a compelling promise, organizations that choose to adopt it as their privacy standard face a number of challenges in doing so.
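To make the promise concrete, here is a minimal sketch, not taken from the abstract, of the standard Laplace mechanism for a counting query (sensitivity 1), which satisfies ε-differential privacy; the function and parameter names are illustrative.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=None):
    """Release a differentially private count of records satisfying `predicate`.

    Adding Laplace noise with scale sensitivity/epsilon (sensitivity = 1 for a
    counting query) ensures that adding or removing a single record changes the
    output distribution by at most a factor of exp(epsilon).
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for record in data if predicate(record))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: a private count of ages over 40 in a toy dataset.
ages = [23, 45, 31, 52, 64, 29, 41]
print(laplace_count(ages, lambda a: a > 40, epsilon=0.5))
```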
We present our work on the suitability of metaphors for helping data subjects make informed decisions about sharing their data with differential privacy (DP) systems, and we discuss open research challenges.
Data privacy is an increasingly important aspect of data analysis. Historically, a plethora of privacy techniques have been introduced to protect data, but few have stood the test of time. From investigating the overlap between big data research and security and privacy research, I have found that _differential privacy_ stands out as a promising defender of data privacy.