talk – Chalmers Security Seminar

Rahul Chatterjee: Perils of Breached Passwords and How to Protect from Them

Billions of passwords have been leaked in data breaches over the last few years and are now available online. Because users frequently reuse passwords, attacks that exploit these leaked passwords are the most imminent threat to account security today. In this talk, I will discuss how damaging these attacks can be and what we can do to protect users. I will show that it is possible to guess 50% of users' passwords in fewer than a thousand guesses given one of their other passwords. I will then present two new password breach alerting services (PBAS) built to privately check whether user passwords are vulnerable to such attacks and to take action to protect the affected accounts. These services, however, bring forth new threats and, if not designed properly, could hurt users' account security instead of helping. We will go over some deployment conundrums of PBAS and how to navigate them.
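
To make the privacy goal of a PBAS concrete, here is a minimal sketch of one well-known private-lookup pattern, a k-anonymity-style bucket query in which the client reveals only a short hash prefix to the server and matches the full hash locally. It is a generic illustration under assumed parameters (hash function, prefix length, an in-memory "server"), not the specific protocols discussed in the talk:

    import hashlib

    PREFIX_LEN = 5  # hex characters revealed to the server (an assumed parameter)

    def hash_password(password):
        # SHA-1 hex digest; the hash choice is illustrative, not prescriptive
        return hashlib.sha1(password.encode("utf-8")).hexdigest().upper()

    def client_query(password):
        # Split the hash into a short prefix (sent) and a secret suffix (kept local)
        digest = hash_password(password)
        return digest[:PREFIX_LEN], digest[PREFIX_LEN:]

    def server_bucket(prefix, breached_hashes):
        # Server side: return the suffixes of all breached hashes sharing the prefix
        return {h[PREFIX_LEN:] for h in breached_hashes if h.startswith(prefix)}

    def is_breached(password, breached_hashes):
        prefix, suffix = client_query(password)
        return suffix in server_bucket(prefix, breached_hashes)

    # Toy "breach corpus"; the server only ever learns the 5-character prefix.
    leaked = {hash_password("hunter2"), hash_password("password1")}
    print(is_breached("hunter2", leaked))        # True
    print(is_breached("correct horse", leaked))  # False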


talk – Chalmers Security Seminar

Andrei Sabelfeld: Practical Data Access Minimization in Trigger-Action Platforms

Trigger-Action Platforms (TAPs) connect disparate online services and enable users to create automation rules in diverse domains such as smart homes and business productivity. Unfortunately, the current TAP design is flawed from a privacy perspective, since the platform has unfettered access to sensitive user data. We point out that TAPs suffer from two types of overprivilege: (1) attribute-level, where the TAP has access to more data attributes than it needs to run user-created rules; and (2) token-level, where it has access to more APIs than it needs. To mitigate overprivilege and the resulting privacy concerns, we design and implement minTAP, a practical approach to data access minimization in TAPs. Our key insight is that the semantics of a user-created automation rule implicitly specifies the minimal amount of data it needs. This allows minTAP to leverage language-based data minimization and apply the principle of least privilege by releasing only the necessary attributes of user data to the TAP. Using real user-created rules on the popular IFTTT TAP, we demonstrate that minTAP on average sanitizes 3.7 sensitive data attributes per rule, with tolerable performance overhead and without modifying IFTTT. Joint work with Yunang Chen, Mohannad Alhanahnah, Rahul Chatterjee, and Earlence Fernandes, to appear at USENIX Security 2022.
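
The key insight, that the rule itself tells you which attributes it needs, can be pictured as a simple filter applied to trigger data before it reaches the TAP. The sketch below uses an assumed IFTTT-style template syntax and made-up attribute names; it illustrates the idea of attribute-level minimization, not the actual minTAP implementation:

    def referenced_attributes(rule_template, payload):
        # Attributes that the rule's template actually mentions, e.g. "{{Subject}}"
        return {attr for attr in payload if "{{" + attr + "}}" in rule_template}

    def minimize(payload, rule_template):
        # Release only the attributes the rule needs; drop everything else
        needed = referenced_attributes(rule_template, payload)
        return {attr: value for attr, value in payload.items() if attr in needed}

    # A hypothetical email trigger with four attributes, of which the rule uses one.
    trigger_payload = {
        "Subject": "Quarterly report",
        "Body": "full email text ...",
        "Sender": "alice@example.com",
        "ReceivedAt": "2022-05-30T10:00:00Z",
    }
    rule = "Append {{Subject}} to my spreadsheet"
    print(minimize(trigger_payload, rule))  # {'Subject': 'Quarterly report'}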


talk – Chalmers Security Seminar

Seyed Mohammad Mehdi Ahmadpanah's Licentiate: Securing Software in the Presence of Third-Party Modules

Modular programming is a key concept in software development where the program consists of code modules that are designed and implemented independently. This approach accelerates the development process and enhances the scalability of the final product. Modules, however, are often written by third parties, aggravating security concerns such as stealing confidential information, tampering with sensitive data, and executing malicious code. Trigger-Action Platforms (TAPs) are concrete examples of modular programming in practice: any user can develop TAP applications by connecting trigger and action services and publish them on public repositories. In the presence of malicious application makers, users cannot trust applications written by third parties, which can threaten users' and the platform's security. We present SandTrap, a novel runtime monitor for JavaScript that can be used to securely integrate third-party applications. SandTrap enforces fine-grained access control policies at the levels of module, API, value, and context. We instantiate SandTrap for IFTTT, Zapier, and Node-RED, three popular JavaScript-driven TAPs, and illustrate how it enforces various policies on a set of benchmarks while incurring a tolerable runtime overhead. We also prove soundness and transparency of the monitoring framework for an essential model of Node-RED. Furthermore, nontransitive policies have recently been introduced as a natural fit for coarse-grained information-flow control, where labels are specified at the level of modules. The flow relation need not be transitive, resulting in nonstandard notions of noninterference and enforcement. We develop a lattice encoding to prove that nontransitive policies can be reduced to classical transitive policies. We also devise a lightweight program transformation that leverages standard flow-sensitive information-flow analyses to enforce nontransitive policies more permissively.
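
As a rough intuition for what module- and API-level policies look like, here is a minimal Python sketch of a reference monitor that mediates a third-party module's API calls against an explicit allowlist. The module names, API names, and policy format are assumptions for illustration; SandTrap itself is a JavaScript monitor that additionally supports value- and context-level policies:

    class PolicyViolation(Exception):
        pass

    class Monitor:
        def __init__(self, policy):
            # policy maps a module name to the set of host APIs it may call
            self.policy = policy

        def call(self, module, api, func, *args, **kwargs):
            # Mediate the call: allow it only if the per-module policy grants the API
            if api not in self.policy.get(module, set()):
                raise PolicyViolation(f"module {module!r} may not call {api!r}")
            return func(*args, **kwargs)

    # A hypothetical policy: the third-party app may read a sensor and nothing else.
    monitor = Monitor({"weather-app": {"read_sensor"}})

    def read_sensor():
        return 21.5

    def http_post(url, data):
        return "sent"

    print(monitor.call("weather-app", "read_sensor", read_sensor))  # 21.5
    # monitor.call("weather-app", "http_post", http_post, "http://example.com", {})
    # -> raises PolicyViolation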


talk – Chalmers Security Seminar

Simone Fischer-Hübner's Honorary Doctorate Lecture: Challenges of User-centric Privacy Enhancing Technologies

The GDPR promotes the principle of Privacy by Design and Default, acknowledging that the individual's privacy is best protected if privacy law is complemented by privacy enhancing technologies (PETs). While technically advanced PETs have been researched and developed over the last four decades, challenges remain in making PETs and their configurations usable. In particular, PETs are often based on "crypto-magic" operations that are counterintuitive and for which no real-world analogies can easily be found. This presentation discusses human-computer interaction challenges, end-user perceptions, and requirements for the design and configuration of PETs in compliance with the GDPR, which we explored in recent European research projects. It also discusses cultural privacy aspects that impact users' preferences and trust in PETs, and shows that especially users with technical knowledge may encounter challenges in understanding and trusting the protection claims of PETs. It concludes that, for this reason, PET user interfaces should not only build on real-world analogies but also cater for digital-world analogies that may impact users' understanding of PETs.


talk – Chalmers Security Seminar

Boel Nelson's PhD Defense: Differential Privacy — A Balancing Act

Data privacy is an increasingly important aspect of data analysis. Historically, a plethora of privacy techniques have been introduced to protect data, but few have stood the test of time. From investigating the overlap between big data research and security and privacy research, I have found that _differential privacy_ presents itself as a promising defender of data privacy. Differential privacy is a rigorous, mathematical notion of privacy. Nevertheless, privacy comes at a cost: in order to achieve differential privacy, we need to introduce some form of inaccuracy (i.e. error) into our analyses. Hence, practitioners need to engage in a _balancing act_ between accuracy and privacy when adopting differential privacy. As a consequence, understanding this accuracy/privacy trade-off is vital to being able to use differential privacy in real data analyses. In this thesis, I aim to bridge the gap between differential privacy in theory and differential privacy in practice. Most notably, I aim to convey a better understanding of the accuracy/privacy trade-off by 1) implementing tools to tweak accuracy/privacy in a real use case, 2) presenting a methodology for empirically predicting error, and 3) systematizing and analyzing known accuracy improvement techniques for differentially private algorithms. Additionally, I put differential privacy into context by investigating how it can be applied in the automotive domain. Using the automotive domain as an example, I introduce the main challenges that constitute the balancing act and provide advice for moving forward.
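
The accuracy/privacy balancing act can already be seen in the textbook Laplace mechanism: a counting query is answered with noise of scale sensitivity/ε, so a smaller ε (stronger privacy) directly means larger expected error. The snippet below is a generic illustration of that trade-off, not material from the thesis:

    import numpy as np

    def laplace_count(true_count, epsilon, sensitivity=1.0):
        # Release a counting-query answer with Laplace noise of scale sensitivity/epsilon
        return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

    true_count = 1000
    for eps in (0.1, 1.0, 10.0):
        errors = [abs(laplace_count(true_count, eps) - true_count) for _ in range(10000)]
        # The expected absolute error equals the noise scale, i.e. sensitivity / epsilon
        print(f"epsilon={eps:>4}: mean absolute error ~ {sum(errors) / len(errors):.2f}")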
