News Archive

Chalmers Security Seminar › Adwait Nadkarni — Building Practical Security Systems for the Post-app Smart Home
Modern commodity computing platforms such as smartphones (e.g., Android and iOS) and smart home systems (e.g., SmartThings and NEST) provide programmable interfaces for third-party integration, enabling popular third-party functionality that is often manifested in applications, or apps. Thus, for the last decade, designing systems to analyze mobile apps for vulnerabilities or unwanted behavior has been a major research focus within the security community. Leveraging the lessons and techniques learned from mobile app analysis, researchers have developed similar systems to evaluate the security, safety, and privacy of smart homes by inspecting IoT apps developed for platforms such as SmartThings. However, emerging characteristics of smart home ecosystems indicate the need to move away from the approach of IoT app analysis, as IoT apps may not be representative of home automation in real homes and, moreover, may be unavailable for analysis or instrumentation in the near future. In this talk, I will describe the challenges for research against the backdrop of the unsuitability of IoT apps for practical security analysis, and motivate alternate research directions. First, I will motivate the need to develop an alternative to IoT apps that is representative of automation in the wild, in order to enable a practical artifact for building and evaluating security systems for smart homes. To this end, I will describe Helion, a system that leverages the "user-driven" nature of home automation to generate natural home automation scenarios, i.e., realistic event sequences that are closely aligned with real home automation usage in end-user homes, which are then used for several critical tasks in building and evaluating security systems. Second, I will motivate the need to improve the state of security analysis of mobile companion apps, which often form the weakest link in IoT ecosystems, by systematically and rigorously evaluating the security analyses targeted at them. To this end, I will describe how mutation testing can be leveraged for empirically evaluating static program analysis-based security systems. Our research in this direction has led to two mutation frameworks, and to the discovery of critical flaws in leading tools such as FlowDroid, CryptoGuard, Argus, and Coverity that affect the reliability and soundness of their analysis. Finally, I will conclude the talk by describing the lessons learned from our work, as well as by highlighting challenges and opportunities for future research in home automation security.

Read More ›
Chalmers Security Seminar › Nachiappan Valliappan — Retrofitting Impure Languages with Static Information-Flow Control
How can we write secure programs in a pervasively effectful language? In a “pure” language, such as Haskell, effects performed by a program are recorded explicitly in its type. Thus, a function of type `Int -> Int` is just that: a function that receives an integer and returns an integer. It does not perform side effects such as writing to or reading from a channel. In an impure language, such as ML, however, a function of type `Int -> Int` may read, write, or even order a burrito. It’s impossible to assert that a function is secure from its type alone, since it may be performing invisible side effects that may leak a secret. For this reason, standard approaches to enforcing static Information-Flow Control (IFC)—be it fine-grained or coarse-grained—are not readily applicable to impure languages since they require a complete reimplementation of the compiler and significant ingenuity from the programmer to restructure programs to conform to the new enforcement paradigm. So should we all just switch to Haskell? While I would never discourage anybody from doing that, this talk is about developing the foundations for retrofitting impure languages with static IFC at a much lower cost. In a recent result, Choudhury and Krishnaswami [1] show how purity can be recovered in an impure language by using capabilities and a special “modal” type operator. In this informal talk, I’ll show how their observations, in combination with recent advances in formulating modal types, pave the way towards the goal of this work.
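To make the contrast concrete, here is a minimal Haskell sketch (my own illustration, not code from the talk): a pure function's type rules out side effects entirely, while an effectful function must announce its effects in its type.

```haskell
-- A minimal illustration (not from the talk) of how Haskell records effects in types.
module PureVsImpure where

-- A pure function: its type alone guarantees that no side effects are performed.
double :: Int -> Int
double x = x * 2

-- An effectful function must say so in its type: the IO marker makes visible
-- exactly the kind of side effect that an ML-style Int -> Int could hide.
doubleAndLog :: Int -> IO Int
doubleAndLog x = do
  putStrLn ("doubling " ++ show x)  -- the effect is explicit in the IO type
  pure (x * 2)

main :: IO ()
main = doubleAndLog 21 >>= print . double
```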

Read More ›
Chalmers Security Seminar › Gerardo Schneider — Is Privacy by Construction Possible?
Finding suitable ways to handle personal data in conformance with the law is challenging. The European General Data Protection Regulation (GDPR), enforced since May 2018, makes it mandatory for citizens and companies to comply with the privacy requirements set in the regulation. For existing systems, the challenge is to be able to show evidence that they are already complying with the GDPR, or otherwise to work towards compliance by modifying their systems and procedures, or alternatively by reprogramming their systems in order to pass the eventual controls. For those starting new projects, the advice is to take privacy into consideration from the very beginning, already at design time. This has been known as Privacy by Design (PbD). The main question is how much privacy you can effectively achieve by using PbD, and in particular whether it is possible to achieve Privacy by Construction. In this short non-technical talk I will give my personal opinion on issues related to the ambition of achieving Privacy by Construction.

Read More ›
Chalmers Security Seminar › Deepak Garg — CoVault: Facilitating highly secure, high-stakes data analytics
The recent Covid-19 pandemic has shown that individuals’ whereabouts and social contact data can be very effective in understanding a new infectious disease in a timely manner. However, due to the privacy-sensitive nature of such data, liberal societies have hesitated to even collect such data. Part of the problem is the lack of technology for securely storing and querying such data in the presence of extremely strong adversaries (e.g., state-sponsored adversaries). This talk will present the design of CoVault, a work-in-progress system for securely storing and querying data under a very strong threat model that doesn’t place trust in any one entity or authority, and includes the complete compromise of all CPUs of a specific manufacturer, as well as many common side channel attacks. This threat model transcends prior work on database security. Technically, CoVault relies on state-of-the-art trusted execution environments, secret sharing and secure multi-party computation. While this combination is expensive in terms of computational power, we believe that this design point is worth exploring for applications like epidemic studies where security is non-negotiable, but the returns for society are extremely high.

Read More ›
Chalmers Security Seminar › Iván Arce: Buy the ticket, take the ride: 25 years in infosec
"In the 1990s the information security (infosec) community was not what it is today" is such an obvious statement that it shouldn't be used to start any abstract. However, if we further qualify and describe more precisely all the different aspects in which the 1990s and today's infosec communities differ, the way academia, industry and practitioners evolved over time, the co-evolution of security attacks & defenses, and the fundamental problems that remain unsolved, we may start to have the outline of an interesting talk. Having spent almost 30 years in the field, Ivan intends to provide his insights — opinions informed by experience — about the information security discipline and its young history: Where we are, how did we get here, and what we could look for in the future of our field. If you are interested in a career in infosec this may be a good opportunity to hear the perspective of a veteran in the field. This will not be a technical talk and it will not be narrowly focused on a specific topic but the speaker will not shy away from technical discussion.

Read More ›
Chalmers Security Seminar › Rahul Chatterjee: Perils of Breached Passwords and How to Protect from Them
Billions of passwords have been leaked due to data breaches in the last few years and are available online. As users frequently reuse passwords, attacks using these leaked passwords are the most imminent threat to account security nowadays. In this talk, I will talk about how bad these attacks can be and what we can do to protect users. I will show that it is possible to guess 50% of users' passwords in less than a thousand guesses given one of their other passwords. I will then talk about two new password breach alerting services (PBAS) built to privately check if user passwords are vulnerable to such attacks and take action to protect the user accounts. These services, however, bring forth new threats and, if not designed properly, could hurt users' account security instead of helping. We will go over some deployment conundrums of PBAS and how to navigate them.

Read More ›
Chalmers Security Seminar › Andrei Sabelfeld: Practical Data Access Minimization in Trigger-Action Platforms
Trigger-Action Platforms (TAPs) connect disparate online services and enable users to create automation rules in diverse domains such as smart homes and business productivity. Unfortunately, the current TAP design is flawed from a privacy perspective, since it has unfettered access to sensitive user data. We point out that TAPs suffer from two types of overprivilege: (1) attribute-level, where it has access to more data attributes than it needs for running user-created rules; and (2) token-level, where it has access to more APIs than it needs. To mitigate overprivilege and subsequent privacy concerns we design and implement minTAP, a practical approach to data access minimization in TAPs. Our key insight is that the semantics of a user-created automation rule implicitly specifies the minimal amount of data it needs. This allows minTAP to leverage language-based data minimization to apply the principle of least-privilege by releasing only the necessary attributes of user data to the TAP. Using real user-created rules on the popular IFTTT TAP, we demonstrate that minTAP on average sanitizes 3.7 sensitive data attributes per rule, with tolerable performance overhead and without modifying IFTTT. Joint work with Yunang Chen, Mohannad Alhanahnah, Rahul Chatterjee, and Earlence Fernandes, to appear in USENIX Security 2022.
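As a rough illustration of attribute-level minimization (a hypothetical sketch of my own, not minTAP's actual code or data model), the Haskell snippet below releases only the trigger attributes that a rule's semantics require:

```haskell
-- A hypothetical sketch of attribute-level data minimization: release to the
-- TAP only the attributes that a user-created rule actually mentions.

import qualified Data.Map.Strict as M
import Data.Map.Strict (Map)

type Attr = String
type TriggerData = Map Attr String

-- Hypothetical rule representation: the attributes its action references,
-- as derived from the rule's semantics.
data Rule = Rule { ruleName :: String, usedAttrs :: [Attr] }

-- Keep only the attributes the rule needs; everything else never reaches the TAP.
minimize :: Rule -> TriggerData -> TriggerData
minimize rule = M.filterWithKey (\k _ -> k `elem` usedAttrs rule)

main :: IO ()
main = do
  let trigger = M.fromList
        [ ("subject", "Quarterly report")
        , ("sender", "alice@example.com")
        , ("body", "...sensitive content...")
        , ("receivedAt", "2022-05-01T10:00Z") ]
      rule = Rule "Notify me about new email subjects" ["subject", "receivedAt"]
  print (minimize rule trigger)  -- only subject and receivedAt are released
```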

Read More ›
Chalmers Security Seminar › Seyed Mohammad Mehdi Ahmadpanah's Licentiate: Securing Software in the Presence of Third-Party Modules
Modular programming is a key concept in software development where the program consists of code modules that are designed and implemented independently. This approach accelerates the development process and enhances the scalability of the final product. Modules, however, are often written by third parties, aggravating security concerns such as stealing confidential information, tampering with sensitive data, and executing malicious code. Trigger-Action Platforms (TAPs) are concrete examples of employing modular programming. Any user can develop TAP applications by connecting trigger and action services, and publish them on public repositories. In the presence of malicious application makers, users cannot trust applications written by third parties, which can threaten users' and the platform's security. We present SandTrap, a novel runtime monitor for JavaScript that can be used to securely integrate third-party applications. SandTrap enforces fine-grained access control policies at the levels of module, API, value, and context. We instantiate SandTrap to IFTTT, Zapier, and Node-RED, three popular JavaScript-driven TAPs, and illustrate how it enforces various policies on a set of benchmarks while incurring a tolerable runtime overhead. We also prove soundness and transparency of the monitoring framework on an essential model of Node-RED. Furthermore, nontransitive policies have recently been introduced as a natural fit for coarse-grained information-flow control where labels are specified at the level of modules. The flow relation does not need to be transitive, resulting in nonstandard notions of noninterference and enforcement. We develop a lattice encoding to prove that nontransitive policies can be reduced to classical transitive policies. We also devise a lightweight program transformation that leverages standard flow-sensitive information-flow analyses to enforce nontransitive policies more permissively.

Read More ›
Chalmers Security Seminar › Simone Fischer-Hübner's Honorary Doctorate Lecture: Challenges of User-centric Privacy Enhancing Technologies
The GDPR promotes the principle of Privacy by Design and Default, acknowledging that the individual's privacy is best protected if privacy law is complemented by privacy enhancing technologies (PETs). While technically advanced PETs have been researched and developed over the last four decades, challenges remain in making PETs and their configurations usable. In particular, PETs are often based on "crypto-magic" operations that are counterintuitive and for which no real-world analogies can easily be found. This presentation addresses human computer interaction challenges, end user perceptions and requirements for the design and configurations of PETs in compliance with the GDPR that we explored in recent European research projects. The presentation discusses cultural privacy aspects impacting users' preferences and trust in PETs, and shows that especially users with technical knowledge may encounter challenges in understanding and trusting the protection claims of PETs. It concludes that, for this reason, PET user interfaces should not only build on real-world analogies but also need to cater for digital-world analogies that may impact users' understanding of PETs.

Read More ›
Chalmers Security Seminar › Boel Nelson's PhD Defense: Differential Privacy — A Balancing Act
Data privacy is an ever more important aspect of data analyses. Historically, a plethora of privacy techniques have been introduced to protect data, but few have stood the test of time. From investigating the overlap between big data research, and security and privacy research, I have found that _differential privacy_ presents itself as a promising defender of data privacy. Differential privacy is a rigorous, mathematical notion of privacy. Nevertheless, privacy comes at a cost. In order to achieve differential privacy, we need to introduce some form of inaccuracy (i.e. error) into our analyses. Hence, practitioners need to engage in a _balancing act_ between accuracy and privacy when adopting differential privacy. As a consequence, understanding this accuracy/privacy trade-off is vital to being able to use differential privacy in real data analyses. In this thesis, I aim to bridge the gap between differential privacy in theory and differential privacy in practice. Most notably, I aim to convey a better understanding of the accuracy/privacy trade-off, by 1) implementing tools to tweak accuracy/privacy in a real use case, 2) presenting a methodology for empirically predicting error, and 3) systematizing and analyzing known accuracy improvement techniques for differentially private algorithms. Additionally, I put differential privacy into context by investigating how it can be applied in the automotive domain. Using the automotive domain as an example, I introduce the main challenges that constitute the balancing act, and provide advice for moving forward.
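For readers unfamiliar with how that error is introduced, the Haskell sketch below shows the textbook Laplace mechanism (standard differential privacy material, not code from the thesis; it assumes the `random` package): a smaller epsilon gives stronger privacy but a noisier answer.

```haskell
-- Textbook Laplace mechanism: the noise scale is sensitivity/epsilon, so the
-- accuracy/privacy balancing act is visible directly in the noise added.

import System.Random (randomRIO)  -- from the `random` package

-- Sample Laplace(0, b) noise via inverse transform sampling.
laplace :: Double -> IO Double
laplace b = do
  u <- randomRIO (-0.5, 0.5)
  pure (negate (b * signum u * log (1 - 2 * abs u)))

-- Release a count with epsilon-differential privacy. A counting query has
-- sensitivity 1, so the Laplace noise scale is 1/epsilon.
noisyCount :: Double -> Int -> IO Double
noisyCount epsilon trueCount = (fromIntegral trueCount +) <$> laplace (1 / epsilon)

main :: IO ()
main = do
  strong <- noisyCount 0.1 1000  -- strong privacy (small epsilon): noisy answer
  weak   <- noisyCount 2.0 1000  -- weaker privacy (large epsilon): close to 1000
  mapM_ print [strong, weak]
```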

Read More ›
Chalmers Security Seminar › High-Assurance Cryptography Software in the Spectre Era
High-assurance cryptography leverages methods from program verification and cryptography engineering to deliver efficient cryptographic software with machine-checked proofs of memory safety, functional correctness, provable security, and absence of timing leaks. Traditionally, these guarantees are established under a sequential execution semantics. However, this semantics is not aligned with the behavior of modern processors that make use of speculative execution to improve performance. This mismatch, combined with the high-profile Spectre-style attacks that exploit speculative execution, naturally casts doubts on the robustness of high-assurance cryptography guarantees. In this paper, we dispel these doubts by showing that the benefits of high-assurance cryptography extend to speculative execution, costing only a modest performance overhead. We build atop the Jasmin verification framework an end-to-end approach for proving properties of cryptographic software under speculative execution, and validate our approach experimentally with efficient, functionally correct assembly implementations of ChaCha20 and Poly1305, which are secure against both traditional timing and speculative execution attacks.

Read More ›
Chalmers Security Seminar › A different perspective on libraries for information-flow control
There is a long line of research on how to control information flow in pure programming languages. In Haskell, for instance, the MAC library [Russo 2015] provides IFC primitives that allow programmers to write (statically) secure programs. MAC enforces security by controlling the interaction of an indexed monad for print effects, MAC, and a type for labeling data, Labeled. In this talk, I will present a different point in the design space of IFC libraries which in some sense refines MAC. The starting point will be a pure language where the monad for print effects—think of IO in Haskell restricted to print—keeps track of the output channels within the type. Looking at MAC in this setting, we see that one can safely embed effectful computations into the MAC monad. It appears that in this extended setting the MAC monad is redundant, in the sense that we can express its interface in terms of indexed IO and Labeled. Arguably, this refinement yields a library which is conceptually cleaner, more compositional, and allows more programs to typecheck. This talk is based on a paper that Alejandro Russo and I are currently preparing for submission.
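To give a flavour of the design space (a heavily simplified sketch of my own, not the library under preparation), the snippet below indexes an IO-like print monad by a label and pairs it with a Labeled type; the types alone keep High data away from the Low channel.

```haskell
-- A simplified sketch of the two ingredients mentioned in the abstract: an
-- effect monad indexed by a label and a Labeled wrapper for data.
module IndexedIO where

data Low  = Low
data High = High

-- An IO-like monad indexed by a label: think of IO restricted to print,
-- with the output channel recorded in the type.
newtype LIO l a = LIO { runLIO :: IO a }

instance Functor (LIO l) where
  fmap f (LIO io) = LIO (fmap f io)
instance Applicative (LIO l) where
  pure = LIO . pure
  LIO f <*> LIO x = LIO (f <*> x)
instance Monad (LIO l) where
  LIO x >>= k = LIO (x >>= runLIO . k)

-- Data tagged with the label of its source.
newtype Labeled l a = Labeled a

-- Only a computation at label l may inspect Labeled l data.
unlabel :: Labeled l a -> LIO l a
unlabel (Labeled a) = pure a

-- Printing is the only effect, and its type names the channel it writes to.
printLow :: String -> LIO Low ()
printLow = LIO . putStrLn

-- This typechecks: Low data used in a Low computation.
ok :: Labeled Low Int -> LIO Low ()
ok d = unlabel d >>= printLow . show

-- By contrast, `unlabel (h :: Labeled High Int) >>= printLow . show` would not
-- typecheck, since unlabel fixes the computation's index to High.
```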

Read More ›
Chalmers Security Seminar › Fuzz Testing Automotive Systems - Process and Practice
This presentation provides an introduction to fuzz testing of automotive systems with a focus on both process and practical topics. We first discuss the typical automotive development process to better understand where the fuzz testing activity fits into the overall process. We also discuss common practical pitfalls and challenges when performing fuzz testing of automotive systems. Based on this understanding, a few examples of how fuzz testing can be performed in practice, including how to build a fuzz testing environment for automotive systems, are explained. Last, we also explore how to perform continuous fuzz testing using automated tools and virtual ECUs as part of a CI/CD pipeline.

Read More ›
Chalmers Security Seminar › Can we enforce GDPR principles via information flow control?
In this talk, I will present some work in progress on using IFC principles for enforcing GDPR-style privacy principles. Privacy legislation such as the GDPR specifies legal requirements for protecting the private data of individuals but remains vague about how to implement such requirements in practice. Traditional security mechanisms such as cryptography or access control are blunt instruments for this job since they typically cannot distinguish between intended and inappropriate usage of private data. To complement them, I propose a programming language-based framework that uses IFC mechanisms to enforce privacy principles such as purpose limitation and data minimization. I will start by illustrating how these principles are ultimately about information flow and how they can be integrated in existing IFC frameworks. I’ll then sketch a simple type system for tracking secure and private information flow.
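As a toy illustration of what purpose limitation could look like at the type level (my own sketch, not the type system from the talk), the following Haskell snippet tags personal data with the purpose for which it was collected:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}
-- A toy sketch of purpose limitation as a phantom type: personal data carries
-- the purpose it was collected for, and code can only consume it for that purpose.
module Purpose where

data Purpose = Billing | Marketing

-- Personal data tagged with its declared purpose.
newtype Personal (p :: Purpose) a = Personal a

collectEmail :: String -> Personal 'Billing String
collectEmail = Personal

-- A billing function may use data collected for billing...
sendInvoice :: Personal 'Billing String -> IO ()
sendInvoice (Personal addr) = putStrLn ("invoice sent to " ++ addr)

-- ...but `sendNewsletter (collectEmail "a@b.se")` would be a type error,
-- because its argument would need the 'Marketing purpose.
sendNewsletter :: Personal 'Marketing String -> IO ()
sendNewsletter (Personal addr) = putStrLn ("newsletter sent to " ++ addr)

main :: IO ()
main = sendInvoice (collectEmail "a@b.se")
```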

Read More ›
Chalmers Security Seminar › A Quantale of Information
Information flow properties are the semantic cornerstone of a wide range of program transformations, program analyses, and security properties. The variety of information that can be transmitted from inputs to outputs in a deterministic system can be elegantly and very generally captured by representing information as equivalence relations over the sets of possible values, using an equivalence relation on the input domain to model what may be learned, and an equivalence relation on the output to model what may be observed. The set of equivalence relations over a given set of values forms a lattice, where the partial order models containment of information, and lattice join models the effect of combining information. This elegant and general structure is sometimes referred to as the lattice of information (Landauer & Redmond CSFW'93). In this work we identify an abstraction of information flow which has not been studied previously, namely disjunctive dependency. We argue that this is both interesting in its own right, providing for example an information flow based semantic model of Chinese-wall policies, and potentially provides increased precision in the application of dependency analysis to the computation of quantitative properties. We achieve this via a generalization of the lattice of information to a quantale, a lattice equipped with a tensor operation, where the lattice join corresponds to the disjunctive combination of information and tensor corresponds to conjunctive combination.
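To make the lattice-of-information view concrete (a standard textbook-style example, not code from the paper), the Haskell snippet below models what an observer learns as the equivalence relation induced by an observation function, and shows that combining two observations yields a strictly finer relation:

```haskell
-- Information about a finite input domain modelled as an equivalence relation,
-- with combination of knowledge as the join (common refinement).

import Data.List (nubBy)

-- An observation of the input: two inputs are equivalent (indistinguishable)
-- iff the observation maps them to the same value.
type Obs a b = a -> b

related :: Eq b => Obs a b -> a -> a -> Bool
related f x y = f x == f y

-- Combining two observations: x and y are distinguishable iff either
-- observation distinguishes them.
joinObs :: Obs a b -> Obs a c -> Obs a (b, c)
joinObs f g x = (f x, g x)

-- Example: inputs 0..7; one observer learns parity, the other learns x >= 4.
main :: IO ()
main = do
  let dom = [0 .. 7] :: [Int]
      parity  x = x `mod` 2
      topHalf x = x >= 4
      classes f = nubBy (related f) dom              -- one representative per class
  print (length (classes parity))                    -- 2 equivalence classes
  print (length (classes topHalf))                   -- 2 equivalence classes
  print (length (classes (joinObs parity topHalf)))  -- 4 classes: the join is finer
```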

Read More ›
Chalmers Security Seminar › On the Evolution of IT Security
Chalmers Security Seminar › SoK: Chasing Accuracy and Privacy, and Catching Both in Differentially Private Histogram Publication
Chalmers Security Seminar › Towards new fuzzing frontiers: exploring the boundaries of testing
Chalmers Security Seminar › Practical secure compilation using WebAssembly
Chalmers Security Seminar › Liquid Information Flow Control
Chalmers Security Seminar › w0RLd w1dE W3b - The dangers of web security inconsistencies
Chalmers Security Seminar › Let's not make a fuzz about it
Join us to hear more about this work-in-progress!

Read More ›
Chalmers Security Seminar › HMAC and 'Secure Preferences': Revisiting Chromium-Based Browsers Security
How secure is your browser?

Read More ›
Chalmers Security Seminar › Security Assurance Cases for Road Vehicles: an Industry Perspective
How can security be assured in safety-critical domains?

Read More ›
Chalmers Security Seminar › An Overview of Vehicular Security
Want to know more about the current and future challenges in vehicular security? Join us!

Read More ›
Chalmers Security Seminar › Decentralized Action Integrity for Trigger-Action Platforms
Securing OAuth tokens through security principles

Read More ›
Chalmers Security Seminar › When Good Components Go Bad: Formally Secure Compilation Despite Dynamic Compromise
We propose a new formal criterion for evaluating secure compartmentalization schemes for unsafe languages like C and C++, expressing end-to-end security guarantees for software components that may become compromised after encountering undefined behavior — for example, by accessing an array out of bounds.

Read More ›
Chalmers Security Seminar › Risk Analysis of Privacy Policies
In this talk, I present an approach to enhance informed consent for the processing of personal data. The approach relies on a privacy policy language used to express, compare and analyze privacy policies.

Read More ›
Chalmers Security Seminar › The Simplest Multi-key Linearly Homomorphic Signature Scheme
We consider the problem of outsourcing computation on data authenticated by different users. Our aim is to describe and implement the simplest possible solution to provide data integrity in cloud-based scenarios.

Read More ›
Chalmers Security Seminar › The Rush Dilemma: Attacking and Repairing Smart Contracts on Forking Blockchains
We investigate the security of smart contracts within a blockchain that can fork (such as Bitcoin and Ethereum). In particular, we focus on multi-party computation (MPC) protocols run on-chain with the aid of smart contracts, and observe that honest players face the following dilemma: Should I rush to send the protocol's messages based on the current view of the blockchain, or rather wait until a message is confirmed on the chain before sending the next one?

Read More ›
Chalmers Security Seminar › SAID: Reshaping Signal into an Identity-Based Asynchronous Messaging Protocol with Authenticated Ratcheting
As messaging applications are becoming increasingly popular, it is of utmost importance to analyze their security and mitigate existing weaknesses. This paper focuses on one of the most acclaimed messaging applications: Signal.

Read More ›
Chalmers Security Seminar › Trusted Execution Environments for Privacy-preserving Cloud Applications
An overview of popular trusted execution environments (TEEs), with special emphasis on Intel's SGX.

Read More ›
Chalmers Security Seminar › CLIO: Cryptographically Secure Information Flow Control on Key-Value Stores
Cryptography can in principle be used to protect users' data when stored or transmitted, but in practice its use is error-prone and can result in violations of users' security expectations.

Read More ›
Chalmers Security Seminar › Historical Analyses of the Client-Side Web Security and How to tell people they have an issue
To better understand how the ecosystem evolved, we conducted a historical study of the last 20 years of the Web using data from the Internet Archive...

Read More ›
Chalmers Security Seminar › Recent work on probabilistic programming languages
In the first half of the talk, we will describe a semantics for these languages based on Type-2 ...

Read More ›
Chalmers Security Seminar › Privacy and security threat modeling: current research directions
Threat analysis is the cornerstone of security-by-design and privacy-by-design approaches for building more secure and privacy-friendly software systems.

Read More ›
Chalmers Security Seminar › Security and Privacy on Medical Devices
The new generation of Implantable Medical Devices (IMDs) is a reality, but the security threats, mainly linked to the inclusion of wireless connectivity, seem not to have received adequate attention.

Read More ›
Chalmers Security Seminar › ZKBoo: Faster Zero-Knowledge for Boolean Circuits
In this talk we describe ZKBoo, a proposal for practically efficient zero-knowledge arguments especially tailored for Boolean circuits and report on a proof-of-concept implementation.

Read More ›
Chalmers Security Seminar › Enhancing the COWL W3C Standard
Web applications are often composed of resources such as JavaScript written, and provided, by different parties. This reuse leads to questions concerning security, and whether one can trust that third-party code will not leak users’ sensitive information. As it stands today, these concerns are well-founded.

Read More ›
Chalmers Security Seminar › Selene: Voting with Transparent Verification and Coercion Mitigation
In conventional cryptographic E2E verification schemes, voters are provided with encrypted ballots that enable them to confirm that their vote is accurately included in the tally. Technically this is very appealing, but ...

Read More ›
Chalmers Security Seminar › Verification of differentially private computations
Differential privacy is a statistical notion of privacy which achieves compelling trade-offs between input privacy and accuracy (of outputs). Differential privacy is also an attractive target for verification...

Read More ›
Chalmers Security Seminar › Privacy engineering: from the building blocks to the system
This talk will be about privacy engineering, a field mainly concerned with techniques, methods, and tools to systematically take into account and address privacy issues when building a system.

Read More ›
Chalmers Security Seminar › Frozen Realms: draft standard support for safer JavaScript plugins
Supporting ultra-fine-grain protection domains in JavaScript while minimizing standardization, development, explanation, and runtime costs.

Read More ›
Chalmers Security Seminar › Architectural requirements for language-level control of external timing channels
A promising new approach to controlling timing channels relies on distinguishing between the direct timing dependencies that are visible at the program control flow level, and the indirect timing dependencies that are typically architectural in nature.

Read More ›
Chalmers Security Seminar › Security of login pages on the Web: who else can know your password?
Most people with an online presence these days store large amounts of information about their lives in online web services: e-mails, pictures, medical information, ... To prevent unauthorised access to their personal...

Read More ›
Chalmers Security Seminar › Program behavior-based fuzzing and vulnerability discovery
Mutational fuzzing is a powerful tool to detect vulnerabilities in software...

Read More ›
Chalmers Security Seminar › Two Can Keep a Secret, If One of Them Uses Haskell
For several decades, researchers from different communities have independently focused on protecting confidentiality of data. Two distinct technologies have emerged for such purposes: Mandatory Access Control (MAC) and Information-Flow Control (IFC)—the former belonging to operating systems (OS) research, while the latter to the programming languages community.

Read More ›
Chalmers Security Seminar › Recent Breakthroughs in Obfuscation
This talk gives an overview of the state of the art in the area of Homomorphic Encryption (HE) and Multi-Linear Maps (MLM). The final section of the talk will deal with the definition and application of indistinguishability obfuscation.

Read More ›
Chalmers Security Seminar › Formal Security Analysis of Mobile and Web Applications
In this talk, I will present two ongoing projects on the formal verification of security properties for mobile and web applications.

Read More ›
Chalmers Security Seminar › Anonymization of sparse multidimensional data
I will sketch the techniques that anonymize data through generalization, record splitting (disassociation) and algorithms that work on tree-structured data.

Read More ›
Chalmers Security Seminar › Daniel Hausknecht's Licentiate presentation
In this talk, I will present recent work on bypassing browser security controls via a novel form of code reuse, and discuss a lightweight static analysis to detect this class of vulnerabilities.

Read More ›
Chalmers Security Seminar › Towards more secure and usable text passwords
Is pa$$w0rd1 a good password or a bad one? For several years, we've been studying how to help users create passwords that are hard for attackers to crack, but are still easy for users to remember and use.

Read More ›
Chalmers Security Seminar › Establishing and Maintaining Root of Trust on Commodity Computer Systems
Establishing root of trust assures the user that either the system is in a malware-free state in which the trustworthy-program boot takes place or the presence of malware is discovered, with high probability. Obtaining such assurance is challenging because malware can survive in system state across repeated secure- and trusted-boot operations.

Read More ›