The threat and severe consequences (financial or otherwise) of ransomware in traditional desktop- and handheld-based computer systems have been well documented in the literature. The same cannot be said for systems comprising constrained, embedded IoT devices used in industrial applications: when it comes to ransomware, that landscape is still largely unexplored. In industrial settings, IoT devices have started being considered for the control of mission-critical systems. A simultaneous or near-simultaneous ransomware attack on a very large number of devices could prove highly disruptive, costly, or outright dangerous. An attack of this nature could, for example, disrupt the operation of IoT-enabled supply chains, compromise food production by targeting smart agriculture, cause unforeseeable consequences for the power grid by compromising smart metering or electric car charging infrastructure, or even endanger lives by tampering with actuators in factories or transport systems. The EPSRC-funded CHARIOT project aims to devise, design, and prototype methods to prevent, detect, recover from, and immunise against ransomware attacks in resource-constrained industrial IoT environments. In this talk I will present the project’s progress to date, as well as some prior work on anomaly detection that led to this research activity at Bristol.
The presentation delves into the critical aspects of QNAME Minimization within the DNS through two key studies. The first segment, "Adoption of QNAME Minimization," covers the main takeaways from "A Second Look at QNAME Minimization" (PAM 2023). This study provides insights into how QNAME Minimization has been embraced across different platforms and its implications for privacy and performance in DNS lookups. The second segment, "Implementation of QNAME Minimization," introduces new research on fingerprinting DNS resolvers by leveraging the query patterns that QNAME Minimization produces. This study, currently under submission, aims to shed light on the plethora of implementation approaches to minimizing queries and how these differences can be used to fingerprint resolver software and versions. Through these studies, the presentation aims to foster a deeper understanding of DNS resolver dynamics, highlighting the importance of QNAME Minimization in bolstering DNS privacy.
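As background for the query patterns discussed above, the following minimal sketch (not taken from either study; the domain and server names are placeholders) illustrates the basic idea of QNAME Minimization: a minimizing resolver reveals only one additional label per zone cut, rather than sending the full query name to every authoritative server.

```c
#include <stdio.h>
#include <string.h>

/*
 * Illustrative sketch of QNAME Minimization: a resolver walking the
 * delegation chain for "www.example.com." reveals only one extra label
 * per zone cut, instead of sending the full name to the root and TLD.
 * Domain and server names are hypothetical placeholders.
 */
int main(void) {
    const char *labels[]  = { "www", "example", "com" };
    const char *servers[] = { "root server", ".com server", "example.com server" };
    const int n = sizeof(labels) / sizeof(labels[0]);

    /* At step i, the minimized QNAME consists only of the last i+1 labels. */
    for (int i = 0; i < n; i++) {
        char qname[256] = "";
        for (int j = n - 1 - i; j < n; j++) {
            strcat(qname, labels[j]);
            strcat(qname, ".");
        }
        printf("query to %-20s QNAME=%s\n", servers[i], qname);
    }
    return 0;
}
```

Resolver implementations differ, for instance, in which query type they use for the minimized queries and how they fall back when a server mishandles them; differences of this kind are what make fingerprinting from observed query patterns possible.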
In this talk, I'll advocate a proof technique for ensuring soundness or security properties of practical languages, which implement interoperability using glue code that mediates interaction between languages after compilation to a common lower-level intermediate representation (IR). This proof technique involves building a _semantic intermediate representation_: a semantic model of source-language types as relations on terms of the lower-level IR. Semantic IRs can be used to guide the design and implementation of sound FFIs and to verify that the IR glue code used to implement conversions ensures type soundness. More interestingly, semantic IRs provide a basis for numerous avenues of future work on the principled design of language interoperability: how to support the inclusion of libraries whose behavior is foreign to the original language, how to prove soundness and security properties that are robust to behaviors (attackers) outside of the semantic IR, and how to develop compiler IRs and backends that make it easier to implement and verify sound interoperability for a wide array of source languages.
Content scanning systems employ perceptual hashing algorithms to scan user content for illicit material, such as child pornography or terrorist recruitment flyers. Perceptual hashing algorithms help determine whether two images are visually similar while preserving the privacy of the input images. Several efforts from industry and academia propose scanning on client devices such as smartphones due to the impending rollout of end-to-end encryption that will make server-side scanning difficult. These proposals have met with strong criticism because of the potential for the technology to be misused for censorship. However, the risks of this technology in the context of surveillance are not well understood. This talk will discuss results from our experimental investigation of physical surveillance risks in these systems. Concretely: (1) we offer a definition of physical surveillance in the context of client-side image scanning systems; (2) we experimentally characterize this risk; (3) we experimentally study the trade-off between the robustness of client-side image scanning systems and surveillance, showing that more robust detection of illicit material leads to an increased potential for physical surveillance in most settings.
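To make the matching step concrete, here is a minimal, hypothetical sketch of how perceptual-hash comparison is commonly framed: two images are deemed visually similar when the Hamming distance between their fixed-length hashes falls below a threshold. The hash values and threshold below are invented for illustration; deployed systems use their own (often proprietary) hashing algorithms.

```c
#include <stdint.h>
#include <stdio.h>

/* Number of differing bits between two 64-bit perceptual hashes. */
static int hamming_distance(uint64_t a, uint64_t b) {
    return __builtin_popcountll(a ^ b);
}

int main(void) {
    /* Hypothetical 64-bit perceptual hashes of a query image and a
     * database entry; real systems derive these from image content. */
    uint64_t query_hash = 0xA5C3F00D12345678ULL;
    uint64_t db_hash    = 0xA5C3F00D12345E78ULL;

    int threshold = 10;  /* illustrative similarity threshold, in bits */
    int dist = hamming_distance(query_hash, db_hash);

    printf("distance = %d -> %s\n", dist,
           dist <= threshold ? "match (visually similar)" : "no match");
    return 0;
}
```

In such a scheme, the threshold is one place where the robustness trade-off mentioned above shows up: a larger threshold tolerates more perturbation of a target image, but also matches more unrelated content.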
Modern computer systems, encompassing mobile, cyber-physical, and cloud applications, are evolving to become more interconnected and complex. These systems facilitate diverse domain interactions, which in turn increase their vulnerability and present new challenges in security. Consequently, there is a critical need to assess and manage the security and attack surfaces of modern computer systems. This task demands scalable and reliable approaches to cope with the volatility of these ecosystems, highlighting the need for principled security solutions. In this talk, I will present how novel program analysis techniques, combined with security principles, can be leveraged to manage and reduce attack surfaces. I will present LMCAS, a software debloating approach that customizes applications based on runtime configurations and eliminates superfluous code while preserving the required functionality. I will conclude by discussing future research directions that I am eager to explore.
It has been five years since the General Data Protection Regulation (GDPR) went into effect in the EU. Ever since, research has continued to show that the creators of online services find it difficult to translate the legal requirements of EU legislation into practice. They mainly resort to lengthy privacy policies and often deceptive cookie notices to ask users for their consent to data processing, rather than revise their own data processing practices and opt for approaches that collect less personal data. This comes to the detriment of service providers and users, who are both faced with decreased usability of websites, apps, and devices.
This talk investigates approaches both to understand the roadblocks that keep system creators and users from adopting a privacy-by-design mindset and to find ways to address them. This is all the more important in light of new European platform regulations that intend to set boundaries for personalized advertising and introduce interoperability requirements, which in turn create new opportunities to empower system creators and users alike to take control of users' privacy.
The transformative potential of deep learning in enhancing computer science solutions has not gone unnoticed in the fields of security and privacy. However, the sheer volume of related scientific literature and the significant gap between a lab context and real-world environments make it extremely challenging to assess the current progress in the area. In this talk, I will review the underlying mechanisms and main principles behind deep learning when applied to offensive and defensive cybersecurity solutions. I will focus on two primary use cases: traffic analysis attacks on Tor and network-based intrusion detection systems, analyzing the expected benefits and potential pitfalls of using deep learning. This analysis challenges the common perception that deep learning can be applied as a purely end-to-end, black-box solution. To that end, the presentation emphasizes the importance of explainability and error analysis for validating and troubleshooting deep neural networks. This discussion is meant to equip cybersecurity researchers and practitioners to begin incorporating deep learning into their toolbox while maintaining a critical and holistic perspective.
Microarchitectural optimizations, such as caches or speculative out-of-order execution, play a crucial role in enhancing system performance. However, these optimizations also enable attacks that undermine software-enforced security policies. The conventional approach of constant-time programming, while widely adopted for safeguarding cryptographic implementations against microarchitectural attacks, has its limitations. From a security perspective, it relies on certain assumptions about the underlying hardware and, for instance, does not suffice to protect against Spectre attacks. In terms of performance, it imposes an additional overhead due to, among other things, control-flow linearization.
In this presentation, we introduce two novel hardware-software co-design solutions to address some of the shortcomings of constant-time programming. First, we present ProSpeCT, a generic formal processor model that guarantees that constant-time programs (under a non-speculative semantics) are free from Spectre attacks, while still enabling speculative out-of-order execution. Second, we present Architectural Mimicry, a novel ISA extension that provides dedicated hardware support for efficient control-flow balancing and linearization of secret-dependent branches. Both defenses have been implemented and evaluated on top of Proteus, an extensible RISC-V processor. To conclude, we will discuss some of the remaining challenges that still need to be addressed to achieve provable end-to-end security guarantees.
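For readers unfamiliar with control-flow linearization, the generic sketch below (not code from ProSpeCT or Architectural Mimicry) shows the standard constant-time idiom of replacing a secret-dependent branch with a branchless, mask-based select; the hardware support described above aims to make this kind of balancing cheaper and more robust.

```c
#include <stdint.h>

/* Secret-dependent branch: leaks the secret bit through timing,
 * branch-predictor state, and instruction-cache footprint. */
uint32_t select_branching(uint32_t secret_bit, uint32_t a, uint32_t b) {
    if (secret_bit)
        return a;
    return b;
}

/* Linearized, constant-time version: build an all-ones or all-zeros
 * mask from the secret bit and blend the two values without branching. */
uint32_t select_ct(uint32_t secret_bit, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)-(int32_t)(secret_bit & 1); /* 0x00000000 or 0xFFFFFFFF */
    return (a & mask) | (b & ~mask);
}
```

Even the branchless version still rests on assumptions about the hardware (and on the compiler not reintroducing a branch), which is exactly the kind of gap the proposed hardware-software co-designs target.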
In the Internet of Things era, edge devices have become considerably more diversified and are often designed using high-level synthesis (HLS), which translates behavioral descriptions into hardware descriptions, for improved design productivity. However, HLS tools were originally developed in a security-unaware manner, resulting in vulnerabilities to side-channel attacks. In our recent work, published in ACM Transactions on Embedded Computing Systems, we integrated side-channel security into a state-of-the-art HLS-based hardware design flow, leveraging a provably secure countermeasure called threshold implementation. As case studies for lightweight block ciphers composed of addition/rotation/XOR (ARX)-based permutations, our evaluation on an FPGA board demonstrated that the proposed method successfully improves the side-channel security of all ARX-based ciphers used as benchmarks.
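For concreteness, the sketch below shows a ChaCha-style quarter-round, a textbook example of an ARX permutation built solely from modular addition, rotation, and XOR; it is illustrative only, and is neither one of the paper's benchmark ciphers nor its threshold-implementation circuitry.

```c
#include <stdint.h>

/* Left-rotate a 32-bit word by n bits (0 < n < 32). */
static uint32_t rotl32(uint32_t x, unsigned n) {
    return (x << n) | (x >> (32 - n));
}

/* One ChaCha-style quarter-round: only Addition, Rotation, and XOR.
 * Shown purely to illustrate the ARX structure underlying the
 * lightweight ciphers targeted by the HLS design flow. */
static void arx_quarter_round(uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d) {
    *a += *b; *d ^= *a; *d = rotl32(*d, 16);
    *c += *d; *b ^= *c; *b = rotl32(*b, 12);
    *a += *b; *d ^= *a; *d = rotl32(*d, 8);
    *c += *d; *b ^= *c; *b = rotl32(*b, 7);
}
```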
Smart contracts are distributed applications that govern financial assets in cryptocurrencies such as Ethereum and can hence implement advanced financial instruments, such as decentralized exchanges or decentralized autonomous organizations (DAOs). In this talk, I will give an overview of our advances in developing static analyzers that come with provable guarantees. I will first show how to define generic security properties that characterize the absence of prominent attack classes (such as reentrancy or transaction order dependence) and then discuss different forms of static analysis that can prove these properties and that can be efficiently implemented to analyze real-world smart contracts.
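As an informal illustration of the reentrancy attack class mentioned above (normally expressed in Solidity; modeled here in C, with a callback standing in for the external call to an attacker contract), the vulnerable routine below performs the external interaction before updating its recorded balance, so a reentrant call can withdraw twice.

```c
#include <stdio.h>

/* Toy model of reentrancy: external_call() stands in for sending funds
 * to an attacker contract, whose fallback may call back into withdraw. */
static int balance = 100;
static int reenter_budget = 1;   /* the attacker reenters once */

static void withdraw_vulnerable(void);

static void external_call(void) {
    if (reenter_budget-- > 0)
        withdraw_vulnerable();   /* attacker's callback reenters */
}

/* Vulnerable: pays out before updating state
 * (violates the checks-effects-interactions pattern). */
static void withdraw_vulnerable(void) {
    if (balance >= 100) {
        external_call();         /* interaction happens first ...      */
        balance -= 100;          /* ... state update happens too late  */
    }
}

int main(void) {
    withdraw_vulnerable();
    printf("final balance: %d\n", balance);  /* prints -100: drained past zero */
    return 0;
}
```

Moving the balance update before the external call removes the issue; a generic security property of the kind described above characterizes the absence of such behaviors independently of any particular contract.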