This talk will be about ongoing work on developing new program
synthesis techniques. One of the applications is to find programs
that break type soundness, given a type system and a semantics. I
will show that some of the
[IFC challenges](https://ifc-challenge.appspot.com/) can be solved
automatically in this way.
We present our work on the suitability of metaphors for helping data
subjects make informed decisions about sharing their data with
differential privacy (DP) systems, and we discuss open research
challenges.
HTTPS is a cornerstone of privacy in the modern Web. The public key
infrastructure underlying HTTPS, however, is a frequent target of
attacks. We introduce LogPicker, a novel protocol for strengthening
the public key infrastructure of HTTPS. LogPicker enables a pool of
Certificate Transparency (CT) logs to collaborate: a randomly
selected log includes the certificate, while the remaining logs
witness and attest to the certificate issuance process. As a result,
CT logs can independently audit the log in charge, without the need
for a trusted third party.
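To make the division of roles concrete, here is a minimal Haskell sketch of the selection step just described; every name and type in it is an illustrative assumption, not part of the LogPicker protocol itself, which additionally involves signed attestations and auditing.

```haskell
import System.Random (randomRIO)

-- Illustrative stand-ins for the participants and data (assumptions).
newtype Certificate = Certificate String
data Log = Log { logName :: String } deriving (Eq, Show)

-- One log from the pool is drawn at random to include the certificate;
-- the remaining logs act as witnesses of the issuance process.
pickIncludingLog :: [Log] -> Certificate -> IO (Log, [Log])
pickIncludingLog pool _cert = do
  i <- randomRIO (0, length pool - 1)
  let chosen    = pool !! i
      witnesses = filter (/= chosen) pool
  pure (chosen, witnesses)
```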
Computer systems have evolved beyond classical notions of personal
computers, servers, and even smartphones. They are distributed,
embedded, capable of learning, and able to modify our perception of
the physical world. Securing such systems requires an end-to-end
perspective. I will demonstrate the utility of this perspective by
discussing my recent results on: (1) building least-privilege
distributed systems with applications to the Internet of Things; and
(2) establishing threat models for systems that learn.
Traffic analysis for instant messaging (IM) applications continues to pose an important privacy challenge. In particular, transport-level data can unintentionally leak information about IM conversations, such as who communicates with whom. Existing tools for metadata privacy face adoption obstacles, including the risk of being scrutinized for having a particular app installed and performance overheads incompatible with mobile devices.
Common verification procedures for digital signatures return a
decision (accept/reject) only at the very end of the execution. If
interrupted prematurely, however, the verification process cannot
infer any meaningful information about the validity of the given
signature. This limitation is due solely to the design of the
algorithm and is not inherent to signature verification. In this
talk, I
will present a formal framework to handle interruptions during
signature verification and a generic way to devise alternative
verification procedures that progressively build confidence in the
final decision. Our transformation applies to a wide range of
post-quantum secure schemes including the NIST finalist Rainbow.
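As a rough illustration of this idea (not the construction from the talk), the Haskell sketch below treats verification as a list of independent sub-checks, e.g. the individual equations checked by a multivariate scheme such as Rainbow. An interrupted run that has only examined part of the list can still report the fraction of sub-checks that passed as a crude confidence estimate.

```haskell
-- Illustrative sketch only: 'subChecks' holds the outcomes of individual
-- sub-checks; the names and the confidence measure are assumptions made
-- for exposition, not the talk's actual transformation.
progressiveVerify :: Int -> [Bool] -> Maybe Double
progressiveVerify budget subChecks = go 0 (take budget subChecks)
  where
    total = fromIntegral (length subChecks)
    -- Every sub-check examined so far passed: report the fraction covered.
    go done []        = Just (fromIntegral done / total)
    go done (ok:rest)
      | ok            = go (done + 1) rest
      | otherwise     = Nothing  -- any failed sub-check is an outright reject
```

For example, `progressiveVerify 3 [True, True, True, True]` yields `Just 0.75`: the verifier was interrupted after three of the four sub-checks and reports the confidence accumulated so far.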
In this work, we lift two technical assumptions that stand out among the details of Vassena et al.’s translation. First, the security property for which the equivalence is formally established is termination-insensitive noninterference. Second, both the coarse-grained and the fine-grained languages are statically typed in a standard (security-unaware) way. We derive a novel 2-labeled fine-grained dynamic IFC system, for which we have not found a semantics-preserving counterpart that is idiomatically coarse-grained. We conjecture that 2-labeled (and, generally, n-labeled) fine-grained monitors unveil an expressiveness gap between the fine-grained and the coarse-grained approaches to dynamic IFC.
Modern commodity computing platforms such as smartphones (e.g., Android and iOS) and smart home systems (e.g., SmartThings and NEST) provide programmable interfaces for third-party integration, enabling popular third-party functionality that is often manifested in applications, or apps. Thus, for the last decade, designing systems to analyze mobile apps for vulnerabilities or unwanted behavior has been a major research focus within the security community. Leveraging the lessons and techniques learned from mobile app analysis, researchers have developed similar systems to evaluate the security, safety, and privacy of smart homes by inspecting IoT apps developed for platforms such as SmartThings. However, emerging characteristics of smart home ecosystems indicate the need to move away from the approach of IoT app analysis, as IoT apps may not be representative of the home automation deployed in real homes and, moreover, may become unavailable for analysis or instrumentation in the near future.
How can we write secure programs in a pervasively effectful language? In a “pure” language, such as Haskell, effects performed by a program are recorded explicitly in its type. Thus, a function of type Int -> Int is just that: a function that receives an integer and returns an integer. It does not perform side effects such as writing to or reading from a channel. In an impure language, such as ML, however, a function of type Int -> Int may read, write, or even order a burrito. It’s impossible to assert that a function is secure from its type alone, since it may be performing invisible side effects that may leak a secret.
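To make this contrast concrete, here is a minimal Haskell example (the function names are illustrative): the first type rules out side effects entirely, while the second records in the type that the function performs I/O.

```haskell
-- Pure: a value of type Int -> Int cannot perform side effects.
double :: Int -> Int
double x = x * 2

-- Effectful: the IO in the type makes the side effect visible to callers.
doubleAndLog :: Int -> IO Int
doubleAndLog x = do
  putStrLn ("doubling " ++ show x)  -- writing to a channel (stdout)
  pure (x * 2)

main :: IO ()
main = doubleAndLog 21 >>= print
```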
Finding suitable ways to handle personal data in conformance with the law is challenging. For existing systems, the challenge is to show evidence that they already comply with the GDPR, or otherwise to work towards compliance by modifying their systems and procedures, or by reprogramming their systems in order to pass eventual controls. In this short, non-technical talk I will give my personal opinion on issues related to the ambition of achieving Privacy by Construction.