The 2014 ACM Workshop on Artificial Intelligence and Security will be co-located with CCS, the premier computer security conference.
As the 7th workshop in the series, AISec 2014 calls for papers on topics related to both AI/learning and security/privacy.
Artificial Intelligence (AI), and Machine Learning (ML) in particular, provides a set of useful analytic and decision-making techniques that are being leveraged by an ever-growing community of practitioners, including in applications with security-sensitive elements. However, while security researchers often utilize such techniques to address problems and AI/ML researchers develop techniques for big-data analytics applications, neither community devotes enough attention to the other. Within security research, AI/ML components are often regarded as black-box solvers. Conversely, the learning community seldom considers the security and privacy implications entailed by the application of their algorithms when designing them. Although the two communities generally pursue different directions, interesting problems appear where the fields meet. These problems have already raised many novel questions for both communities and created a new branch of research known as secure learning. Within this intersection, the AISec workshop has become the primary venue for this unique fusion of research.
The past few years have seen particularly strong growth of interest within the AISec / secure learning community, first with a week-long workshop at Dagstuhl Castle in Germany, followed by the highly successful fifth AISec workshop. There are several reasons for this surge. First, machine learning, data mining, and other artificial intelligence technologies play a key role in extracting knowledge, situational awareness, and security intelligence from Big Data. Second, companies such as Google, Amazon, and Splunk are increasingly exploring and deploying learning technologies to address Big Data problems for their customers. Finally, these trends increasingly expose companies and their customers/users to intelligent technologies. As a result, learning technologies are being explored by researchers both as potential solutions to security and privacy problems and as a potential source of new privacy and security vulnerabilities that must be secured to prevent them from misbehaving or leaking information to an adversary. The AISec workshop meets this need and serves as the sole long-running venue for this topic.
AISec serves as the primary meeting place for diverse researchers in security, privacy, AI, and machine learning, and as a venue for developing the fundamental theory and practical applications supporting the use of machine learning for security and privacy. The needs of this burgeoning community, which focuses on (among other topics) learning in game-theoretic adversarial environments, privacy-preserving learning, and the use of sophisticated new learning algorithms in security, are not met elsewhere.
The past year has also seen a surge in the use of differential privacy. Differential privacy is a prominent concept in the theoretical computer science, databases, and machine learning communities that formalises privacy guarantees in a rather general manner: roughly, the output distribution of a differentially private computation changes only slightly when any single individual's record is added or removed from the input. It also has strong links with the truthfulness of mechanisms in multi-agent systems. We especially encourage papers connecting differential privacy to problems in artificial intelligence and learning.
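To make the flavour of this guarantee concrete, the sketch below implements the classic Laplace mechanism for releasing a noisy count under epsilon-differential privacy. The function name and parameters are our own illustrative choices rather than the API of any particular library.

```python
import math
import random

def laplace_mechanism(true_value, epsilon, sensitivity=1.0):
    """Release a numeric query answer with epsilon-differential privacy.

    For a query whose answer changes by at most `sensitivity` when one
    record is added or removed (a counting query has sensitivity 1),
    adding Laplace noise of scale sensitivity / epsilon guarantees
        Pr[M(D) in S] <= exp(epsilon) * Pr[M(D') in S]
    for any neighbouring datasets D, D' and any output set S.
    """
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of the Laplace distribution using only the
    # standard library: u is uniform on (-0.5, 0.5).
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Smaller epsilon means stronger privacy and, on average, larger noise.
noisy_count = laplace_mechanism(1000, epsilon=0.1)
```

Note how the noise scale depends only on the query's sensitivity and the privacy budget epsilon, not on the data itself; this data-independence is what makes the guarantee general enough to compose across repeated queries.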