ICML 2014 Workshop on Learning, Security and Privacy

Beijing, China, 25 June, 2014.

Program

9:00 Keynote: Martin Rehak, CISCO. Categorisation of False Positives: Not All Network Anomalies are Born Equal [Abstract]
9:55 Devansh Arpit, Ifeoma Nwogu, Gaurav Srivastava and Venu Govindaraju. An Analysis of Random Projections in Cancelable Biometrics [PDF]
10:20 Coffee Break
10:40 Christos Dimitrakakis, Blaine Nelson, Benjamin Rubinstein and Aikaterini Mitrokotsa. Robust and Private Bayesian Inference [Abstract] (presentation only)
11:05 Abhradeep Thakurta, Raef Bassily and Adam Smith. Private Empirical Risk Minimization, Revisited (presentation only)
11:30 Pili Hu, Wing Cheong Lau and Sherman S.M. Chow. Secure Friend Discovery via Privacy-Preserving and Decentralized Community Detection [PDF]
11:55 Lunch and poster session
14:00 Keynote: Xiaojin (Jerry) Zhu, University of Wisconsin-Madison. Optimal Training Set Attacks on Machine Learning
14:55 Nikita Mishra and Abhradeep Thakurta. Private Stochastic Multi-arm Bandits: From Theory to Practice [PDF]
15:20 Coffee Break
15:40 Elad Eban, Elad Mezuman and Amir Globerson. Discrete Chebyshev Classifiers [PDF]
16:05 Mohamad Ali Torkamani and Daniel Lowd. On Robustness and Regularization of Structural Support Vector Machines (presentation only)
16:30 Discussion

Workshop overview

Many machine learning settings give rise to security and privacy requirements that are not well addressed by traditional learning methods. Security concerns arise in intrusion detection, malware analysis, biometric authentication, spam filtering, and other applications where data may be manipulated, either at the training stage or during system deployment, to reduce prediction accuracy. Privacy issues are common in the analysis of the personal and corporate data ubiquitous in modern Internet services. Learning methods that address security and privacy must navigate an interplay of game theory, cryptography, optimization, and differential privacy.

Despite encouraging progress in recent years, many theoretical and practical challenges remain. Several emerging research areas, including stream mining, mobility data mining, and social network analysis, require new methodological approaches to ensure privacy and security. There is also an urgent need for methods that can quantify and enforce privacy and security guarantees for specific applications. The ever-increasing abundance of data poses technical challenges for the scalability of learning methods in security- and privacy-critical settings. These challenges can only be addressed in an interdisciplinary context, by pooling expertise from the traditionally disjoint fields of machine learning, security, and privacy.

To encourage scientific dialogue and foster cross-fertilization among these three fields, the workshop invites original submissions, ranging from open problems and ongoing research to mature work, on any of the following core subjects:

Organizing committee

Program committee