LAB at NIPS 2016

We are delighted to announce that the LAB group will contribute to three different workshops and one symposium at this year's Neural Information Processing Systems (NIPS) conference in Barcelona. The conference is one of the highlights of the machine learning year and takes place between the 5th and 10th of December, 2016.

Olof Mogren will present his work C-RNN-GAN: Continuous recurrent neural networks with adversarial training at the Constructive Machine Learning Workshop.

Mikael Kågebäck and Emilio Jorge will present their work Learning to Play Guess Who? and Inventing a Grounded Language as a Consequence at the Deep Reinforcement Learning Workshop.

Fredrik Johansson will present his joint work with Uri Shalit and David Sontag on estimating individual treatment effects at the What If? Workshop. Uri Shalit will also present their work at the Deep Learning Symposium.

Vacancy: Associate Professor in Machine Learning

There is currently a vacant position at the Department of Computer Science and Engineering.

Associate Professor in Machine Learning

Major responsibilities
The position of Associate Professor is a full-time faculty position. You will teach courses at the bachelor's and master's levels. You will also lead research activities and supervise PhD students and master's students. You are expected to be an active researcher, to prepare applications for the funding of research projects, and to promote the continuation of ongoing projects.

Position summary
Full-time permanent position.

We are looking for applicants from all subfields of machine learning with a PhD in computer science (or an equivalent qualification). Applicants with some of the following qualifications are especially welcome:

* a strong research record;
* a strong pedagogical track record;
* potential to lead new research projects and develop new teaching methods;
* core competence in machine learning;
* a record of implementing scalable methods and systems;
* a record of applications in some domain;
* openness to industry collaboration.

The position of associate professor requires scientific expertise that is considerably higher than that required for a doctoral degree (corresponding to admission as docent) and at least 15 higher education credits in teaching and learning in higher education or corresponding expertise. You must be accustomed to teaching and be a skilled educator. For this reason, we place a great deal of emphasis on your pedagogical portfolio. Experience in conducting academic research and/or development in industry/the public sector is a requirement.

Workshop on Deep Learning in Academia and Industry in Gothenburg

Venue: Department of Computer Science and Engineering, Chalmers, room 8103.
Date: December 3, 2016
10:00-10:30: M. Kågebäck, E. Jorge and E. Gustavsson: Learning to communicate
10:30-11:00: Richard Johansson and Luis Pena Neto: Word senses and ontologies
11:00-11:30: Olof Mogren: Continuous recurrent neural networks with adversarial training
11:30-12:00: Discussion
12:00-13:00: Lunch + coffee
13:00-13:45: Richard Socher: Tackling the problems of deep learning
14:00-14:30: Fredrik Johansson: Learning to generate sequences
14:30-15:00: Daniel Langkilde: Extracting information from the Web at Recorded Future
15:00-15:30: S. Marinov: NLP and the Law

More success in top-tier conferences with AAAI-17

Our group has enjoyed two more successes with differential privacy and multi-armed bandit algorithms. The first extends Thompson sampling to stochastic bandits with graph feedback, and the second improves the privacy of adversarial bandits with two simple heuristics.

Both papers have been accepted for publication at the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), to be held in San Francisco in February 2017.

Here are the details:

  1. Thompson Sampling For Stochastic Bandits with Graph Feedback
    Aristide Tossou, Christos Dimitrakakis and Devdatt Dubhashi

    We present a novel extension of Thompson Sampling for stochastic sequential decision problems with graph feedback, even when the graph structure itself is unknown and/or changing. We provide theoretical guarantees on the Bayesian regret of the algorithm, linking its performance to the underlying properties of the graph. Thompson Sampling has the advantage of being applicable without the need to construct complicated upper confidence bounds for different problems. We illustrate its performance through extensive experimental results on real and simulated networks with graph feedback. More specifically, we tested our algorithms on power law, planted partition and Erdős–Rényi graphs, as well as on graphs derived from Facebook and Flixster data. These all show that our algorithms clearly outperform related methods that employ upper confidence bounds, even if the latter use more information about the graph.

  2. Achieving Privacy in the Adversarial Multi-Armed Bandit
    Aristide Tossou and Christos Dimitrakakis

    In this paper, we improve the previously best known regret bound to achieve ε-differential privacy in oblivious adversarial bandits from O(T^(2/3)/ε) to O(T^(1/2) ln T/ε). This is achieved by combining a Laplace Mechanism with the standard EXP3. In addition, we prove that EXP3 is already differentially private, but it leaks a linear amount of information in T. However, we can improve the inherent privacy of EXP3 simply by relying on its intrinsic exponential mechanism for selecting actions. This allows us to reach O(ln T)-DP, with a regret of O(T^(2/3)) that holds against an adaptive adversary, an improvement over the previously best known O(T^(3/4)). This is done by using an algorithm that wraps EXP3 with a mini-batch loop. Finally, we run experiments that clearly demonstrate the validity of our theoretical analysis.
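The idea behind the first paper can be conveyed with a small sketch. Below is a minimal, illustrative Thompson Sampling loop on Bernoulli arms where playing an arm also reveals the rewards of its neighbours in a feedback graph. The arm means, graph and horizon are made-up assumptions for illustration; this is not the paper's exact algorithm.

```python
import random

# Minimal illustrative sketch (not the paper's exact algorithm):
# Thompson Sampling on Bernoulli arms with graph feedback, i.e. playing
# an arm also reveals rewards for its neighbours in the feedback graph.
# Arm means, graph and horizon are made up for illustration.
random.seed(0)

means = [0.2, 0.5, 0.8]                            # hypothetical Bernoulli arm means
neighbours = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}  # feedback graph (incl. self)
alpha = [1.0, 1.0, 1.0]                            # Beta posterior: successes + 1
beta = [1.0, 1.0, 1.0]                             # Beta posterior: failures + 1

for _ in range(2000):
    # Sample one plausible mean per arm from the current posterior ...
    samples = [random.betavariate(alpha[i], beta[i]) for i in range(3)]
    # ... and play the arm whose sample is largest.
    arm = max(range(3), key=lambda i: samples[i])
    # Graph feedback: update the posterior of every observed neighbour.
    for j in neighbours[arm]:
        reward = 1 if random.random() < means[j] else 0
        alpha[j] += reward
        beta[j] += 1 - reward

best = max(range(3), key=lambda i: alpha[i] / (alpha[i] + beta[i]))
print(best)  # the posterior concentrates on the best arm (index 2)
```

Because each play yields observations for several arms at once, the posteriors sharpen faster than under plain bandit feedback, which is the intuition behind regret bounds that depend on properties of the graph.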

A link to download the papers will be made available soon.
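To give a flavour of the second paper's approach, here is a minimal sketch of an EXP3-style loop in which each observed loss is perturbed by Laplace noise before the weight update. The losses, horizon, learning rate and privacy budget are made-up assumptions, and the noise calibration is simplified; this is not the paper's exact construction or analysis.

```python
import math
import random

# Minimal illustrative sketch (not the paper's exact construction): EXP3
# with uniform exploration, where each observed loss is perturbed with
# Laplace noise before the weight update -- the basic idea behind making
# adversarial bandits differentially private. Losses, horizon, learning
# rate and privacy budget are made-up assumptions for illustration.
random.seed(1)

K, T = 3, 3000
eta = math.sqrt(math.log(K) / (K * T))  # an EXP3-style learning rate
gamma = 0.1                             # exploration, keeps probabilities bounded away from 0
eps = 1.0                               # assumed per-round privacy budget
losses = [0.9, 0.5, 0.1]                # oblivious adversary: fixed losses

def laplace(scale):
    """Draw Laplace(0, scale) noise by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

weights = [1.0] * K
counts = [0] * K
for _ in range(T):
    total = sum(weights)
    probs = [(1 - gamma) * w / total + gamma / K for w in weights]
    arm = random.choices(range(K), probs)[0]
    counts[arm] += 1
    noisy = losses[arm] + laplace(1.0 / eps)  # Laplace mechanism on the observed loss
    est = noisy / probs[arm]                  # importance-weighted loss estimate
    weights[arm] *= math.exp(-eta * est)

best = counts.index(max(counts))
print(best)  # EXP3 should mostly play the lowest-loss arm (index 2)
```

The added Laplace noise averages out in the importance-weighted estimates, so the algorithm still concentrates on the low-loss arm while each individual loss observation is perturbed.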

Deep Learning and NLP

The LAB group has seen several recent successes in deep learning and natural language processing. Olof Mogren's work on named entity recognition using character-based LSTMs was recently accepted for publication at the Biomedical Text Mining workshop (BioTxtM) at COLING. Soon after, a paper on word sense disambiguation using LSTMs by Mikael Kågebäck and Hans Salomonsson was accepted at the Cognitive Aspects of the Lexicon workshop (CogALex). Mikael Kågebäck and Emilio Jorge's work on inventing and learning languages by playing games will be presented at a NIPS workshop. Additionally, the work on causality by Fredrik Johansson and collaborators will be featured at the Deep Learning Symposium at NIPS 2016.

VR Project: Learning, Privacy and the Limits of Computation

Christos Dimitrakakis has just obtained a Swedish Research Council Project Grant on Learning, Privacy and the Limits of Computation that will fund one PhD student.

The project will explore the interactions between information theory, algorithmic complexity and privacy in the context of statistical learning and decision-making algorithms.

Students with a strong mathematical background and an interest in learning theory are encouraged to apply.

AAAI 2016

Two of our papers were accepted for presentation at AAAI 2016.

1. Aristide Tossou and Christos Dimitrakakis: “Algorithms for differentially private multi-armed bandits”.

2. With our colleagues at the University of Melbourne: Benjamin Rubinstein, Zuhe Zhang and Christos Dimitrakakis: “On the Differential Privacy of Bayesian Inference”.

More details soon.

Publications in top-tier conferences

We are happy to report that our group has seen recent success through papers accepted at two top-tier conferences.

Chien-Chung Huang, Naonori Kakimura and Naoyuki Kamiyama will have their paper Exact and Approximation Algorithms for Weighted Matroid Intersection published at the very competitive SODA 2016.

A few weeks earlier, Fredrik Johansson, Ankani Chattoraj, Devdatt Dubhashi and Chiranjib Bhattacharyya had their paper Weighted Theta Functions and Embeddings with Applications to Max-Cut, Clustering and Summarization accepted for publication at NIPS 2015.


Exact and Approximation Algorithms for Weighted Matroid Intersection
Chien-Chung Huang, Naonori Kakimura and Naoyuki Kamiyama

We present exact and approximation algorithms for the weighted matroid intersection problem. Our exact algorithms are faster than previous algorithms when the largest weight is relatively small. Our approximation algorithms deliver a (1-ε)-approximate solution with running times significantly faster than known exact algorithms. The core of our algorithms is a decomposition technique: we decompose the weighted version of the problem into a set of unweighted matroid intersection problems. The computational advantage of this approach is that we can then make use of fast unweighted matroid intersection algorithms as a black box for designing algorithms. To be precise, we show that we can find an exact solution by solving W unweighted matroid intersection problems, where W is the largest given weight. Furthermore, we can find a (1-ε)-approximate solution by solving O(ε^{-1} log r) unweighted matroid intersection problems, where r is the smallest rank of the two given matroids.
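The weight-decomposition idea at the heart of the paper can be illustrated without a matroid oracle: an integer weight function w splits into W unit-weight layers, where layer k contains exactly the elements of weight at least k, and summing the layer indicators recovers w. The ground set and weights below are made up for illustration, and this sketch shows only the decomposition, not the matroid intersection algorithm itself.

```python
# Hypothetical ground set with integer weights (made up for illustration).
w = {"a": 3, "b": 1, "c": 2, "d": 3}
W = max(w.values())  # the largest weight

# Layer k keeps the elements of weight >= k; each layer would define one
# unweighted matroid intersection instance in the decomposition.
layers = [{e for e, we in w.items() if we >= k} for k in range(1, W + 1)]

# Summing the unit-weight layer indicators recovers the original weights,
# which is why W unweighted instances suffice for an exact solution.
recovered = {e: sum(e in layer for layer in layers) for e in w}
print(recovered == w)  # True
```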


Weighted Theta Functions and Embeddings with Applications to Max-Cut, Clustering and Summarization
Fredrik Johansson, Ankani Chattoraj, Devdatt Dubhashi and Chiranjib Bhattacharyya

We introduce a unifying generalization of the Lovász theta function, and the associated geometric embedding, for graphs with weights on both nodes and edges. We show how it can be computed exactly by semidefinite programming, and how to approximate it using SVM computations. We show how the theta function can be interpreted as a measure of diversity in graphs and use this idea and the graph embedding in algorithms for Max-Cut, correlation clustering and document summarization, all of which are well represented as problems on weighted graphs.