Our group has enjoyed two more successes with differential privacy and multi-armed bandit algorithms. The first paper extends Thompson sampling to stochastic bandits with graph feedback, and the second improves the privacy of adversarial bandits with two simple heuristics.
Both papers have been accepted for publication at the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), to be held in San Francisco in February 2017.
Here are the details:
- Thompson Sampling for Stochastic Bandits with Graph Feedback
- Aristide Tossou, Christos Dimitrakakis and Devdatt Dubhashi
We present a novel extension of Thompson Sampling for stochastic sequential decision problems with graph feedback, even when the graph structure itself is unknown and/or changing. We provide theoretical guarantees on the Bayesian regret of the algorithm, linking its performance to the underlying properties of the graph. Thompson Sampling has the advantage of being applicable without the need to construct complicated upper confidence bounds for different problems. We illustrate its performance through extensive experimental results on real and simulated networks with graph feedback. More specifically, we tested our algorithms on power law, planted partitions and Erdős–Rényi graphs, as well as on graphs derived from Facebook and Flixster data. These all show that our algorithms clearly outperform related methods that employ upper confidence bounds, even if the latter use more information about the graph.
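To give a rough feel for the side-observation idea, here is a minimal sketch of Thompson Sampling with graph feedback on Bernoulli arms, where playing an arm also reveals reward samples for its neighbors in the feedback graph. This is an illustrative toy, not the algorithm from the paper: the Beta posteriors, the example arm means, and the line graph are all assumptions made up for the demo.

```python
import random

def thompson_graph_feedback(means, neighbors, horizon, seed=0):
    """Toy Thompson Sampling with graph (side-observation) feedback:
    pulling an arm also reveals Bernoulli samples for its neighbors.
    Illustrative sketch only, not the paper's exact algorithm."""
    rng = random.Random(seed)
    k = len(means)
    alpha = [1] * k  # Beta posterior: observed successes + 1
    beta = [1] * k   # Beta posterior: observed failures + 1
    total = 0.0
    for _ in range(horizon):
        # Sample a mean from each arm's Beta posterior; play the argmax.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        total += means[arm]
        # Graph feedback: observe the played arm AND all its neighbors.
        for j in [arm] + neighbors[arm]:
            r = 1 if rng.random() < means[j] else 0
            alpha[j] += r
            beta[j] += 1 - r
    return total / horizon  # average expected reward collected

# Usage: 3 arms on a line graph 0-1-2; arm 2 is the best arm.
avg = thompson_graph_feedback(
    means=[0.2, 0.5, 0.8],
    neighbors={0: [1], 1: [0, 2], 2: [1]},
    horizon=2000,
)
```

Note how the extra observations speed up posterior concentration relative to standard bandit feedback, which is the intuition behind regret bounds that depend on graph properties rather than on the raw number of arms.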
- Achieving Privacy in the Adversarial Multi-Armed Bandit
- Aristide Tossou and Christos Dimitrakakis
In this paper, we improve the best previously known regret bound for achieving ε-differential privacy in oblivious adversarial bandits from O(T^{2/3}/ε) to O(T^{1/2} ln T / ε). This is achieved by combining a Laplace mechanism with the standard EXP3 algorithm. In addition, we prove that EXP3 is already differentially private, but that it leaks an amount of information linear in T. However, we can improve the inherent privacy of EXP3 simply by relying on its intrinsic exponential mechanism for selecting actions. This allows us to reach O(ln T)-DP, with a regret of O(T^{2/3}) that holds against an adaptive adversary, an improvement over the previous best known bound of O(T^{3/4}). This is done by wrapping EXP3 with a mini-batch loop. Finally, we run experiments that clearly demonstrate the validity of our theoretical analysis.
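The Laplace-mechanism idea can be sketched as follows: perturb each importance-weighted reward estimate with Laplace noise before the exponential-weights update. This is only an illustration of the mechanism, with made-up learning rate, clipping, and example rewards; it is not the paper's exact algorithm or constants, and it shows only the Laplace variant, not the mini-batch wrapper used for the adaptive-adversary result.

```python
import math
import random

def private_exp3(reward_fn, n_arms, horizon, epsilon, eta=0.05, seed=0):
    """Sketch of EXP3 with a Laplace mechanism: each importance-weighted
    reward estimate is perturbed with Laplace(1/epsilon) noise before the
    exponential-weights update. Illustrative only."""
    rng = random.Random(seed)
    weights = [1.0] * n_arms
    picks = []
    for t in range(horizon):
        total = sum(weights)
        probs = [w / total for w in weights]
        arm = rng.choices(range(n_arms), weights=probs)[0]
        picks.append(arm)
        reward = reward_fn(t, arm)  # assumed to lie in [0, 1]
        # Laplace(1/epsilon) noise as a difference of two exponentials.
        noisy = reward + rng.expovariate(epsilon) - rng.expovariate(epsilon)
        est = noisy / probs[arm]            # importance-weighted estimate
        est = max(-50.0, min(50.0, est))    # clip for numerical stability
        weights[arm] *= math.exp(eta * est)
        m = max(weights)                    # renormalise to avoid overflow
        weights = [w / m for w in weights]
    return picks

# Usage: two arms; arm 1 pays more on average.
picks = private_exp3(lambda t, a: 0.9 if a == 1 else 0.1,
                     n_arms=2, horizon=500, epsilon=1.0)
```

Because the noisy estimates enter the weights only through the exponential update, EXP3's action distribution already behaves like an exponential mechanism; the mini-batch loop in the paper exploits exactly that intrinsic randomness to improve privacy without adding external noise.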
A link to download the papers will be made available soon.