Paper Title

Private Online Prediction from Experts: Separations and Faster Rates

Authors

Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar

Abstract

Online prediction from experts is a fundamental problem in machine learning and several works have studied this problem under privacy constraints. We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries. For approximate differential privacy, our algorithms achieve regret bounds of $\tilde{O}(\sqrt{T \log d} + \log d/\varepsilon)$ for the stochastic setting and $\tilde{O}(\sqrt{T \log d} + T^{1/3} \log d/\varepsilon)$ for oblivious adversaries (where $d$ is the number of experts). For pure DP, our algorithms are the first to obtain sub-linear regret for oblivious adversaries in the high-dimensional regime $d \ge T$. Moreover, we prove new lower bounds for adaptive adversaries. Our results imply that unlike the non-private setting, there is a strong separation between the optimal regret for adaptive and non-adaptive adversaries for this problem. Our lower bounds also show a separation between pure and approximate differential privacy for adaptive adversaries where the latter is necessary to achieve the non-private $O(\sqrt{T})$ regret.
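For context, the non-private baseline the abstract compares against is the classic exponential-weights (Hedge) algorithm, which attains the $O(\sqrt{T \log d})$ regret mentioned above. Below is a minimal non-private sketch of Hedge for intuition only; the paper's private algorithms add differential-privacy machinery (e.g., noise injection) that is not shown here, and the loss sequence and learning rate are illustrative choices, not from the paper.

```python
import math
import random

def hedge_regret(loss_matrix, eta):
    """Run exponential weights (Hedge) on a T x d matrix of losses in [0, 1].

    Maintains a distribution over experts proportional to
    exp(-eta * cumulative loss) and returns the regret: the algorithm's
    expected total loss minus the total loss of the best fixed expert.
    """
    d = len(loss_matrix[0])
    cum_loss = [0.0] * d   # cumulative loss of each expert
    alg_loss = 0.0         # algorithm's expected cumulative loss
    for losses in loss_matrix:
        # Weights exp(-eta * cum_loss); subtract the min for numerical stability.
        m = min(cum_loss)
        weights = [math.exp(-eta * (c - m)) for c in cum_loss]
        z = sum(weights)
        p = [w / z for w in weights]
        # Expected loss of the randomized prediction this round.
        alg_loss += sum(pi * li for pi, li in zip(p, losses))
        cum_loss = [c + li for c, li in zip(cum_loss, losses)]
    return alg_loss - min(cum_loss)

# Illustrative run: T rounds, d experts, i.i.d. uniform losses (an easy,
# non-adversarial sequence chosen purely for demonstration).
random.seed(0)
T, d = 1000, 10
eta = math.sqrt(8 * math.log(d) / T)  # standard tuning for the sqrt(T log d) bound
loss_matrix = [[random.random() for _ in range(d)] for _ in range(T)]
regret = hedge_regret(loss_matrix, eta)
# The worst-case guarantee for this tuning is regret <= sqrt((T/2) * log d).
print(regret <= math.sqrt(0.5 * T * math.log(d)))
```

With this learning rate, Hedge's regret is bounded by $\sqrt{(T/2)\log d}$ for any loss sequence in $[0,1]$; the paper's contribution is achieving comparable rates under pure and approximate differential privacy.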
