Paper Title

Bayesian Optimization under Stochastic Delayed Feedback

Authors

Arun Verma, Zhongxiang Dai, Bryan Kian Hsiang Low

Abstract

Bayesian optimization (BO) is a widely-used sequential method for zeroth-order optimization of complex and expensive-to-compute black-box functions. The existing BO methods assume that the function evaluation (feedback) is available to the learner immediately or after a fixed delay. Such assumptions may not be practical in many real-life problems like online recommendations, clinical trials, and hyperparameter tuning where feedback is available after a random delay. To benefit from the experimental parallelization in these problems, the learner needs to start new function evaluations without waiting for delayed feedback. In this paper, we consider the BO under stochastic delayed feedback problem. We propose algorithms with sub-linear regret guarantees that efficiently address the dilemma of selecting new function queries while waiting for randomly delayed feedback. Building on our results, we also make novel contributions to batch BO and contextual Gaussian process bandits. Experiments on synthetic and real-life datasets verify the performance of our algorithms.
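To make the "select new queries while waiting for randomly delayed feedback" idea concrete, here is a minimal, self-contained sketch of a UCB-style BO loop in which each function evaluation returns after a random delay, and the learner updates its Gaussian process posterior only on feedback that has already arrived. This is an illustration of the general setting, not the paper's algorithm: the RBF kernel, the geometric delay distribution, the grid domain, and the exploration parameter `beta` are all assumptions chosen for the example.

```python
import numpy as np

def rbf_kernel(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(grid, x_obs, y_obs, noise=1e-3):
    """GP posterior mean/std on the grid, using only arrived feedback."""
    if len(x_obs) == 0:
        return np.zeros_like(grid), np.ones_like(grid)
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(grid, x_obs)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_obs
    var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def bo_delayed(f, rounds=30, beta=2.0, seed=0):
    """UCB-style BO loop: a new query is issued every round without
    waiting for feedback, which arrives after a random (here geometric)
    delay. Illustrative sketch, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 101)
    pending = []                  # (x, noisy y, round when feedback arrives)
    x_obs, y_obs, queries = [], [], []
    for t in range(rounds):
        # Harvest feedback whose random delay has elapsed.
        for x, y, _ in [p for p in pending if p[2] <= t]:
            x_obs.append(x)
            y_obs.append(y)
        pending = [p for p in pending if p[2] > t]
        # Choose the next query from the posterior over arrived data only.
        mu, sd = gp_posterior(grid, np.array(x_obs), np.array(y_obs))
        x_next = float(grid[np.argmax(mu + beta * sd)])
        queries.append(x_next)
        delay = rng.geometric(0.5)  # stochastic delay before feedback arrives
        pending.append((x_next, f(x_next) + 0.01 * rng.standard_normal(),
                        t + delay))
    return queries
```

For example, running `bo_delayed(lambda x: -(x - 0.7) ** 2)` concentrates queries near the maximizer at 0.7 even though each evaluation's feedback arrives one or more rounds late; the key design point is that the posterior is rebuilt each round from arrived observations only, while in-flight queries simply remain pending.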
