Paper Title
Byzantine-Resilient Distributed Optimization of Multi-Dimensional Functions
Paper Authors
Paper Abstract
The problem of distributed optimization requires a group of agents to reach agreement on a parameter that minimizes the average of their local cost functions, using information received from their neighbors. While there are a variety of distributed optimization algorithms that can solve this problem, they are typically vulnerable to malicious (or ``Byzantine'') agents that do not follow the algorithm. Recent attempts to address this issue focus on single-dimensional functions or provide analysis under certain assumptions on the statistical properties of the functions at the agents. In this paper, we propose a resilient distributed optimization algorithm for multi-dimensional convex functions. Our scheme involves two filtering steps at each iteration of the algorithm: (1) distance-based and (2) component-wise removal of extreme states. We show that this algorithm can mitigate the impact of up to $F$ Byzantine agents in the neighborhood of each regular node, without knowing the identities of the Byzantine agents in advance. In particular, we show that if the network topology satisfies certain conditions, all of the regular states are guaranteed to asymptotically converge to a bounded region that contains the global minimizer.
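To make the two filtering steps concrete, below is a minimal Python sketch of one possible interpretation: `distance_filter` discards the $F$ neighbor states farthest (in Euclidean distance) from the agent's own state, and `componentwise_filter` trims the $F$ largest and $F$ smallest values in each coordinate before averaging. The function names, the trimmed-mean combination, and the omission of the local gradient step are illustrative assumptions for exposition, not the paper's exact update rule.

```python
import numpy as np


def distance_filter(own_state, neighbor_states, F):
    """Discard the F neighbor states farthest from the agent's own state
    (illustrative version of the distance-based filtering step)."""
    if len(neighbor_states) <= F:
        return []
    dists = [np.linalg.norm(x - own_state) for x in neighbor_states]
    keep_idx = np.argsort(dists)[: len(neighbor_states) - F]
    return [neighbor_states[i] for i in keep_idx]


def componentwise_filter(own_state, states, F):
    """For each coordinate, drop the F largest and F smallest values among
    the agent's own state and the remaining neighbor states, then average
    what is left (illustrative version of the component-wise filtering step)."""
    all_states = np.vstack([own_state] + list(states))
    filtered = np.zeros_like(own_state, dtype=float)
    for k in range(own_state.shape[0]):
        col = np.sort(all_states[:, k])
        trimmed = col[F: len(col) - F] if len(col) > 2 * F else col
        filtered[k] = trimmed.mean()
    return filtered


if __name__ == "__main__":
    # Hypothetical data: one agent's state and four received neighbor states,
    # one of which behaves like a Byzantine outlier.
    x_i = np.array([0.0, 0.0])
    received = [np.array([0.1, -0.2]), np.array([5.0, 5.0]),
                np.array([-0.3, 0.1]), np.array([0.2, 0.3])]
    kept = distance_filter(x_i, received, F=1)
    print(componentwise_filter(x_i, kept, F=1))
```

In a full resilient optimization iteration, the filtered value would typically be combined with a local (sub)gradient step on the agent's own cost function; that step, and the precise definition of "extreme" states, follow the conditions stated in the paper.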