Paper Title
Red Light Green Light Method for Solving Large Markov Chains
Paper Authors
Paper Abstract
Discrete-time discrete-state finite Markov chains are versatile mathematical models for a wide range of real-life stochastic processes. One of the most common tasks in the study of Markov chains is the computation of the stationary distribution. Without loss of generality, and drawing our motivation from applications to large networks, we interpret this problem as one of computing the stationary distribution of a random walk on a graph. We propose a new controlled, easily distributed algorithm for this task, briefly summarized as follows: at the beginning, each node receives a fixed amount of cash (positive or negative), and at each iteration, some nodes receive a `green light' to distribute their wealth or debt proportionally to the transition probabilities of the Markov chain; the stationary probability of a node is computed as the ratio of the cash distributed by this node to the total cash distributed by all nodes together. Our method includes as special cases a wide range of known, very different, and previously disconnected methods, including power iteration, Gauss-Southwell, and online distributed algorithms. We prove exponential convergence of our method, demonstrate its high efficiency, and derive scheduling strategies for the green light that achieve convergence rates faster than state-of-the-art algorithms.
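The following Python snippet is a minimal sketch of the cash-distribution idea described in the abstract, not the authors' implementation. It assumes a row-stochastic transition matrix `P`, a uniform initial cash allocation, and a greedy green-light schedule (give the green light to the node holding the largest absolute amount of cash, in the spirit of Gauss-Southwell); the initialization and the schedule are illustrative assumptions, as the paper studies several scheduling strategies.

```python
import numpy as np

def rlgl_stationary_sketch(P, num_steps=10_000):
    """Estimate the stationary distribution of a Markov chain with
    transition matrix P (row-stochastic, shape n x n) by repeatedly
    letting one node distribute its cash along its outgoing transitions.
    """
    n = P.shape[0]
    cash = np.full(n, 1.0 / n)   # initial cash at every node (assumption: uniform)
    history = np.zeros(n)        # total cash each node has distributed so far

    for _ in range(num_steps):
        i = int(np.argmax(np.abs(cash)))  # green light: node with the most cash/debt
        c = cash[i]
        history[i] += c                   # record the amount this node distributes
        cash[i] = 0.0
        cash += c * P[i]                  # pass the cash to neighbors proportionally to P

    # stationary estimate: cash distributed by a node / total cash distributed
    return history / history.sum()

# Example on a small 3-state chain
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.6, 0.4, 0.0]])
print(rlgl_stationary_sketch(P))
```

With a uniform (all-positive) initialization this greedy schedule behaves like a Gauss-Southwell-type update; other choices of which nodes get the green light recover the other special cases mentioned in the abstract, such as power iteration (all nodes at once) and online distributed variants (randomly chosen nodes).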