Paper Title


Decentralized State-Dependent Markov Chain Synthesis with an Application to Swarm Guidance

Paper Authors

Samet Uzun, Nazim Kemal Ure, Behcet Acikmese

Paper Abstract


This paper introduces a decentralized state-dependent Markov chain synthesis (DSMC) algorithm for finite-state Markov chains. We present a state-dependent consensus protocol that achieves exponential convergence under mild technical conditions, without relying on any connectivity assumptions regarding the dynamic network topology. Utilizing the proposed consensus protocol, we develop the DSMC algorithm, updating the Markov matrix based on the current state while ensuring the convergence conditions of the consensus protocol. This result establishes the desired steady-state distribution for the resulting Markov chain, ensuring exponential convergence from all initial distributions while adhering to transition constraints and minimizing state transitions. The DSMC's performance is demonstrated through a probabilistic swarm guidance example, which interprets the spatial distribution of a swarm comprising a large number of mobile agents as a probability distribution and utilizes the Markov chain to compute transition probabilities between states. Simulation results demonstrate faster convergence for the DSMC-based algorithm when compared to previous Markov-chain-based swarm guidance algorithms.
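As background for the swarm guidance setting the abstract describes: the swarm's spatial distribution over a set of discrete states (bins) is treated as a probability vector x[k], and a column-stochastic Markov matrix M propagates it via x[k+1] = M x[k] toward a desired steady-state distribution. The sketch below illustrates that pipeline only; it is not the paper's DSMC synthesis (which is decentralized and state-dependent) but a standard Metropolis-Hastings construction used as a stand-in, with illustrative names (metropolis_chain, adj, v).

```python
import numpy as np

def metropolis_chain(adj, v):
    """Column-stochastic M with M @ v = v, moving only along edges of adj.

    Standard Metropolis-Hastings construction (a stand-in for the paper's
    synthesis step): propose a uniform hop to a neighbor, accept with a
    ratio that enforces detailed balance with respect to v.
    """
    n = len(v)
    deg = adj.sum(axis=1)              # number of neighbors per state
    M = np.zeros((n, n))
    for j in range(n):                 # column j: transitions out of state j
        for i in range(n):
            if i != j and adj[i, j]:
                q = 1.0 / deg[j]       # proposal probability j -> i
                a = min(1.0, (v[i] * deg[j]) / (v[j] * deg[i]))  # acceptance
                M[i, j] = q * a
        M[j, j] = 1.0 - M[:, j].sum()  # stay put with leftover probability
    return M

# 5 states on a line graph; agents may only hop between adjacent bins,
# mimicking the transition constraints mentioned in the abstract.
adj = (np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)).astype(bool)
v = np.array([0.1, 0.2, 0.4, 0.2, 0.1])  # desired steady-state distribution
M = metropolis_chain(adj, v)

x = np.array([1.0, 0, 0, 0, 0])          # all agents start in state 0
for _ in range(200):
    x = M @ x                            # swarm density propagation
print(np.round(x, 3))                    # approaches v
```

The line-graph adjacency here plays the role of the motion constraints in the abstract (mass moves only between physically adjacent bins), and the leftover probability on the diagonal keeps agents in place, which relates to the stated goal of minimizing unnecessary state transitions.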
