Paper Title

Adapting to game trees in zero-sum imperfect information games

Authors

Côme Fiegel, Pierre Ménard, Tadashi Kozuno, Rémi Munos, Vianney Perchet, Michal Valko

Abstract

Imperfect information games (IIGs) are games in which each player only partially observes the current game state. We study how to learn $ε$-optimal strategies in a zero-sum IIG through self-play with trajectory feedback. We give a problem-independent lower bound $\widetilde{\mathcal{O}}(H(A_{\mathcal{X}}+B_{\mathcal{Y}})/ε^2)$ on the number of realizations required to learn these strategies with high probability, where $H$ is the length of the game and $A_{\mathcal{X}}$ and $B_{\mathcal{Y}}$ are the total numbers of actions for the two players. We also propose two Follow the Regularized Leader (FTRL) algorithms for this setting: Balanced FTRL, which matches this lower bound but requires knowledge of the information set structure beforehand to define the regularization; and Adaptive FTRL, which needs $\widetilde{\mathcal{O}}(H^2(A_{\mathcal{X}}+B_{\mathcal{Y}})/ε^2)$ realizations without this requirement, by progressively adapting the regularization to the observations.
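The paper's algorithms are beyond the scope of a short sketch, but the FTRL template they instantiate can be illustrated on a single decision point. The following is a minimal sketch assuming a negative-entropy regularizer over the probability simplex (under which the FTRL update reduces to exponential weights); the function name, the fixed learning rate, and the loss interface are illustrative assumptions, not the paper's Balanced or Adaptive variants.

```python
import numpy as np

def ftrl_neg_entropy(loss_fn, n_actions, n_rounds, eta=0.5):
    """FTRL sketch with a negative-entropy regularizer (illustrative, not
    the paper's algorithm). At each round, play
        x_t = argmin_x  <L_{t-1}, x> + (1/eta) * sum_a x_a log x_a
    over the simplex, where L_{t-1} is the cumulative loss vector.
    This argmin has the closed form x_t ∝ exp(-eta * L_{t-1})."""
    cum_loss = np.zeros(n_actions)
    strategies = []
    for t in range(n_rounds):
        logits = -eta * cum_loss
        logits -= logits.max()          # shift for numerical stability
        x = np.exp(logits)
        x /= x.sum()                    # normalize onto the simplex
        strategies.append(x)
        cum_loss += loss_fn(t, x)       # observe this round's loss vector
    return np.array(strategies)
```

With a fixed loss vector, the iterates concentrate on the lowest-loss action, which is the regret-minimizing behavior the FTRL family is built around; the paper's contribution lies in choosing and adapting the regularization across the information sets of the game tree.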
