Paper Title


ProtoX: Explaining a Reinforcement Learning Agent via Prototyping

Authors

Ronilo J. Ragodos, Tong Wang, Qihang Lin, Xun Zhou

Abstract


While deep reinforcement learning has proven to be successful in solving control tasks, the "black-box" nature of an agent has raised increasing concern. We propose a prototype-based post-hoc policy explainer, ProtoX, that explains a black-box agent by prototyping the agent's behaviors into scenarios, each represented by a prototypical state. When learning prototypes, ProtoX considers both visual similarity and scenario similarity. The latter is unique to the reinforcement learning context, since it explains why the same action is taken in visually different states. To teach ProtoX about visual similarity, we pre-train an encoder using self-supervised contrastive learning to recognize states as similar if they occur close together in time and receive the same action from the black-box agent. We then add an isometry layer to allow ProtoX to adapt scenario similarity to the downstream task. ProtoX is trained via imitation learning using behavior cloning, and thus requires no access to the environment or agent. In addition to explanation fidelity, we design different prototype-shaping terms in the objective function to encourage better interpretability. We conduct various experiments to test ProtoX. Results show that ProtoX achieves high fidelity to the original black-box agent while providing meaningful and understandable explanations.
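To make the abstract's mechanism concrete, here is a minimal, hypothetical sketch of the two ideas it names: (1) the contrastive pre-training rule that treats two states as a positive pair when they are close in time and receive the same black-box action, and (2) prototype-based inference, where a state's action is explained by its similarity to prototypical states. All names, the linear stand-in encoder, and the similarity function are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8   # illustrative sizes, not from the paper
N_ACTIONS = 3
N_PROTOS = 6

def is_positive_pair(t_i, t_j, a_i, a_j, window=3):
    """Contrastive positives (per the abstract): states close in time
    that receive the same action from the black-box agent."""
    return abs(t_i - t_j) <= window and a_i == a_j

# Stand-in for the pre-trained contrastive encoder (a real one would
# be a neural network trained on the positive pairs above).
W = rng.normal(size=(LATENT_DIM, LATENT_DIM))
def encode(state):
    return W @ state

# Each prototype is a latent "scenario" paired with the action it represents.
proto_z = rng.normal(size=(N_PROTOS, LATENT_DIM))
proto_action = rng.integers(0, N_ACTIONS, size=N_PROTOS)

def explain(state):
    """Predict an action and return the most similar prototype,
    which serves as the explanation for that prediction."""
    z = encode(state)
    d2 = ((proto_z - z) ** 2).sum(axis=1)        # squared latent distances
    sim = np.log((d2 + 1.0) / (d2 + 1e-4))       # similarity: large when close
    logits = np.zeros(N_ACTIONS)
    for s, a in zip(sim, proto_action):          # each prototype votes for
        logits[a] += s                           # its own action
    return int(np.argmax(logits)), int(np.argmax(sim))

action, proto_idx = explain(rng.normal(size=LATENT_DIM))
print(f"predicted action {action}, explained by prototype {proto_idx}")
```

The explanation for a decision is then simply the prototypical state(s) most similar to the current state, which is what makes the policy human-readable. Training the prototypes via behavior cloning against the black-box agent's actions, and the isometry layer and prototype-shaping terms, are omitted from this sketch.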
