Paper Title

NeuroView-RNN: It's About Time

Paper Authors

CJ Barberan, Sina Alemohammad, Naiming Liu, Randall Balestriero, Richard G. Baraniuk

Paper Abstract

Recurrent Neural Networks (RNNs) are important tools for processing sequential data such as time-series or video. Interpretability is defined as the ability to be understood by a person and is different from explainability, which is the ability to be explained in a mathematical formulation. A key interpretability issue with RNNs is that it is not clear how each hidden state per time step contributes to the decision-making process in a quantitative manner. We propose NeuroView-RNN as a family of new RNN architectures that explains how all the time steps are used for the decision-making process. Each member of the family is derived from a standard RNN architecture by concatenating the hidden states from every time step into a global linear classifier. The global linear classifier takes all the hidden states as input, so its weights map linearly to the hidden states. Hence, from the weights, NeuroView-RNN can quantify how important each time step is to a particular decision. As a bonus, NeuroView-RNN also offers higher accuracy in many cases compared to standard RNNs and their variants. We showcase the benefits of NeuroView-RNN by evaluating it on a multitude of diverse time-series datasets.
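
As a concrete illustration of the construction described in the abstract, the following is a minimal PyTorch sketch (not the authors' released code); the class name NeuroViewRNN, the vanilla nn.RNN backbone, and the time_step_importance helper are illustrative assumptions. It shows the hidden states of all time steps being concatenated into one global linear classifier, whose weight matrix can then be sliced per time step to quantify each step's contribution to a class logit.

```python
import torch
import torch.nn as nn

class NeuroViewRNN(nn.Module):
    """Minimal sketch of a NeuroView-style RNN: hidden states from all
    time steps are concatenated and fed to a single global linear
    classifier, so per-time-step contributions can be read off the
    classifier weights."""

    def __init__(self, input_dim, hidden_dim, seq_len, num_classes):
        super().__init__()
        # Standard RNN backbone (other family members could swap in GRU/LSTM).
        self.rnn = nn.RNN(input_dim, hidden_dim, batch_first=True)
        # Global linear classifier over the concatenated hidden states.
        self.classifier = nn.Linear(hidden_dim * seq_len, num_classes)
        self.hidden_dim = hidden_dim

    def forward(self, x):
        # x: (batch, seq_len, input_dim)
        hidden_states, _ = self.rnn(x)             # (batch, seq_len, hidden_dim)
        flat = hidden_states.flatten(start_dim=1)  # (batch, seq_len * hidden_dim)
        return self.classifier(flat)               # (batch, num_classes)

    def time_step_importance(self, x, target_class):
        """Contribution of each time step to the logit of `target_class`:
        the dot product between each hidden state and the matching slice
        of the classifier weight vector (a hypothetical helper)."""
        hidden_states, _ = self.rnn(x)                        # (batch, T, H)
        w = self.classifier.weight[target_class]              # (T * H,)
        w = w.view(-1, self.hidden_dim)                       # (T, H)
        return torch.einsum("bth,th->bt", hidden_states, w)   # (batch, T)
```

Because the classifier is linear in the concatenated hidden states, the logit for a class decomposes exactly into a sum of per-time-step terms, which is what makes the importance scores above well defined rather than a post-hoc approximation.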
