Paper Title
Generalizing Complex/Hyper-complex Convolutions to Vector Map Convolutions
Paper Authors
Paper Abstract
We show that the core reasons complex- and hypercomplex-valued neural networks offer improvements over their real-valued counterparts are the weight-sharing mechanism and the treatment of multidimensional data as a single entity. Their algebra linearly combines the dimensions, making each dimension related to the others. However, both are constrained to a fixed number of dimensions: two for complex numbers and four for quaternions. Here we introduce novel vector map convolutions, which capture both of these properties provided by complex/hypercomplex convolutions while dropping the unnatural dimensionality constraints they impose. This is achieved by introducing a system that mimics the unique linear combination of input dimensions, such as the Hamilton product for quaternions. We perform three experiments to show that these novel vector map convolutions appear to capture all the benefits of complex and hypercomplex networks, such as their ability to capture internal latent relations, while avoiding the dimensionality restriction.
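To make the idea of generalizing the Hamilton product's weight sharing to an arbitrary number of dimensions concrete, the following is a minimal NumPy sketch. It assumes one plausible scheme, not necessarily the paper's: a bank of N shared kernels cycled across the input dimensions and combined through a learnable N×N mixing matrix. The names (`vector_map_conv1d`, `kernels`, `L`) are illustrative only.

```python
# Minimal sketch of a "vector map" style 1-D convolution over N input dimensions.
# Assumption (not stated in the abstract): each output dimension is a learned
# linear combination of all input dimensions, each filtered with a kernel drawn
# from the same shared bank, analogous to how the Hamilton product reuses the
# four quaternion weight components across all four output components.
import numpy as np

def vector_map_conv1d(x, kernels, L):
    """x: (N, T) signal with N dimensions; kernels: (N, K) shared kernel bank;
    L: (N, N) learnable mixing matrix. Returns an (N, T-K+1) output."""
    N, T = x.shape
    K = kernels.shape[1]
    out = np.zeros((N, T - K + 1))
    for i in range(N):            # output dimension
        for j in range(N):        # input dimension
            k = kernels[(i + j) % N]                        # cyclic weight sharing
            filtered = np.convolve(x[j], k[::-1], mode="valid")
            out[i] += L[i, j] * filtered                    # learned linear combination
    return out

# Toy usage: 3-dimensional input (e.g. RGB channels treated as one entity), kernel size 5.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 32))
kernels = rng.standard_normal((3, 5))
L = rng.standard_normal((3, 3))
print(vector_map_conv1d(x, kernels, L).shape)  # (3, 28)
```

Setting N = 4 and fixing `L` to the signed pattern of the quaternion algebra would recover a Hamilton-product-like convolution, which is the sense in which such a construction removes the two- and four-dimensional constraints.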