Paper Title


Universality of Group Convolutional Neural Networks Based on Ridgelet Analysis on Groups

Paper Authors

Sho Sonoda, Isao Ishikawa, Masahiro Ikeda

Paper Abstract

We show the universality of depth-2 group convolutional neural networks (GCNNs) in a unified and constructive manner based on ridgelet theory. Despite their widespread use in applications, the approximation properties of (G)CNNs have not been well investigated. The universality of (G)CNNs has been shown since the late 2010s. Yet, our understanding of how (G)CNNs represent functions is incomplete, because past universality theorems have been shown in a case-by-case manner, by manually/carefully assigning the network parameters depending on the variety of convolution layers, and in an indirect manner, by converting/modifying the (G)CNNs into other universal approximators such as invariant polynomials and fully-connected networks. In this study, we formulate a versatile depth-2 continuous GCNN $S[γ]$ as a nonlinear mapping between group representations, and directly obtain an analysis operator, called the ridgelet transform, that maps a given function $f$ to the network parameter $γ$ so that $S[γ]=f$. The proposed GCNN covers typical GCNNs such as cyclic convolution on multi-channel images, networks on permutation-invariant inputs (Deep Sets), and $\mathrm{E}(n)$-equivariant networks. The closed-form expression of the ridgelet transform describes how the network parameters are organized to represent a function. While the ridgelet transform has been known only for fully-connected networks, this study is the first to obtain it for GCNNs. By discretizing the closed-form expression, we can systematically generate a constructive proof of the $cc$-universality of finite GCNNs. In other words, our universality proofs are more unified and constructive than previous ones.
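To make the "cyclic convolution" instance of the depth-2 GCNN concrete, here is a minimal NumPy sketch of a network of the form $S[γ]$ over the cyclic group $\mathbb{Z}_n$: a hidden layer of group convolutions followed by a pointwise nonlinearity and a sum over the group, which makes the output invariant to cyclic shifts of the input. This is an illustrative sketch, not the paper's construction; the ReLU activation, the single-channel setup, and all variable names are assumptions made for the example.

```python
import numpy as np

def cyclic_conv(x, w):
    # Group convolution over the cyclic group Z_n:
    # (w * x)[g] = sum_h w[h] * x[(g - h) mod n]
    n = len(x)
    return np.array([sum(w[h] * x[(g - h) % n] for h in range(n))
                     for g in range(n)])

def gcnn(x, W, b, a):
    # Depth-2 GCNN sketch: hidden layer of cyclic convolutions with
    # per-channel biases and ReLU, then a linear readout summed over
    # the group (the group sum makes the output shift-invariant).
    h = np.maximum(0.0, np.stack([cyclic_conv(x, w) for w in W]) - b[:, None])
    return float(np.sum(a[:, None] * h))

# Invariance check: cyclically shifting the input does not change
# the group-summed output.
rng = np.random.default_rng(0)
n, k = 6, 4                      # group size, hidden channels
x = rng.standard_normal(n)
W = rng.standard_normal((k, n))  # convolution filters (the "parameter" γ)
b = rng.standard_normal(k)
a = rng.standard_normal(k)

y1 = gcnn(x, W, b, a)
y2 = gcnn(np.roll(x, 2), W, b, a)
assert np.isclose(y1, y2)
```

Because the convolution is equivariant (shifting the input shifts the hidden features) and the readout sums over the whole group, the composed map is invariant, which is the structural property the abstract's "nonlinear mapping between group representations" formalizes in general.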
