Title
Extending Universal Approximation Guarantees: A Theoretical Justification for the Continuity of Real-World Learning Tasks
Authors
Abstract
Universal Approximation Theorems establish the density of various classes of neural network function approximators in $C(K, \mathbb{R}^m)$, where $K \subset \mathbb{R}^n$ is compact. In this paper, we aim to extend these guarantees by establishing conditions on learning tasks that ensure their continuity. We consider learning tasks given by conditional expectations $x \mapsto \mathrm{E}\left[Y \mid X = x\right]$, where the learning target $Y = f \circ L$ is a potentially pathological transformation of some underlying data-generating process $L$. Under a factorization $L = T \circ W$ of the data-generating process, in which $T$ is thought of as a deterministic map acting on some random input $W$, we establish conditions (which can be verified using knowledge of $T$ alone) that guarantee the continuity of practically \textit{any} derived learning task $x \mapsto \mathrm{E}\left[f \circ L \mid X = x\right]$. We motivate the realism of our conditions using the example of randomized stable matching, thus providing a theoretical justification for the continuity of real-world learning tasks.
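To make the abstract's setup concrete, here is a minimal illustrative sketch (not the paper's construction) of the factorization $L = T \circ W$ in the randomized stable matching example: $T$ is the deterministic Gale–Shapley map, $W$ is a random preference profile whose law depends on the conditioning variable $x$, $f$ is a discontinuous indicator transform, and the derived learning task $x \mapsto \mathrm{E}\left[f \circ L \mid X = x\right]$ is estimated by Monte Carlo. The specific names and the preference model (`sample_W`, `learning_task`, the tilt-by-$x$ rule) are hypothetical choices for illustration only.

```python
import random
from statistics import mean

def gale_shapley(prop_prefs, resp_prefs):
    """Deterministic map T: a preference profile -> the proposer-optimal
    stable matching, returned as match[j] = proposer paired with responder j."""
    n = len(prop_prefs)
    # rank[j][i]: how responder j ranks proposer i (lower is better)
    rank = [{i: r for r, i in enumerate(resp_prefs[j])} for j in range(n)]
    free, nxt, match = list(range(n)), [0] * n, [None] * n
    while free:
        i = free.pop()
        j = prop_prefs[i][nxt[i]]       # i proposes to favorite not yet tried
        nxt[i] += 1
        if match[j] is None:
            match[j] = i
        elif rank[j][i] < rank[j][match[j]]:
            free.append(match[j])       # j trades up; old partner is free again
            match[j] = i
        else:
            free.append(i)              # j rejects i

    return match

def sample_W(x, n, rng):
    """Random input W: a preference profile whose law depends on x in [0, 1]."""
    prop = []
    for i in range(n):
        prefs = list(range(n))
        rng.shuffle(prefs)
        if rng.random() < x:            # x tilts proposer i toward responder i
            prefs.remove(i)
            prefs.insert(0, i)
        prop.append(prefs)
    resp = [rng.sample(range(n), n) for _ in range(n)]
    return prop, resp

def f(match):
    """A 'pathological' (discontinuous) transform of the matching L = T(W)."""
    return float(match[0] == 0)         # indicator that pair (0, 0) is matched

def learning_task(x, n=4, n_samples=2000, seed=0):
    """Monte Carlo estimate of the derived task x -> E[f(T(W)) | X = x]."""
    rng = random.Random(seed)
    return mean(f(gale_shapley(*sample_W(x, n, rng))) for _ in range(n_samples))

if __name__ == "__main__":
    # Although f is a jump (an indicator), averaging over the randomness of W
    # can leave the map x -> E[f(L) | X = x] varying gradually in x.
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"x = {x:.2f}  ->  E[f(L) | X = x] ~ {learning_task(x):.3f}")
```

The sketch only exhibits the objects the abstract names; the paper's contribution is the set of conditions, checkable on the deterministic map $T$ alone, under which the derived task stays continuous even though $f$ itself is discontinuous.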