Paper Title

Learning Theory for Estimation of Animal Motion Submanifolds

Paper Authors

Nathan Powell, Andrew Kurdila

Paper Abstract

This paper describes the formulation and experimental testing of a novel method for the estimation and approximation of submanifold models of animal motion. It is assumed that the animal motion is supported on a configuration manifold $Q$ that is a smooth, connected, regularly embedded Riemannian submanifold of Euclidean space $X\approx \mathbb{R}^d$ for some $d>0$, and that the manifold $Q$ is homeomorphic to a known smooth, Riemannian manifold $S$. Estimation of the manifold is achieved by finding an unknown mapping $γ:S\rightarrow Q\subset X$ that maps the manifold $S$ into $Q$. The overall problem is cast as a distribution-free learning problem over the manifold of measurements $\mathbb{Z}=S\times X$. That is, it is assumed that experiments generate a finite set $\{(s_i,x_i)\}_{i=1}^m\subset \mathbb{Z}^m$ of samples that are generated according to an unknown probability density $μ$ on $\mathbb{Z}$. This paper derives approximations $γ_{n,m}$ of $γ$ that are based on the $m$ samples and are contained in an $N(n)$-dimensional space of approximants. The paper defines sufficient conditions showing that the rates of convergence in $L^2_μ(S)$ correspond to those known for classical distribution-free learning theory over Euclidean space. Specifically, the paper derives sufficient conditions that guarantee rates of convergence of the form $$\mathbb{E} \left (\|γ_μ^j-γ_{n,m}^j\|_{L^2_μ(S)}^2\right )\leq C_1 N(n)^{-r} + C_2 \frac{N(n)\log(N(n))}{m}$$ for constants $C_1,C_2$, with $γ_μ:=\{γ^1_μ,\ldots,γ^d_μ\}$ the regressor function $γ_μ:S\rightarrow Q\subset X$ and $γ_{n,m}:=\{γ^1_{n,m},\ldots,γ^d_{n,m}\}$.
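The regression setup in the abstract can be illustrated with a minimal sketch. This is not the authors' method; it is a toy example under invented assumptions: $S$ is taken to be the unit circle $S^1$, $Q$ a warped circle embedded in $X=\mathbb{R}^2$, the density $μ$ is uniform, the $N(n)$-dimensional approximation space is a truncated Fourier basis (so $N(n)=2n+1$), and each coordinate $γ^j_{n,m}$ is fit by least squares from $m$ noisy samples $(s_i,x_i)$. The mapping `gamma`, the noise level, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma(theta):
    # Hypothetical "true" mapping gamma: S^1 -> Q subset R^2 (a warped circle),
    # with componentwise regressors gamma = (gamma^1, gamma^2).
    return np.stack([np.cos(theta) + 0.3 * np.cos(3 * theta),
                     np.sin(theta) + 0.3 * np.sin(3 * theta)], axis=1)

# m samples (s_i, x_i): parameters on S^1 plus noisy measurements in X = R^2.
m = 500
theta = rng.uniform(0.0, 2.0 * np.pi, m)
x = gamma(theta) + 0.05 * rng.standard_normal((m, 2))

# N(n)-dimensional approximation space: truncated Fourier basis on S^1,
# so N(n) = 2n + 1 basis functions.
n = 5
def basis(theta):
    cols = [np.ones_like(theta)]
    for k in range(1, n + 1):
        cols += [np.cos(k * theta), np.sin(k * theta)]
    return np.stack(cols, axis=1)  # shape (len(theta), 2n + 1)

# Least-squares estimate gamma_{n,m}^j for each coordinate j, all at once.
B = basis(theta)
coef, *_ = np.linalg.lstsq(B, x, rcond=None)

# Empirical L^2-type error of the estimate on a dense test grid.
tgrid = np.linspace(0.0, 2.0 * np.pi, 200)
err = np.mean((basis(tgrid) @ coef - gamma(tgrid)) ** 2)
print(f"mean squared error: {err:.5f}")
```

With the target's harmonics (1 and 3) inside the chosen basis, the approximation error term $C_1 N(n)^{-r}$ vanishes and only the sampling term proportional to $N(n)/m$ remains, so the error is small; shrinking $n$ below 3 or $m$ toward $N(n)$ shows the two terms of the bound trading off.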
