Paper Title
Adversarial Estimators
Paper Authors
Paper Abstract
We develop an asymptotic theory of adversarial estimators ('A-estimators'). They generalize maximum-likelihood-type estimators ('M-estimators') as their average objective is maximized by some parameters and minimized by others. This class subsumes the continuous-updating Generalized Method of Moments, Generative Adversarial Networks and more recent proposals in machine learning and econometrics. In these examples, researchers state which aspects of the problem may in principle be used for estimation, and an adversary learns how to emphasize them optimally. We derive the convergence rates of A-estimators under pointwise and partial identification, and the normality of functionals of their parameters. Unknown functions may be approximated via sieves such as deep neural networks, for which we provide simplified low-level conditions. As a corollary, we obtain the normality of neural-net M-estimators, overcoming technical issues previously identified by the literature. Our theory yields novel results about a variety of A-estimators, providing intuition and formal justification for their success in recent applications.
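To fix ideas, here is a minimal sketch of the min-max structure the abstract describes, in illustrative notation: the loss $\ell$, parameter of interest $\theta$, and adversarial parameter $\lambda$ are placeholders rather than the paper's own symbols. An M-estimator maximizes a sample average over a single parameter, whereas an A-estimator lets a second, adversarial parameter work against the first:
\[
\hat\theta_{\mathrm{M}} \in \arg\max_{\theta \in \Theta} \frac{1}{n}\sum_{i=1}^{n} m(\theta; X_i),
\qquad
\bigl(\hat\theta_{\mathrm{A}}, \hat\lambda\bigr) \in \arg\min_{\theta \in \Theta}\,\max_{\lambda \in \Lambda}\, \frac{1}{n}\sum_{i=1}^{n} \ell(\theta, \lambda; X_i).
\]
When $\Lambda$ is a singleton the inner maximization is vacuous and the A-estimator reduces (up to sign) to an M-estimator. As one concrete instance, continuous-updating GMM with sample moments $\bar g_n(\theta)$ and positive-definite weighting matrix $\hat W_n(\theta)$ admits the saddle-point form
\[
\hat\theta \in \arg\min_{\theta}\,\max_{\lambda}\, \Bigl[\, 2\lambda'\bar g_n(\theta) - \lambda'\hat W_n(\theta)\lambda \,\Bigr],
\]
since the inner maximizer $\lambda^{*} = \hat W_n(\theta)^{-1}\bar g_n(\theta)$ recovers the familiar objective $\bar g_n(\theta)'\hat W_n(\theta)^{-1}\bar g_n(\theta)$; here the adversary $\lambda$ literally learns how to emphasize the moments that are hardest to satisfy. This derivation is standard and offered only as intuition for the class of estimators the abstract covers, not as the paper's own construction.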