Paper Title

Automatically Bounding the Taylor Remainder Series: Tighter Bounds and New Applications

Paper Authors

Matthew Streeter and Joshua V. Dillon

Paper Abstract

We present a new algorithm for automatically bounding the Taylor remainder series. In the special case of a scalar function $f: \mathbb{R} \to \mathbb{R}$, our algorithm takes as input a reference point $x_0$, trust region $[a, b]$, and integer $k \ge 1$, and returns an interval $I$ such that $f(x) - \sum_{i=0}^{k-1} \frac {1} {i!} f^{(i)}(x_0) (x - x_0)^i \in I (x - x_0)^k$ for all $x \in [a, b]$. As in automatic differentiation, the function $f$ is provided to the algorithm in symbolic form, and must be composed of known atomic functions. At a high level, our algorithm has two steps. First, for a variety of commonly-used elementary functions (e.g., $\exp$, $\log$), we use recently-developed theory to derive sharp polynomial upper and lower bounds on the Taylor remainder series. We then recursively combine the bounds for the elementary functions using an interval arithmetic variant of Taylor-mode automatic differentiation. Our algorithm can make efficient use of machine learning hardware accelerators, and we provide an open source implementation in JAX. We then turn our attention to applications. Most notably, in a companion paper we use our new machinery to create the first universal majorization-minimization optimization algorithms: algorithms that iteratively minimize an arbitrary loss using a majorizer that is derived automatically, rather than by hand. We also show that our automatically-derived bounds can be used for verified global optimization and numerical integration, and to prove sharper versions of Jensen's inequality.
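To make the stated guarantee concrete, the sketch below works out the first step of the algorithm by hand for a single elementary function, f = exp, in JAX. It is an illustrative sketch, not the authors' open-source library: the helper names (taylor_enclosure_exp, scaled_remainder_exp) and the choices of x0, [a, b], and k are assumptions made for this example. For exp, the integral form of the Taylor remainder shows that the scaled remainder R_k(x) / (x - x0)^k is increasing in x, so evaluating it at the trust-region endpoints yields a sharp interval; the paper's full algorithm additionally propagates such enclosures through arbitrary compositions via an interval-arithmetic variant of Taylor-mode automatic differentiation.

```python
# Minimal illustrative sketch (not the authors' library): for f = exp, compute
# a sharp interval I such that
#     exp(x) - sum_{i<k} exp(x0)/i! * (x - x0)^i  in  I * (x - x0)^k
# for all x in a trust region [a, b].  The integral form of the remainder
# implies that the scaled remainder R_k(x) / (x - x0)^k is increasing in x for
# exp, so the sharp interval endpoints are attained at x = a and x = b.
import math

import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)


def taylor_poly_exp(x, x0, k):
    """Degree-(k-1) Taylor polynomial of exp around x0."""
    return sum(jnp.exp(x0) / math.factorial(i) * (x - x0) ** i for i in range(k))


def scaled_remainder_exp(x, x0, k):
    """R_k(x) / (x - x0)^k for f = exp, with the limit exp(x0)/k! as x -> x0."""
    remainder = jnp.exp(x) - taylor_poly_exp(x, x0, k)
    return jnp.where(x == x0,
                     jnp.exp(x0) / math.factorial(k),
                     remainder / (x - x0) ** k)


def taylor_enclosure_exp(x0, a, b, k):
    """Interval [lo, hi] containing R_k(x) / (x - x0)^k for every x in [a, b]."""
    # Monotonicity of the scaled remainder for exp makes the endpoints sharp.
    lo = scaled_remainder_exp(jnp.asarray(a), x0, k)
    hi = scaled_remainder_exp(jnp.asarray(b), x0, k)
    return lo, hi


# Usage: degree k = 2 enclosure of exp around x0 = 0 on the trust region [-1, 2],
# followed by a brute-force numerical check of the enclosure property.
x0, a, b, k = 0.0, -1.0, 2.0, 2
lo, hi = taylor_enclosure_exp(x0, a, b, k)          # roughly [0.368, 1.097]
xs = jnp.linspace(a, b, 1001)
remainder = jnp.exp(xs) - taylor_poly_exp(xs, x0, k)
w = (xs - x0) ** k
# Interval product I * (x - x0)^k, valid for either sign of (x - x0)^k.
lower, upper = jnp.minimum(lo * w, hi * w), jnp.maximum(lo * w, hi * w)
assert bool(jnp.all(remainder >= lower - 1e-12) & jnp.all(remainder <= upper + 1e-12))
print(f"I = [{float(lo):.4f}, {float(hi):.4f}]")
```

The numerical check at the end simply verifies, on a grid, the enclosure property stated in the abstract; the contribution of the paper is obtaining such enclosures automatically, in symbolic form, for arbitrary compositions of elementary functions rather than for a single hand-analyzed function.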
