Paper Title

Deep neural network expressivity for optimal stopping problems

Paper Authors

Gonon, Lukas

Paper Abstract

This article studies deep neural network expression rates for optimal stopping problems of discrete-time Markov processes on high-dimensional state spaces. A general framework is established in which the value function and continuation value of an optimal stopping problem can be approximated with error at most $\varepsilon$ by a deep ReLU neural network of size at most $\kappa d^{\mathfrak{q}} \varepsilon^{-\mathfrak{r}}$. The constants $\kappa, \mathfrak{q}, \mathfrak{r} \geq 0$ do not depend on the dimension $d$ of the state space or the approximation accuracy $\varepsilon$. This proves that deep neural networks do not suffer from the curse of dimensionality when employed to solve optimal stopping problems. The framework covers, for example, exponential Lévy models, discrete diffusion processes and their running minima and maxima. These results mathematically justify the use of deep neural networks for numerically solving optimal stopping problems and pricing American options in high dimensions.
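For orientation, the value function and continuation value mentioned in the abstract are the objects appearing in the standard dynamic-programming (backward induction) formulation of a discrete-time optimal stopping problem; the notation below is illustrative and may differ from the paper's exact setting:

$$
V_N(x) = g(x), \qquad
c_n(x) = \mathbb{E}\!\left[\, V_{n+1}(X_{n+1}) \,\middle|\, X_n = x \,\right], \qquad
V_n(x) = \max\{\, g(x),\; c_n(x) \,\}, \quad n = N-1, \dots, 0,
$$

where $(X_n)_{n=0,\dots,N}$ is the Markov process on the $d$-dimensional state space and $g$ is the payoff function. The expression-rate result asserts that each $V_n$ and $c_n$ can be approximated to accuracy $\varepsilon$ by a deep ReLU network whose size grows at most polynomially in $d$ and $\varepsilon^{-1}$.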
