Paper Title


Transfer Learning with Uncertainty Quantification: Random Effect Calibration of Source to Target (RECaST)

Authors

Jimmy Hickey, Jonathan P. Williams, Emily C. Hector

Abstract


Transfer learning uses a data model, trained to make predictions or inferences on data from one population, to make reliable predictions or inferences on data from another population. Most existing transfer learning approaches are based on fine-tuning pre-trained neural network models, and fail to provide crucial uncertainty quantification. We develop a statistical framework for model predictions based on transfer learning, called RECaST. The primary mechanism is a Cauchy random effect that recalibrates a source model to a target population; we mathematically and empirically demonstrate the validity of our RECaST approach for transfer learning between linear models, in the sense that prediction sets will achieve their nominal stated coverage, and we numerically illustrate the method's robustness to asymptotic approximations for nonlinear models. Whereas many existing techniques are built on particular source models, RECaST is agnostic to the choice of source model. For example, our RECaST transfer learning approach can be applied to a continuous or discrete data model with linear or logistic regression, deep neural network architectures, etc. Furthermore, RECaST provides uncertainty quantification for predictions, which is mostly absent in the literature. We examine our method's performance in a simulation study and in an application to real hospital data.
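To make the recalibration mechanism concrete, here is a minimal illustrative sketch in Python of the general idea the abstract describes: a frozen source model's predictions are rescaled by a Cauchy-distributed random effect fitted on a small target sample, and prediction sets are formed by Monte Carlo sampling. This is an assumption-laden sketch, not the paper's exact estimation procedure; the simulated data, the Monte Carlo likelihood approximation, and all variable names are inventions for illustration only.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulate a large source sample and a small, shifted target sample.
n_src, n_tgt, p = 500, 40, 3
w_src = np.array([1.0, -2.0, 0.5])
X_src = rng.normal(size=(n_src, p))
y_src = X_src @ w_src + rng.normal(scale=1.0, size=n_src)
w_tgt = w_src * 1.3 + 0.2                      # target coefficients differ
X_tgt = rng.normal(size=(n_tgt, p))
y_tgt = X_tgt @ w_tgt + rng.normal(scale=1.0, size=n_tgt)

# 1. Fit the source model once (ordinary least squares) and freeze it.
w_hat, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

def f_src(X):
    """Frozen source predictor applied to target features."""
    return X @ w_hat

# 2. Recalibrate: model y = beta * f_src(x) + eps with a Cauchy random
#    effect beta ~ Cauchy(loc, scale) and Gaussian noise eps.  The
#    marginal likelihood is approximated by Monte Carlo over beta.
def neg_loglik(theta, X, y, n_mc=2000):
    loc, log_scale, log_sigma = theta
    scale, sigma = np.exp(log_scale), np.exp(log_sigma)
    betas = stats.cauchy.rvs(loc, scale, size=n_mc,
                             random_state=np.random.RandomState(1))
    preds = np.outer(betas, f_src(X))           # (n_mc, n_tgt)
    dens = stats.norm.pdf(y, loc=preds, scale=sigma).mean(axis=0)
    return -np.sum(np.log(dens + 1e-300))

res = optimize.minimize(neg_loglik, x0=[1.0, np.log(0.1), 0.0],
                        args=(X_tgt, y_tgt), method="Nelder-Mead")
loc, scale, sigma = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])

# 3. Prediction set for a new target point: propagate the uncertainty in
#    both the random effect and the noise by sampling.
def prediction_interval(x_new, alpha=0.05, n_draw=20000):
    betas = stats.cauchy.rvs(loc, scale, size=n_draw,
                             random_state=np.random.RandomState(2))
    noise = stats.norm.rvs(scale=sigma, size=n_draw,
                           random_state=np.random.RandomState(3))
    draws = betas * float(x_new @ w_hat) + noise
    return np.quantile(draws, [alpha / 2, 1 - alpha / 2])

lo, hi = prediction_interval(X_tgt[0])
print(f"95% prediction interval for first target point: [{lo:.2f}, {hi:.2f}]")
```

Note the design choice the abstract motivates: the source model is never refit, so the same recalibration step applies whether `f_src` is a linear model, a logistic regression score, or a deep network's output, and the sampled prediction set carries the uncertainty quantification that fine-tuning approaches lack.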
