Paper Title
Unsupervised Pre-Training on Patient Population Graphs for Patient-Level Predictions
Paper Authors
Paper Abstract
Pre-training has shown success in different areas of machine learning, such as Computer Vision (CV), Natural Language Processing (NLP), and medical imaging. However, it has not been fully explored for clinical data analysis. Even though an immense amount of Electronic Health Record (EHR) data is recorded, data and labels can be scarce when the data is collected in small hospitals or concerns rare diseases. In such scenarios, pre-training on a larger set of EHR data could improve model performance. In this paper, we apply unsupervised pre-training to heterogeneous, multi-modal EHR data for patient outcome prediction. To model this data, we leverage graph deep learning over population graphs. We first design a network architecture based on a graph transformer, designed to handle the various input feature types occurring in EHR data, such as continuous, discrete, and time-series features, allowing better multi-modal data fusion. Further, we design pre-training methods based on masked imputation to pre-train our network before fine-tuning on different end tasks. Pre-training is done in a fully unsupervised fashion, which lays the groundwork for pre-training on large public datasets with different tasks and similar modalities in the future. We test our method on two medical datasets of patient records, TADPOLE and MIMIC-III, which include imaging and non-imaging features and different prediction tasks. We find that our proposed graph-based pre-training method helps in modeling the data at a population level and further improves performance on the fine-tuning tasks in terms of AUC, on average by 4.15% for MIMIC and 7.64% for TADPOLE.
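To make the masked-imputation pre-training idea concrete, below is a minimal PyTorch sketch, not the authors' implementation. It assumes a population graph whose nodes are patients, a graph-transformer variant where self-attention is restricted to graph neighbors via an attention mask, a 15% masking ratio, and purely continuous features reconstructed with an MSE loss; all class names (`GraphTransformerLayer`, `MaskedImputationPretrainer`), dimensions, and the random toy graph are illustrative assumptions. The paper's actual architecture additionally handles discrete and time-series features.

```python
import torch
import torch.nn as nn

class GraphTransformerLayer(nn.Module):
    """Self-attention over patient nodes, restricted to graph neighbors
    via an attention mask (one common graph-transformer variant; the
    paper's exact layer may differ)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ReLU(),
                                nn.Linear(2 * dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x, attn_mask):
        # x: (1, num_patients, dim); attn_mask: True = attention blocked
        h, _ = self.attn(x, x, x, attn_mask=attn_mask)
        x = self.norm1(x + h)
        return self.norm2(x + self.ff(x))

class MaskedImputationPretrainer(nn.Module):
    """Embeds per-patient features, runs graph-transformer layers over
    the population graph, and reconstructs the masked feature values."""
    def __init__(self, num_feats, dim=64, depth=2):
        super().__init__()
        self.embed = nn.Linear(num_feats, dim)
        self.layers = nn.ModuleList(
            GraphTransformerLayer(dim) for _ in range(depth))
        self.decode = nn.Linear(dim, num_feats)  # imputation head

    def forward(self, feats, adj, mask):
        x = feats.masked_fill(mask, 0.0)   # hide the masked entries
        x = self.embed(x).unsqueeze(0)
        attn_mask = adj == 0               # attend along graph edges only
        for layer in self.layers:
            x = layer(x, attn_mask)
        return self.decode(x.squeeze(0))

# Toy pre-training loop: 32 patients, 10 continuous features, and a
# random patient-similarity graph (self-loops keep attention well-defined).
torch.manual_seed(0)
feats = torch.randn(32, 10)
adj = (torch.rand(32, 32) < 0.2) | torch.eye(32, dtype=torch.bool)
model = MaskedImputationPretrainer(num_feats=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    mask = torch.rand_like(feats) < 0.15           # mask 15% of entries
    pred = model(feats, adj, mask)
    loss = ((pred - feats)[mask] ** 2).mean()      # loss on masked values only
    opt.zero_grad(); loss.backward(); opt.step()
```

After this unsupervised stage, one would typically replace the imputation head with a task-specific head and fine-tune the encoder on the labeled end task, mirroring the pre-train/fine-tune split described in the abstract.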