Paper Title
Continual Learning for Peer-to-Peer Federated Learning: A Study on Automated Brain Metastasis Identification
Paper Authors
Paper Abstract
Due to data privacy constraints, data sharing among multiple centers is restricted. Continual learning, as one approach to peer-to-peer federated learning, can promote multicenter collaboration on deep learning algorithm development by sharing intermediate models instead of training data. This work aims to investigate the feasibility of continual learning for multicenter collaboration on an exemplary application of brain metastasis identification using DeepMedic. 920 contrast-enhanced T1 MRI volumes are split to simulate multicenter collaboration scenarios. A continual learning algorithm, synaptic intelligence (SI), is applied to preserve important model weights while training on one center after another. In a bilateral collaboration scenario, continual learning with SI achieves a sensitivity of 0.917, and naive continual learning without SI achieves a sensitivity of 0.906, while two models trained solely on internal data without continual learning achieve sensitivities of only 0.853 and 0.831. In a seven-center multilateral collaboration scenario, the models trained on internal datasets (100 volumes per center) without continual learning obtain a mean sensitivity of 0.699. With single-visit continual learning (i.e., the shared model visits each center only once during training), the sensitivity is improved to 0.788 without SI and 0.849 with SI. With iterative continual learning (i.e., the shared model revisits each center multiple times during training), the sensitivity is further improved to 0.914, which matches the sensitivity achieved when training on mixed data. Our experiments demonstrate that continual learning can improve brain metastasis identification performance for centers with limited data. This study demonstrates the feasibility of applying continual learning to peer-to-peer federated learning in multicenter collaboration.
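The core mechanism described above is the SI surrogate loss: as the shared model trains at one center, each weight accumulates a path-integral estimate of how much it contributed to reducing the loss; at the next center, weights with high importance are penalized for moving away from their previous values. A minimal sketch of this idea on a toy quadratic loss (all function names and hyperparameters here are illustrative, not the paper's actual DeepMedic setup):

```python
import numpy as np

def grad(theta, target):
    # Gradient of the toy per-center loss 0.5 * ||theta - target||^2
    return theta - target

def train_center(theta, target, lr=0.1, steps=100, omega_acc=None):
    """Plain SGD on one center's loss while accumulating the SI path
    integral: omega_acc[k] += -g_k * delta_theta_k at every step."""
    if omega_acc is None:
        omega_acc = np.zeros_like(theta)
    for _ in range(steps):
        g = grad(theta, target)
        delta = -lr * g           # SGD update
        omega_acc += -g * delta   # per-weight contribution to loss decrease
        theta = theta + delta
    return theta, omega_acc

def si_importance(omega_acc, theta_new, theta_old, xi=1e-3):
    # Omega_k = omega_k / ((theta_new_k - theta_old_k)^2 + xi);
    # xi avoids division by zero for weights that barely moved.
    return omega_acc / ((theta_new - theta_old) ** 2 + xi)

def si_penalty(theta, theta_star, Omega, c=0.1):
    # Surrogate loss added to the next center's objective:
    # c * sum_k Omega_k * (theta_k - theta_star_k)^2
    return c * np.sum(Omega * (theta - theta_star) ** 2)

# Training at the first center, then computing importances for the next one.
theta0 = np.zeros(3)
theta1, omega = train_center(theta0, target=np.array([1.0, 0.0, 2.0]))
Omega = si_importance(omega, theta1, theta0)
# Weights that moved a lot and reduced the loss get a large Omega, so the
# penalty discourages the next center from overwriting them.
print(si_penalty(theta1 + 0.5, theta1, Omega))
```

In the multicenter scenario from the abstract, this penalty is what distinguishes "continual learning with SI" from naive continual learning: without it, training at each new center is free to overwrite weights important to previously visited centers.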