Paper Title

Appearance Learning for Image-based Motion Estimation in Tomography

Paper Authors

Preuhs, Alexander, Manhart, Michael, Roser, Philipp, Hoppe, Elisabeth, Huang, Yixing, Psychogios, Marios, Kowarschik, Markus, Maier, Andreas

Paper Abstract

In tomographic imaging, anatomical structures are reconstructed by applying a pseudo-inverse forward model to the acquired signals. Geometric information within this process usually depends only on the system setting, i.e., the scanner position or readout direction. Patient motion therefore corrupts the geometric alignment in the reconstruction process, resulting in motion artifacts. We propose an appearance learning approach that recognizes the structures of rigid motion independently of the scanned object. To this end, we train a siamese triplet network in a multi-task learning approach to predict, from the reconstructed volume, the reprojection error (RPE) for the complete acquisition as well as an approximate distribution of the RPE over the single views. The RPE measures the motion-induced geometric deviations independently of the object, based on virtual marker positions which are available during training. We train our network using 27 patients with a 21-4-2 split for training, validation, and testing. On average, we achieve a residual mean RPE of 0.013 mm with an inter-patient standard deviation of 0.022 mm. This is twice as accurate as previously published results. In a motion estimation benchmark, the proposed approach achieves superior results compared with two state-of-the-art measures in nine out of twelve experiments. The clinical applicability of the proposed method is demonstrated on a motion-affected clinical dataset.
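As a rough illustration of the RPE concept described in the abstract, the sketch below computes a per-view reprojection error from virtual marker positions and a pair of projection matrices (ideal vs. motion-corrupted geometry). The function name, the NumPy-based formulation, and the mean aggregation over markers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def reprojection_error(markers_3d, P_ideal, P_motion):
    """Sketch of a per-view reprojection error (RPE).

    markers_3d : (N, 3) virtual marker positions in world coordinates
    P_ideal    : (3, 4) projection matrix of the ideal (motion-free) geometry
    P_motion   : (3, 4) projection matrix of the motion-corrupted geometry
    """
    # Homogeneous marker coordinates, shape (N, 4)
    X = np.hstack([markers_3d, np.ones((len(markers_3d), 1))])

    def project(P):
        x = X @ P.T                   # (N, 3) homogeneous detector coordinates
        return x[:, :2] / x[:, 2:3]   # perspective division -> (N, 2) pixel positions

    # Mean Euclidean distance between ideal and motion-corrupted projections
    return np.linalg.norm(project(P_ideal) - project(P_motion), axis=1).mean()
```

In this reading, averaging the per-view values over all views of an acquisition would give a single RPE for the scan, while the per-view values form the RPE distribution that the network is trained to approximate.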
