Paper Title

A Human-Centered Machine-Learning Approach for Muscle-Tendon Junction Tracking in Ultrasound Images

Authors

Leitner, Christoph, Jarolim, Robert, Englmair, Bernhard, Kruse, Annika, Hernandez, Karen Andrea Lara, Konrad, Andreas, Su, Eric, Schröttner, Jörg, Kelly, Luke A., Lichtwark, Glen A., Tilp, Markus, Baumgartner, Christian

Abstract


Biomechanical and clinical gait research observes muscles and tendons in the limbs to study their function and behaviour. To that end, the movements of distinct anatomical landmarks, such as muscle-tendon junctions, are frequently measured. We propose a reliable and time-efficient machine-learning approach to track these junctions in ultrasound videos and to support clinical biomechanists in gait analysis. To facilitate this process, we introduce a deep-learning-based method. We gathered an extensive dataset covering 3 functional movements and 2 muscles, collected from 123 healthy and 38 impaired subjects with 3 different ultrasound systems, providing a total of 66864 annotated ultrasound images for network training. Furthermore, we used data collected across independent laboratories and curated by researchers with varying levels of experience. To evaluate our method, we selected a diverse test set that was independently verified by four specialists. We show that our model achieves performance scores similar to those of the four human specialists in identifying the muscle-tendon junction position. Our method provides time-efficient tracking of muscle-tendon junctions, with prediction times of up to 0.078 seconds per frame (approx. 100 times faster than manual labeling). All our code, trained models, and the test set are publicly available, and our model is provided as a free-to-use online service at https://deepmtj.org/.
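The workflow the abstract describes reduces to applying a trained per-frame predictor over every frame of an ultrasound video to obtain a junction trajectory. The following is a minimal sketch of that loop; the real trained network is available at https://deepmtj.org/, so `predict_mtj` below is a hypothetical stand-in (it simply returns the brightest pixel) used only to make the per-frame tracking structure runnable.

```python
import numpy as np


def predict_mtj(frame: np.ndarray) -> tuple[float, float]:
    """Hypothetical stand-in for the trained network's per-frame prediction.

    The actual model maps a grayscale ultrasound frame to the (x, y)
    pixel position of the muscle-tendon junction; here we return the
    brightest pixel so the sketch executes without the real weights.
    """
    y, x = np.unravel_index(np.argmax(frame), frame.shape)
    return float(x), float(y)


def track_video(frames: list[np.ndarray]) -> list[tuple[float, float]]:
    """Run the per-frame predictor over all frames of an ultrasound video."""
    return [predict_mtj(f) for f in frames]


# Toy "video": 5 frames with a bright spot drifting 3 px right per frame,
# standing in for a moving muscle-tendon junction.
frames = []
for t in range(5):
    f = np.zeros((64, 64))
    f[32, 10 + 3 * t] = 1.0  # simulated junction location
    frames.append(f)

trajectory = track_video(frames)
print(trajectory)
```

The per-frame formulation is what makes the reported timing meaningful: at up to 0.078 s per frame, the tracker processes a video frame-by-frame roughly 100 times faster than manual annotation.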
