Depth to Anatomy: Learning Internal Organ Locations from Surface Depth Images
1️⃣ One-Sentence Summary
This paper proposes a deep-learning-based method that predicts the 3D locations and shapes of multiple internal organs directly from a single 2D depth image of the body surface, which could enable automated and more accurate patient positioning for medical scans.
Automated patient positioning plays an important role in optimizing scanning procedures and improving patient throughput. Leveraging depth information captured by RGB-D cameras presents a promising approach for estimating internal organ positions, thereby enabling more accurate and efficient positioning. In this work, we propose a learning-based framework that directly predicts the 3D locations and shapes of multiple internal organs from single 2D depth images of the body surface. Utilizing a large-scale dataset of full-body MRI scans, we synthesize depth images paired with corresponding anatomical segmentations to train a unified convolutional neural network architecture. Our method accurately localizes a diverse set of anatomical structures, including bones and soft tissues, without requiring explicit surface reconstruction. Experimental results demonstrate the potential of integrating depth sensors into radiology workflows to streamline scanning procedures and enhance patient experience through automated patient positioning.
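To make the "single 2D depth image in, per-organ 3D localization out" idea concrete, below is a minimal PyTorch sketch of such a mapping. This is only an illustration under stated assumptions: the abstract does not specify the authors' actual network architecture or output representation, so the choice of a small convolutional encoder and a per-organ 3D bounding-box regression head (`DepthToOrganBoxes`, `num_organs`) is hypothetical.

```python
# Illustrative sketch (not the authors' implementation): a small CNN that maps a
# single-channel 2D body-surface depth image to per-organ 3D bounding boxes.
# The paper's exact architecture and output format (e.g., organ shapes) are not
# given in this summary, so the box-regression head below is an assumption.
import torch
import torch.nn as nn


class DepthToOrganBoxes(nn.Module):
    def __init__(self, num_organs: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> one feature vector per image
        )
        # Regress a 3D box per organ: center (x, y, z) and size (dx, dy, dz).
        self.head = nn.Linear(128, num_organs * 6)
        self.num_organs = num_organs

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (B, 1, H, W) normalized depth image of the body surface
        feat = self.encoder(depth).flatten(1)                  # (B, 128)
        return self.head(feat).view(-1, self.num_organs, 6)    # (B, num_organs, 6)


if __name__ == "__main__":
    model = DepthToOrganBoxes(num_organs=10)
    dummy_depth = torch.rand(2, 1, 256, 256)  # stand-in for synthesized depth images
    boxes = model(dummy_depth)
    print(boxes.shape)  # torch.Size([2, 10, 6])
```

In the paper's setting, the training pairs for such a model would come from the synthesized depth images and the anatomical segmentations derived from the full-body MRI dataset; a richer output head (e.g., volumetric or mesh-based shape prediction) would be needed to capture organ shapes rather than boxes alone.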
Source: arXiv:2601.18260