A Laparoscopic Surgery Support Method Using Augmented Reality to Complement Spatial Awareness
Project/Area Number | 23K16917
Research Category | Grant-in-Aid for Early-Career Scientists
Allocation Type | Multi-year Fund
Review Section | Basic Section 61020: Human interface and interaction-related
Research Institution | University of Tsukuba
Principal Investigator | 謝 淳, University of Tsukuba, Center for Computational Sciences, Assistant Professor (00913287)
Project Period (FY) | 2023-04-01 – 2025-03-31
Project Status | Granted (Fiscal Year 2023)
Budget Amount | ¥4,680,000 (Direct Cost: ¥3,600,000, Indirect Cost: ¥1,080,000)
Fiscal Year 2024: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Fiscal Year 2023: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Keywords | Surgical operation support / Deep learning / Medical image processing / Augmented reality / Laparoscope / Surgery support / 3D sensing
Outline of Research at the Start
In laparoscopic surgery, the narrow field of view of the camera makes spatial awareness around the surgical field difficult. This research uses laparoscopic video and projection-based augmented reality to realize an observation method in which the abdominal surface appears transparent, so that the inside of the abdominal cavity can be viewed directly, aiming to improve the efficiency of laparoscopic surgery. The research mainly addresses the following three tasks: (1) construction of a spatial augmented reality (SAR) surgical environment that can complement the surgeon's spatial awareness; (2) generation of intra-abdominal 3D data from laparoscopic images; and (3) performance evaluation in real clinical settings. In addition to solving the problems of conventional presentation methods, this research can serve as a foundational technology for developing and evaluating laparoscopic surgery support technologies and applications.
Outline of Annual Research Achievements
1. While surgical videos are crucial for training young surgeons, traditional video capturing methods often suffer from significant occlusions caused by the movements of surgeons' heads and hands, so the surgical field, which is essential for transmitting procedural skills, is often invisible in the videos. To solve this problem, we proposed a multi-view capturing system formed by a ring-shaped camera array: occlusions can be avoided by shifting the viewpoint between cameras. We also introduced bullet-time video technology to realize smooth and intuitive camera switching, as sketched below. Clinical experiments in the operating room were conducted to verify the effectiveness of the capturing method against the occlusion problem.
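To illustrate the bullet-time switching idea, the following is a minimal sketch (our simplified reconstruction, not the implementation used in the clinical experiments): adjacent ring cameras are aligned to a common plane over the surgical field and crossfaded as the virtual viewpoint sweeps the ring. The homographies, frame lists, and function names are assumptions for the example.

```python
import cv2
import numpy as np

def bullet_time_frame(frames, homographies, t, size):
    """Render a virtual view at continuous ring position t in [0, n).

    frames       -- n synchronized camera images (H x W x 3, uint8)
    homographies -- n 3x3 arrays aligning each camera to a common plane
    t            -- continuous viewpoint position along the ring
    size         -- (width, height) of the output view
    """
    n = len(frames)
    i = int(np.floor(t)) % n        # camera "behind" the virtual viewpoint
    j = (i + 1) % n                 # next camera on the ring
    a = float(t - np.floor(t))      # blend weight toward camera j
    warp_i = cv2.warpPerspective(frames[i], homographies[i], size)
    warp_j = cv2.warpPerspective(frames[j], homographies[j], size)
    # Crossfade the two plane-aligned views so the switch looks continuous.
    return cv2.addWeighted(warp_i, 1.0 - a, warp_j, a, 0.0)
```

Sweeping t over time moves the viewpoint smoothly around the ring, which is the effect bullet-time systems exploit.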
2. Intraoperative fluoroscopy is a frequently used modality in minimally invasive orthopedic surgeries. Aligning the intraoperatively acquired X-ray image with a preoperative 3D model from a computed tomography (CT) scan reduces the mental burden that overlapping anatomical structures impose on surgeons. We proposed a fully automatic registration method that is robust to extreme viewpoints and does not require manual annotation of landmark points during training. It is based on a fully convolutional neural network (CNN) that regresses the scene coordinates for a given X-ray image, where a scene coordinate is defined as the intersection of the ray back-projected from a pixel with the 3D model.
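The scene-coordinate idea can be made concrete with a short sketch. This is a minimal stand-in, not the published network: a small fully convolutional model regresses a 3-channel scene-coordinate map from an X-ray image, supervised with a masked L1 loss against the ray/model intersections. All layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class SceneCoordNet(nn.Module):
    """Toy fully convolutional regressor: X-ray -> per-pixel 3D coordinates."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 3, 1),  # (x, y, z) scene coordinate per pixel
        )

    def forward(self, xray):       # xray: (B, 1, H, W)
        return self.net(xray)      # coords: (B, 3, H/4, W/4)

def scene_coord_loss(pred, target, mask):
    """Masked L1 loss; mask marks pixels whose rays actually hit the model."""
    return (mask * (pred - target).abs()).sum() / mask.sum().clamp(min=1)
```

At inference, the predicted per-pixel 2D-3D correspondences can be passed to a robust pose solver such as cv2.solvePnPRansac to recover the viewpoint.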
Current Status of Research Progress
2: Research has progressed on the whole more than it was originally planned.
Reason
We worked on two surgical support systems in the last year and published our work at international conferences. Although they are not directly related to laparoscopy, both are intended to improve surgeons' spatial awareness, and the knowledge behind them can be transferred to laparoscopic surgery in the future.
Strategy for Future Research Activity
Radiography, a prevalent technique for visualizing internal human anatomy, employs high-energy X-rays that penetrate the body; the residual radiation energy is captured on a flat-panel detector. Since different organs absorb X-rays differently, the measured energy can be translated into a two-dimensional image, known as a radiograph or X-ray image, which reveals the body's internal configuration and offers crucial diagnostic data.

X-ray imaging is both rapid and cost-effective; however, the high-energy radiation may have detrimental health implications, so in procedures like chest radiography typically only a single frontal view is obtained per session. Although physicians can intuitively interpret the spatial arrangement of organs on a 2D radiograph in a three-dimensional context, this intuition is inherently subjective and varies in precision. An X-ray view synthesis algorithm could therefore provide additional insight into a patient's internal anatomy. It could also facilitate other applications, including sparse-view CT reconstruction from X-ray images, bridging the gap between CT and X-ray imaging and enhancing the utility of radiography.

Diffusion models have shown extremely high performance in image generation tasks, and it has been reported that, with proper fine-tuning, a pre-trained model can generate realistic medical X-ray images from text prompts. We therefore consider that, with a carefully designed conditioning approach, it is possible to generate novel-view X-ray images from a source image and a target viewpoint.
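To make the planned conditioning concrete, below is a minimal sketch under our own assumptions (a toy denoiser, not a validated design): the noise-prediction network receives the noisy target view concatenated with the source X-ray, plus an embedding of the target viewpoint angle, and is trained with a standard DDPM objective. All module and parameter names are hypothetical, and the timestep embedding is omitted for brevity.

```python
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    """Toy conditional denoiser for novel-view X-ray synthesis."""
    def __init__(self, ch=64):
        super().__init__()
        self.view_embed = nn.Linear(2, ch)         # (sin, cos) of target angle
        self.inp = nn.Conv2d(2, ch, 3, padding=1)  # noisy target + source view
        self.mid = nn.Conv2d(ch, ch, 3, padding=1)
        self.out = nn.Conv2d(ch, 1, 3, padding=1)  # predicted noise
        self.act = nn.SiLU()

    def forward(self, noisy_target, source, angle):
        x = torch.cat([noisy_target, source], dim=1)      # condition by concat
        h = self.act(self.inp(x))
        h = h + self.view_embed(angle)[:, :, None, None]  # viewpoint condition
        h = self.act(self.mid(h))
        return self.out(h)

def training_step(model, target, source, angle, T=1000):
    """One DDPM noise-prediction step with a linear beta schedule."""
    betas = torch.linspace(1e-4, 0.02, T, device=target.device)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, T, (target.shape[0],), device=target.device)
    a = alphas_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(target)
    noisy = a.sqrt() * target + (1 - a).sqrt() * noise    # forward diffusion
    return nn.functional.mse_loss(model(noisy, source, angle), noise)
```

Concatenating the source view gives the denoiser dense pixel-level evidence, while the viewpoint embedding tells it which view to synthesize; stronger conditioning schemes (e.g., cross-attention) are a possible refinement.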
Report (1 result)
Research Products (3 results)