2020 Fiscal Year Research-status Report
2D-3D Reconstruction for internal organs using Deep Learning Techniques
Project/Area Number | 20K20167 |
Research Institution | Osaka University |
Principal Investigator | 武 淑瓊, Osaka University, The Institute of Scientific and Industrial Research, Assistant Professor (30775763) |
Project Period (FY) | 2020-04-01 – 2023-03-31 |
Keywords | 3D organ reconstruction / deep learning / 2D-3D reconstruction |
Outline of Annual Research Achievements |
The research plan for FY2020 was to collect real data and to augment the data required for the machine learning methods. This year, we collected CT data from 130 patients of Kyoto University Hospital. For each CT volume, we generated DRR (Digitally Reconstructed Radiography) images every 0.5 degrees (720 views). These augmented DRR images were used as simulated X-ray images in our experiments. We then achieved reconstruction from sparse-view CT (60 views, every 6 degrees) to dense-view CT (720 views, every 0.5 degrees) using a U-Net architecture (a minimal sketch follows this section). Finally, we surveyed the most recent machine learning approaches, such as 3D U-Net and CycleGAN (Generative Adversarial Network), which could contribute to our 2D-3D reconstruction, and we implemented programs for both models. The work done this year is important because it provided the data for machine learning and realized less-to-more-view reconstruction. This is a basic achievement, and we can approach our goal by reducing the 60 input views to a single view. The survey of newly published machine learning methods is also useful because it provides solutions for improving the performance of 2D-3D reconstruction. Although our proposal planned to use a CNN for training and an SDM (statistical deformation model) for data augmentation, other deep learning methods such as GANs and U-Net have proved more efficient at solving the same issues. Therefore, we used the U-Net in our current research and plan to use GANs in our future research.
|
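As a concrete illustration of the sparse-to-dense step described above, the following Python sketch shows one plausible U-Net formulation in PyTorch: the 60 sparse views are stacked along the channel dimension and mapped to 720 output channels. The network depth, channel counts, and the name SparseToDenseUNet are illustrative assumptions, not the exact model used in this project.

# Minimal 2D U-Net sketch for sparse-to-dense view completion.
# Assumption (not stated in the report): the 60 sparse views and the 720 dense
# views are stacked along the channel dimension, so the network learns a
# 60-channel -> 720-channel mapping. The real architecture and loss may differ.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class SparseToDenseUNet(nn.Module):
    def __init__(self, in_views: int = 60, out_views: int = 720):
        super().__init__()
        self.enc1 = conv_block(in_views, 64)
        self.enc2 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.dec2 = conv_block(256, 128)           # 128 (skip) + 128 (upsampled)
        self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec1 = conv_block(128, 64)            # 64 (skip) + 64 (upsampled)
        self.head = nn.Conv2d(64, out_views, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                          # (B, 64, H, W)
        e2 = self.enc2(self.pool(e1))              # (B, 128, H/2, W/2)
        b = self.bottleneck(self.pool(e2))         # (B, 256, H/4, W/4)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                       # (B, 720, H, W)

For example, a (1, 60, 256, 256) sparse stack maps to a (1, 720, 256, 256) dense stack; the skip connections let the decoder reuse the fine detail of the measured views.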
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
The real data were collected as planned, and the 2D-3D pair data for machine learning were augmented using DRR (Digitally Reconstructed Radiography) technology (a simplified projection sketch follows this section). We had also planned to use a statistical deformation model (SDM) to augment the 3D CT models and then use the augmented 3D models to create multi-view DRRs (simulated 2D X-ray images). However, because deep learning methods have proved much more efficient than traditional models such as the SDM, we changed the data augmentation method from the traditional technique to deep learning approaches.
|
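For reference, the sketch below illustrates the DRR-style view augmentation mentioned above under a simplified parallel-beam assumption: the CT volume is rotated in the axial plane and the attenuation is summed along one axis to obtain one projection every 0.5 degrees. The DRR technology actually used in the project likely models cone-beam geometry; the function name make_drr_views is illustrative.

# Minimal DRR-style view augmentation sketch (parallel-beam approximation).
# Assumptions: the CT volume is a NumPy array with axes (z, y, x); real DRR
# generation usually models cone-beam geometry and attenuation calibration,
# so this only illustrates the "one view every 0.5 degrees" augmentation idea.
import numpy as np
from scipy.ndimage import rotate

def make_drr_views(ct_volume: np.ndarray, step_deg: float = 0.5) -> np.ndarray:
    """Project the CT volume at every `step_deg` degrees around the z-axis."""
    angles = np.arange(0.0, 360.0, step_deg)          # 720 angles for 0.5 deg
    views = []
    for angle in angles:
        # Rotate in the axial (y, x) plane, then integrate along x to mimic
        # the line integral of attenuation that forms a radiograph.
        rotated = rotate(ct_volume, angle, axes=(1, 2), reshape=False, order=1)
        views.append(rotated.sum(axis=2))
    return np.stack(views)                            # shape: (n_views, z, y)

Calling make_drr_views(ct, step_deg=0.5) on one CT volume yields 720 simulated projections, matching the augmentation density described above.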
Strategy for Future Research Activity |
We used DRRs as simulated X-ray images in our experiments. However, a DRR differs from a real X-ray image. To put our research into practice, that is, to reconstruct 3D CT data from 2D X-ray images during surgery, we plan to transform the augmented DRR images into X-ray-like images. Surveying the most recent publications, we found that CycleGAN (a cycle-consistent Generative Adversarial Network) is the most promising approach to this kind of unpaired image-to-image translation problem (a sketch of its objective follows this section). In addition, although our research proposal planned to use a statistical deformation model to augment the 3D models, deep learning techniques have developed rapidly in recent years, so we plan to replace the traditional method with deep learning methods such as GANs or U-Net for data augmentation, because they are more efficient.
|
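To make the planned DRR-to-X-ray translation concrete, the sketch below writes out the standard CycleGAN generator objective (LSGAN adversarial terms plus an L1 cycle-consistency term) for unpaired DRR and X-ray images. The module names, loss weights, and formulation are assumptions based on the original CycleGAN method, not this project's implementation.

# Sketch of the CycleGAN generator objective for unpaired DRR -> X-ray translation.
# Assumptions: g_drr2xray and g_xray2drr are the two generators, d_xray and
# d_drr are patch discriminators, with an LSGAN (MSE) adversarial loss and an
# L1 cycle-consistency loss weighted by lambda_cyc.
import torch
import torch.nn as nn
import torch.nn.functional as F

def cyclegan_generator_loss(g_drr2xray: nn.Module, g_xray2drr: nn.Module,
                            d_xray: nn.Module, d_drr: nn.Module,
                            drr: torch.Tensor, xray: torch.Tensor,
                            lambda_cyc: float = 10.0) -> torch.Tensor:
    fake_xray = g_drr2xray(drr)      # DRR rendered in real X-ray appearance
    fake_drr = g_xray2drr(xray)      # real X-ray rendered in DRR appearance
    # LSGAN adversarial terms: the generators try to make the discriminators
    # score the translated images as real (target = 1).
    adv = (F.mse_loss(d_xray(fake_xray), torch.ones_like(d_xray(fake_xray)))
           + F.mse_loss(d_drr(fake_drr), torch.ones_like(d_drr(fake_drr))))
    # Cycle consistency: DRR -> X-ray -> DRR (and the reverse) should recover
    # the original image, enforced with an L1 penalty.
    cyc = (F.l1_loss(g_xray2drr(fake_xray), drr)
           + F.l1_loss(g_drr2xray(fake_drr), xray))
    return adv + lambda_cyc * cyc

The cycle-consistency term is what allows training without paired DRR/X-ray examples, which matches the situation described above where real intra-operative X-ray counterparts of the DRRs are not available.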
Causes of Carryover |
Personnel costs and honoraria: 1,000,000; purchase of a GPU for machine learning: 1,000,000; other consumables: 200,000; paper publication fees: 200,000. We would like to ask RA students to help organize the programs and data, and English proofreading will also be needed when publishing papers; the estimated total for personnel costs is therefore 1,000,000. A GPU is required to carry out deep learning, and publication fees and other consumables (mouse, keyboard, hard disk, etc.) are also needed. Because of COVID-19, we expect all conference presentations this year to be held online, so travel expenses were set to 0. If the pandemic ends this year, business trips may become necessary, in which case we will adjust the travel budget.
|