2022 Fiscal Year Final Research Report
Explainable Artificial Intelligence for Medical Applications
Project/Area Number | 21K17764 |
Research Category | Grant-in-Aid for Early-Career Scientists |
Allocation Type | Multi-year Fund |
Review Section | Basic Section 61010: Perceptual information processing-related |
Research Institution | Osaka University |
Principal Investigator | LI Liangzhi, Osaka University, Institute for Datability Science, Specially Appointed Assistant Professor (full-time) (10875545) |
Project Period (FY) | 2021-04-01 – 2023-03-31 |
Keywords | Explainable AI / Computer Vision / Medical Images / Deep Learning / Image Classification / Visual Explanation / Computer-aided Diagnosis / Trustable AI |
Outline of Final Research Achievements | To fully enable trustworthy AI for medicine and healthcare, this project aimed to design an explainable AI model that gives diagnosis results together with precise, bifunctional visual explanations supporting its decisions. Toward this goal, I mainly studied three sub-topics: 1. a self-attention-based classifier capable of intrinsically explainable inference; 2. a dedicated loss function, named the explanation loss, that controls properties of the visual explanations such as their overall size and number of regions; 3. collaborating sub-networks that output positive and negative explanations simultaneously (minimal code sketches of the three sub-topics follow at the end of this record). The results were mainly presented at top conferences such as IEEE ICCV and IEEE CVPR. |
Free Research Field | Explainable AI |
Academic Significance and Societal Importance of the Research Achievements | Establishing techniques that can give precise reasons for a machine's decisions will improve a range of medical AI applications: explainable computer-aided diagnosis (CAD); machine-teaching systems that teach students skills such as how to recognize disease symptoms or how to perform surgery like an expert; and medical visual question answering (VQA) systems that answer patients' questions about the morphology and physiological state of a disease. Beyond medicine, this research also has the potential to improve general AI research. |
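Sub-topic 1 can be illustrated with a small PyTorch sketch: a classifier whose class-query attention weights double as the visual explanation, making the inference intrinsically explainable. The class name, layer sizes, and the backbone-feature interface below are illustrative assumptions, not the model developed in the project.

```python
# A minimal sketch of a self-attention-based, intrinsically explainable
# classifier (illustrative assumption, not the project's actual model).
import torch
import torch.nn as nn

class AttentionExplainClassifier(nn.Module):
    def __init__(self, in_dim: int = 512, num_classes: int = 2):
        super().__init__()
        # Learnable class query that attends over the spatial tokens.
        self.query = nn.Parameter(torch.randn(1, 1, in_dim))
        self.attn = nn.MultiheadAttention(in_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(in_dim, num_classes)

    def forward(self, feats: torch.Tensor):
        """feats: (B, C, H, W) feature map from any backbone."""
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)       # (B, H*W, C)
        q = self.query.expand(b, -1, -1)                # (B, 1, C)
        pooled, weights = self.attn(q, tokens, tokens)  # weights: (B, 1, H*W)
        logits = self.head(pooled.squeeze(1))
        explanation = weights.reshape(b, h, w)          # attention map = explanation
        return logits, explanation
```

Because the same attention weights that pool the features also form the saliency map, the explanation is produced by the inference itself rather than by a post-hoc method; at test time the map can simply be upsampled to the input resolution.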
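Sub-topic 2, the explanation loss, can be sketched as a regularizer on a soft explanation mask. The decomposition into an area term and a total-variation term, and the hyper-parameters target_area and tv_weight, are assumptions made for illustration; the report only states that the loss controls the overall explanation size, region number, etc.

```python
# A minimal sketch of a size-controlling explanation loss, assuming the
# model emits a soft explanation mask in [0, 1] per image.
import torch

def explanation_loss(mask: torch.Tensor,
                     target_area: float = 0.15,
                     tv_weight: float = 1.0) -> torch.Tensor:
    """mask: (B, 1, H, W) soft explanation maps in [0, 1]."""
    # Area term: keep the mean activated area near a target fraction,
    # which bounds the overall size of the explanation.
    area_term = (mask.mean(dim=(1, 2, 3)) - target_area).abs().mean()
    # Total-variation term: penalize scattered activations so the
    # explanation concentrates into a few compact regions.
    tv = (mask[..., 1:, :] - mask[..., :-1, :]).abs().mean() \
       + (mask[..., :, 1:] - mask[..., :, :-1]).abs().mean()
    return area_term + tv_weight * tv
```

In training, such a term would be added to the ordinary classification objective, e.g. `loss = ce + lam * explanation_loss(mask)`, so the classifier stays accurate while its explanations stay small and contiguous.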
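Sub-topic 3 pairs two explanation heads over the same features, one for supporting (positive) evidence and one for opposing (negative) evidence. The two-head layout and the disjointness objective below are illustrative assumptions about how such collaborating sub-networks could be wired.

```python
# A minimal sketch of collaborating sub-networks that output positive and
# negative explanations simultaneously (illustrative assumption).
import torch
import torch.nn as nn

class DualExplainer(nn.Module):
    """Two 1x1-conv heads over a shared feature map: one marks regions
    that support the diagnosis, the other marks regions that oppose it."""
    def __init__(self, in_dim: int = 512):
        super().__init__()
        self.pos_head = nn.Conv2d(in_dim, 1, kernel_size=1)
        self.neg_head = nn.Conv2d(in_dim, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        pos = torch.sigmoid(self.pos_head(feats))  # positive explanation
        neg = torch.sigmoid(self.neg_head(feats))  # negative explanation
        return pos, neg

def disjointness_loss(pos: torch.Tensor, neg: torch.Tensor) -> torch.Tensor:
    # The two explanations should not claim the same pixels.
    return (pos * neg).mean()
```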