2022 Fiscal Year Annual Research Report
Explainable Artificial Intelligence for Medical Applications
Project/Area Number | 21K17764 |
Research Institution | Osaka University |
Principal Investigator | 李 良知, Osaka University, Institute for Datability Science, Specially Appointed Assistant Professor (full-time) (10875545) |
Project Period (FY) | 2021-04-01 – 2023-03-31 |
Keywords | Explainable AI / Computer Vision / Medical Images / Deep Learning / Image Classification / Visual Explanation / Computer-aided Diagnosis / Trustable AI |
Outline of Annual Research Achievements |
To fully enable trustworthy AI for medicine and healthcare, this project aims to design an explainable AI model that gives diagnosis results together with precise, bifunctional visual explanations supporting its decisions. In FY 2022, I continued to study the following sub-topics toward this goal. 1. A self-attention-based classifier capable of intrinsically explainable inference. 2. A loss function for controlling the size of explanations: I designed a dedicated loss, named the explanation loss, which controls the overall size, number of regions, etc., of the visual explanations. 3. Collaborating sub-networks that output positive and negative explanations simultaneously. The results were mainly presented at IEEE CVPR 2023.
|
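The report names an "explanation loss" that controls the overall size of the visual explanations but gives no formula. The following is a minimal sketch of one way such a size-control term could look, assuming explanation heatmaps normalized to [0, 1]; the function name and the `target_area` and `weight` parameters are hypothetical, not taken from the published method:

```python
import numpy as np

def explanation_loss(attn_maps, target_area=0.1, weight=1.0):
    """Penalize deviation of the explanation's covered area from a target ratio.

    attn_maps: array of shape (B, H, W), explanation heatmaps in [0, 1].
    target_area: desired fraction of the image the explanation should cover.
    """
    # Mean activation per map approximates the fraction of the image covered.
    areas = attn_maps.reshape(attn_maps.shape[0], -1).mean(axis=1)
    # Absolute deviation from the target coverage, averaged over the batch.
    return weight * np.abs(areas - target_area).mean()
```

In practice a term like this would be added to the classification loss so that gradients shrink (or grow) the explanation region toward the target size; a separate term would be needed to control the number of connected regions.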
Research Products (3 results)