Summary of Research Achievements
To fully enable trustworthy AI for medicine and healthcare, this project aims to design an explainable AI model that outputs diagnosis results together with precise, bifunctional visual explanations supporting its decisions. In FY 2022, I continued to study the following sub-topics toward this goal.
1. A self-attention-based classifier capable of intrinsically explainable inference.
2. A loss function for controlling the size of explanations. I designed a dedicated loss, named the explanation loss, to control the overall size, number of regions, etc., of the visual explanations.
3. Collaborating sub-networks that output positive and negative explanations simultaneously.
The results were mainly presented at IEEE CVPR 2023.
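As a rough illustration of how a loss controlling explanation size and region count might be formulated, the sketch below combines an L1 sparsity term (penalizing total highlighted area) with a total-variation term (a crude proxy discouraging many scattered regions). This is a minimal assumption-based sketch, not the project's actual explanation loss; the function name and weights are hypothetical.

```python
import numpy as np

def explanation_loss(attn, lambda_size=1.0, lambda_tv=0.1):
    """Hypothetical explanation-size loss (illustrative only).

    attn: 2-D explanation/attention map with values in [0, 1].
    """
    # Size term: L1 norm of the map encourages a small highlighted area.
    size_term = np.abs(attn).mean()
    # Total-variation term: penalizes boundaries between highlighted and
    # non-highlighted pixels, a rough proxy for keeping region count low.
    tv_term = (np.abs(np.diff(attn, axis=0)).mean()
               + np.abs(np.diff(attn, axis=1)).mean())
    return lambda_size * size_term + lambda_tv * tv_term

# Example: a single compact 2x2 highlighted region on an 8x8 map.
attn = np.zeros((8, 8))
attn[2:4, 2:4] = 1.0
loss = explanation_loss(attn)
```

In training, such a term would be added to the classification loss with a weighting coefficient, so the model trades off explanation compactness against predictive accuracy.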