
2021 Fiscal Year Research-status Report

Explainable Artificial Intelligence for Medical Applications

Research Project

Project/Area Number 21K17764
Research Institution Osaka University

Principal Investigator

Liangzhi Li, Osaka University, Institute for Datability Science, Specially Appointed Assistant Professor (full-time) (10875545)

Project Period (FY) 2021-04-01 – 2023-03-31
Keywords Explainable AI / Computer Vision / Medical Images / Deep Learning / Image Classification / Visual Explanation / Computer-aided Diagnosis / Trustable AI
Outline of Annual Research Achievements

To enable trustworthy AI for medicine and healthcare, this project aims to design an explainable AI model that gives diagnosis results along with precise, bifunctional visual explanations to support its decisions. In FY 2021, I mainly studied the following sub-topics toward this goal.
1. A self-attention-based classifier capable of intrinsically explainable inference.
2. A loss function for controlling the size of explanations. I designed a dedicated loss, named the explanation loss, which controls the overall size, number of regions, etc., of the visual explanations.
3. Collaborating sub-networks that output positive and negative explanations simultaneously.
The results were presented at the IEEE/CVF ICCV 2021 conference.
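The size-controlling explanation loss described above can be illustrated with a minimal sketch. The function below is an assumption of how such a regularizer might look, not the exact formulation used in the project: the name `explanation_size_loss`, the map shape, and the weight `lam` are all hypothetical. The idea is that penalizing the mean activation of the per-class explanation maps, alongside the classification loss, discourages large highlighted regions.

```python
import numpy as np

def explanation_size_loss(attn_maps: np.ndarray, lam: float = 1.0) -> float:
    """Hypothetical area regularizer for visual explanations.

    attn_maps: array of shape (batch, num_classes, H, W) with values
    in [0, 1]; the mean activation is a proxy for explanation size,
    so minimizing this term shrinks the highlighted regions.
    `lam` trades off classification accuracy against explanation size.
    """
    return lam * float(attn_maps.mean())

# Smaller, sparser explanation maps incur a lower penalty.
compact = np.full((2, 3, 4, 4), 0.1)   # small highlighted area
diffuse = np.full((2, 3, 4, 4), 0.9)   # large highlighted area
assert explanation_size_loss(compact) < explanation_size_loss(diffuse)
```

In training, this term would be added to the usual cross-entropy loss, with `lam` tuned to control how compact the resulting explanations are.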

Current Status of Research Progress

1: Research has progressed more than it was originally planned.

Reason

I planned to study the following sub-topics in FY 2021 and finished them as scheduled.
1. A self-attention-based classifier.
2. A loss function for controlling the size of explanations.
3. Collaborating sub-networks.
In addition, I also obtained some results on two other sub-topics.
1. Tests on medical diagnosis tasks, including glaucoma, arteriosclerosis, etc.

Strategy for Future Research Activity

1. I plan to extend the work to an explainable regressor, which can handle regression tasks such as disease severity estimation.

2. This project will also involve an evaluation of the explanation quality. The evaluation will be performed by medical professionals and will serve as feedback to improve the design of explanation loss.

3. I plan to include an evaluation of machine teaching, in which the proposed method will be used to teach humans with the generated visual explanations. Machine teaching is a commonly used evaluation in the XAI field, and the teaching effect will be measured by the participants' test scores.

Causes of Carryover

Reasons for incurring an amount to be used next fiscal year: many international conferences were cancelled due to the pandemic, so the budgeted travel and registration fees were not used.
Usage plan: next year, the funding will be used to purchase a workstation and other hardware to build the prototype system, and to pay the registration fees of conferences that will be held online.

  • Research Products

    (1 results)

All 2021

All Presentation (1 results) (of which Int'l Joint Research: 1 results)

  • [Presentation] SCOUTER: Slot Attention-based Classifier for Explainable Image Recognition (2021)

    • Author(s)
      Liangzhi Li, Bowen Wang, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara
    • Organizer
      IEEE/CVF International Conference on Computer Vision (ICCV)
    • Int'l Joint Research


Published: 2022-12-28  

