
FY2021 Research Progress Report

Explainable Artificial Intelligence for Medical Applications

Research Project

Project/Area Number: 21K17764
Research Institution: Osaka University

Principal Investigator

Liangzhi Li (李 良知), Osaka University, Institute for Datability Science, Specially Appointed Assistant Professor (full-time) (10875545)

Project Period (FY): 2021-04-01 – 2023-03-31
Keywords: Explainable AI / Computer Vision / Medical Images / Deep Learning / Image Classification / Visual Explanation / Computer-aided Diagnosis / Trustable AI
Outline of Annual Research Achievements

To fully enable trustworthy AI for medicine and healthcare, this project aims to design an explainable AI model that can give diagnosis results along with precise and bifunctional visual explanations to support its decisions. In FY 2021, I mainly studied the following sub-topics toward this goal.
1. A self-attention-based classifier that can perform intrinsically explainable inference.
2. A loss function for controlling the size of explanations. I designed a dedicated loss, named the explanation loss, which is used to control the overall size, number of regions, etc., of the visual explanations.
3. Collaborating sub-networks that output positive and negative explanations simultaneously.
The results were presented at the IEEE/CVF ICCV 2021 conference.
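Below is a minimal sketch of the idea behind these sub-topics, assuming a PyTorch backbone: one learnable slot per class attends over the spatial feature map, the resulting attention maps serve directly as the visual explanations, and an area-style penalty stands in for the explanation loss. This is not the published SCOUTER implementation; the names (SlotExplanationHead, explanation_loss), the sigmoid/mean aggregation, and the penalty form are illustrative assumptions only.

```python
# Simplified sketch of a slot-attention-style explainable classifier head.
# NOT the exact SCOUTER implementation; names, aggregation choices, and the
# area-penalty form are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlotExplanationHead(nn.Module):
    """One learnable slot per class attends over spatial features;
    the attention map doubles as the visual explanation."""

    def __init__(self, feat_dim: int, num_classes: int, positive: bool = True):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.positive = positive  # positive slots support a class, negative slots refute it

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, H, W) backbone features
        B, C, H, W = feats.shape
        flat = feats.flatten(2).transpose(1, 2)                # (B, HW, C)
        attn = torch.einsum("kc,bnc->bkn", self.slots, flat)   # (B, K, HW)
        attn = torch.sigmoid(attn)                              # per-location support in [0, 1]
        logits = attn.mean(dim=-1)                               # aggregate per-class evidence
        if not self.positive:
            logits = -logits                                     # negative slots vote against classes
        maps = attn.view(B, -1, H, W)                            # (B, K, H, W) explanation maps
        return logits, maps


def explanation_loss(maps: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """Area penalty: discourages overly large explanation regions.
    (The project's explanation loss also controls region number, etc.;
    only the size term is sketched here.)"""
    return lam * maps.mean()


# Usage: combine with the usual classification loss.
feats = torch.randn(2, 256, 14, 14)          # fake backbone output
labels = torch.tensor([3, 7])
head = SlotExplanationHead(feat_dim=256, num_classes=10, positive=True)
logits, maps = head(feats)
loss = F.cross_entropy(logits, labels) + explanation_loss(maps, lam=0.1)
loss.backward()
```

Setting positive=False flips the sign of the aggregated evidence, a rough stand-in for the negative-explanation branch of sub-topic 3, in which collaborating sub-networks output positive and negative explanations simultaneously.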

Current Status of Research Progress (Category)

1: Research has progressed more than it was originally planned.

Reason

I planned to study the following sub-topics in FY 2021 and finished them as scheduled.
1. A self-attention-based classifier.
2. A loss function for controlling the size of explanations.
3. Collaborating sub-networks.
In addition, I also obtained some results on two other sub-topics.
1. Tests on medical diagnosis tasks, including glaucoma, arteriosclerosis, etc.

Strategy for Future Research Activity

1. I plan to extend the work to an explainable regressor that can handle regression tasks such as disease severity estimation (see the sketch after this list).

2. This project will also involve an evaluation of explanation quality. The evaluation will be performed by medical professionals and will serve as feedback to improve the design of the explanation loss.

3. I plan to include a machine-teaching evaluation, in which the proposed method will be used to teach humans with the generated visual explanations. Machine teaching is a commonly used evaluation in the XAI field, and the teaching effect will be measured by the test scores of the participants.
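As a hedged illustration of item 1, the sketch below shows one way the slot-attention head could be repurposed as an explainable regressor for severity estimation: the per-slot attention maps are kept as explanations, while their pooled evidence is mapped to a scalar score. The pooling, linear scaling, and MSE objective are assumptions for this sketch, not the project's final design.

```python
# Hedged sketch of the planned regression extension: reuse a slot-attention
# head but aggregate the explanation maps into a single scalar (e.g. disease
# severity). Pooling, scaling, and loss choices are assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlotExplanationRegressor(nn.Module):
    def __init__(self, feat_dim: int, num_slots: int = 1):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_slots, feat_dim))
        self.scale = nn.Linear(num_slots, 1)   # map slot evidence to a severity score

    def forward(self, feats: torch.Tensor):
        B, C, H, W = feats.shape
        flat = feats.flatten(2).transpose(1, 2)                       # (B, HW, C)
        attn = torch.sigmoid(torch.einsum("kc,bnc->bkn", self.slots, flat))
        maps = attn.view(B, -1, H, W)                                  # visual explanation per slot
        score = self.scale(attn.sum(dim=-1)).squeeze(-1)               # scalar prediction
        return score, maps


# Usage: regression loss plus the same area-style explanation penalty.
feats = torch.randn(4, 256, 14, 14)
target = torch.rand(4)                                                 # e.g. normalized severity grades
model = SlotExplanationRegressor(feat_dim=256)
pred, maps = model(feats)
loss = F.mse_loss(pred, target) + 0.1 * maps.mean()
loss.backward()
```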

Reason for Incurring the Amount to Be Used in the Next Fiscal Year

Reason: Many international conferences were cancelled due to the pandemic, so the budgeted travel and registration fees were not used.
Usage Plan: In the next fiscal year, the funding will be used to purchase a workstation and other hardware to build the prototype system, and to pay the registration fees of conferences that will be held online.

  • Research Products (1 result)

Presentation (1 result, including 1 at an international conference)

  • [Presentation] SCOUTER: Slot Attention-based Classifier for Explainable Image Recognition (2021)

    • Author(s)/Presenter(s)
      Liangzhi Li, Bowen Wang, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara
    • Conference
      IEEE/CVF International Conference on Computer Vision (ICCV)
    • International conference

Published: 2022-12-28
