
2020 Fiscal Year Final Research Report

A Unified Approach for Explaining Deep Neural Networks

Research Project

Project/Area Number 18K18106
Research Category

Grant-in-Aid for Early-Career Scientists

Allocation Type Multi-year Fund
Review Section Basic Section 61030: Intelligent informatics-related
Research Institution Osaka University

Principal Investigator

Hara Satoshi  Osaka University, The Institute of Scientific and Industrial Research, Associate Professor (40780721)

Project Period (FY) 2018-04-01 – 2021-03-31
Keywords Machine Learning / Deep Learning / Explainable AI
Outline of Final Research Achievements

Deep neural network models are inherently complex, which hinders us from inferring the underlying mechanisms or the evidence that the models rely on when making decisions. It is therefore essential to develop "explanation methods" that can reveal such mechanisms or evidence so that we can understand the models' decisions. In this research, we focused on unifying two popular explanation methods: explanation by important features and explanation by similar/relevant instances. Through the research, we deepened and improved the methodologies for each type of explanation individually, and then developed a unification framework that takes into account the advantages of both.
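
As an illustration only (a generic sketch, not the method developed in this project), the following minimal NumPy example contrasts the two explanation styles named above on a toy two-layer network: input-gradient saliency as an explanation by important features, and nearest training instances in the hidden representation as a rough stand-in for explanation by similar/relevant instances. The network, data, and the specific techniques are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy training data and a fixed two-layer network (tanh hidden layer).
# These weights and data are arbitrary; they only serve the illustration.
X_train = rng.normal(size=(100, 5))
W1 = rng.normal(size=(5, 8))
W2 = rng.normal(size=(8, 1))

def hidden(x):
    return np.tanh(x @ W1)

def score(x):
    return hidden(x) @ W2          # scalar score per example

x_test = rng.normal(size=(1, 5))
print("model score:", score(x_test).item())

# (1) Explanation by important features: input-gradient saliency.
# For this toy network, d score / d x_i = sum_j (1 - tanh^2) * W1[i, j] * W2[j].
h = hidden(x_test)                               # shape (1, 8)
saliency = ((1 - h ** 2) * W2.T) @ W1.T          # shape (1, 5): sensitivity per input feature
print("feature importances (|gradient|):", np.abs(saliency).round(3))

# (2) Explanation by similar/relevant instances: nearest training points
# in the hidden representation (a simple stand-in for influence/relevance methods).
H_train = hidden(X_train)                        # shape (100, 8)
h_test = h.ravel()
sims = H_train @ h_test
sims /= np.linalg.norm(H_train, axis=1) * np.linalg.norm(h_test) + 1e-12
top3 = np.argsort(-sims)[:3]
print("most relevant training instances:", top3, "cosine similarity:", sims[top3].round(3))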

Free Research Field

Machine Learning

Academic Significance and Societal Importance of the Research Achievements

Deep learning models achieve high prediction and recognition accuracy, but they generally have very complex structures, making it difficult for users to discern the basis of a model's decisions. For this reason, deep learning models are commonly regarded as "black boxes." Because of this "black box" nature, it is difficult to use deep learning models as-is to support important human decision-making (e.g., loan screening or medical diagnosis). The explanation methods developed in this research can mitigate this "black box" nature of deep learning models. As a result, users can employ highly accurate deep learning models to support decision-making while examining the basis of their decisions.

Published: 2022-01-27  
