A Unified Approach for Explaining Deep Neural Networks
Project/Area Number | 18K18106
Research Category | Grant-in-Aid for Early-Career Scientists
Allocation Type | Multi-year Fund
Review Section | Basic Section 61030: Intelligent informatics-related
Research Institution | Osaka University
Principal Investigator | Hara Satoshi, Osaka University, Institute of Scientific and Industrial Research, Associate Professor (40780721)
Project Period (FY) | 2018-04-01 – 2021-03-31
Project Status | Completed (Fiscal Year 2020)
Budget Amount | ¥4,160,000 (Direct Cost: ¥3,200,000, Indirect Cost: ¥960,000)
  Fiscal Year 2019: ¥1,820,000 (Direct Cost: ¥1,400,000, Indirect Cost: ¥420,000)
  Fiscal Year 2018: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Keywords | Machine Learning / Deep Learning / Explainable AI / Interpretability / Artificial Intelligence
Outline of Final Research Achievements

Deep neural network models are inherently complex, which hinders us from inferring the underlying mechanisms or the evidence that the models rely on when making decisions. It is therefore essential to develop "explanation methods" that can reveal such mechanisms or evidence so that we can understand the models' decisions. In this research, we focused on unifying two popular explanation methods: explanation by important features and explanation by similar/relevant instances. Through the research, we deepened and improved the methodology for each type of explanation individually, and we then developed a unification framework that takes into account the advantages of both types of explanation.
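To make the two paradigms concrete, the following is a minimal, hypothetical sketch in PyTorch, not the project's actual method: feature_importance produces a gradient-times-input saliency map (explanation by important features), while similar_instances retrieves the training examples closest to the query in the model's embedding space (explanation by similar instances). The names model, embed, and train_xs are illustrative assumptions.

    # Illustrative sketch only -- assumed names, not the project's actual method.
    import torch
    import torch.nn.functional as F

    def feature_importance(model, x, target):
        # Explanation by important features: gradient-times-input saliency.
        # `model` is an assumed classifier mapping a (1, d) input to class scores.
        x = x.clone().detach().requires_grad_(True)
        model(x)[0, target].backward()
        return (x.grad * x).detach()  # per-feature contribution to the target score

    def similar_instances(embed, x, train_xs, k=3):
        # Explanation by similar instances: nearest training examples in the
        # model's embedding space. `embed` is an assumed feature extractor.
        with torch.no_grad():
            q = F.normalize(embed(x), dim=1)          # (1, d) query embedding
            db = F.normalize(embed(train_xs), dim=1)  # (n, d) training embeddings
            sims = (db @ q.T).squeeze(1)              # cosine similarity to query
        return sims.topk(k).indices                   # indices of the k most similar

A unification framework in the spirit of the abstract would combine both kinds of evidence for the same prediction; the sketch above only shows the two ingredients separately.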
Academic Significance and Societal Importance of the Research Achievements

While deep learning models boast high prediction and recognition accuracy, they generally have very complex structures, making it difficult for users to discern the rationale behind a model's decisions. For this reason, deep learning models are commonly regarded as "black boxes." Because of this "black-box" nature, it is difficult to use deep learning models as-is to support important human decision-making (e.g., loan screening or medical diagnosis). The explanation methods developed in this research can mitigate the "black-box" nature of deep learning models. As a result, users can employ highly accurate deep learning models as decision-making aids while inspecting the rationale behind their decisions.
Report (4 results)
Research Products (13 results)