2018 Fiscal Year Final Research Report
Generalized competitive learning for improving and interpreting neural networks
Project/Area Number | 16K00339 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Multi-year Fund |
Section | General |
Research Field | Soft computing |
Research Institution | Tokai University |
Principal Investigator | Kamimura Ryotaro, Tokai University, Information Education Center, Part-time Lecturer (80176643) |
Project Period (FY) | 2016-04-01 – 2019-03-31 |
Keywords | Neural networks / Information theory / Generalization ability / Interpretation / Information compression |
Outline of Final Research Achievements | This study aimed to extend competitive learning to a more generalized form. Generalized competitive learning can be used to maximize mutual information between neurons and input patterns, disentangling complex patterns into a set of simple features. The maximized mutual information can then be compressed into the simplest possible neural networks, namely networks without hidden layers, which makes it easier to interpret the inference mechanism of complex neural networks through these simplest networks. When applied to real business data sets, information maximization and compression were found to produce simpler and more easily interpretable representations of the relations between inputs and outputs. |
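As a rough illustration of the two mechanisms summarized above, the following Python sketch computes the mutual information between competitive units and input patterns and collapses a chain of layer weight matrices into a single input-to-output matrix with no hidden layers. The Gaussian unit activation, the equiprobable-pattern assumption, and the matrix-product form of compression are assumptions made here for illustration; the report does not specify the exact formulations used in the study.

```python
import numpy as np

def firing_probabilities(X, W, beta=1.0):
    """Normalized firing probabilities p(j|s) of competitive units.

    Each unit j responds to pattern s with a Gaussian of the distance
    between input x_s and its weight vector w_j (a common choice in
    competitive learning; assumed here, not taken from the report).
    """
    # Squared Euclidean distances, shape (n_patterns, n_units).
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    act = np.exp(-beta * d2)
    return act / act.sum(axis=1, keepdims=True)

def mutual_information(p_js):
    """Mutual information between units and patterns,
    assuming equiprobable patterns p(s) = 1/S."""
    S = p_js.shape[0]
    p_s = 1.0 / S
    p_j = p_js.mean(axis=0)            # p(j) = sum_s p(s) p(j|s)
    ratio = np.clip(p_js / p_j, 1e-12, None)
    return float((p_s * p_js * np.log(ratio)).sum())

def compress_layers(weight_mats):
    """Collapse successive layer weight matrices into one input-to-output
    matrix (a linear, hidden-layer-free approximation), so the relations
    between inputs and outputs can be read from a single matrix."""
    W = weight_mats[0]
    for Wk in weight_mats[1:]:
        W = W @ Wk
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))      # 100 patterns, 5 input features
    W = rng.normal(size=(10, 5))       # 10 competitive units
    print("MI:", mutual_information(firing_probabilities(X, W)))
    print("compressed shape:",
          compress_layers([rng.normal(size=(5, 8)),
                           rng.normal(size=(8, 3))]).shape)
```

In this reading, maximizing the mutual information encourages each unit to specialize on a distinct group of patterns, and the compressed matrix is what would be inspected to interpret input-output relations.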
Free Research Field | Neural networks |
Academic Significance and Societal Importance of the Research Achievements | The significance can be summarized in three points: simplification of information maximization methods, compression of information, and the development of interpretable neural networks. First, it was found that the control of the information contained in a neural network, which had been the greatest outstanding problem, can be achieved with very simple competitive learning. In addition, compressing the information became easier, and the compressed information could be read out, demonstrating its potential application to interpretation. By making the inference process interpretable, the study showed the potential to develop into methods that are more widely accepted by society. |