2022 Fiscal Year Final Research Report
Uncertainty in Deep Learning for Understanding Data Mechanism and Anomaly Detection
Project/Area Number | 19K20344 |
Research Category | Grant-in-Aid for Early-Career Scientists |
Allocation Type | Multi-year Fund |
Review Section | Basic Section 61030: Intelligent informatics-related |
Research Institution | Osaka University (2020-2022), Kobe University (2019) |
Principal Investigator | |
Project Period (FY) | 2019-04-01 – 2023-03-31 |
Keywords | Deep learning / Uncertainty / Anomaly detection / Interpretability |
Outline of Final Research Achievements | In this study, I addressed the following problems while maintaining the flexibility of deep learning. (a1) I analyzed the output distributions of deep generative models, detected inappropriate generalization to anomalous data, and proposed "deep learning that knows what it does not know." (a2) Building on these results, I detected data that deep learning has learned insufficiently, which accelerated classification learning and enhanced reliability and interpretability. (b1) By proposing a structured deep generative model, I enabled anomaly detection for new data groups (zero-shot anomaly detection). (b2) I generalized these findings to other generative models, proposing algorithms and model structures for detecting representative anomalies and extracting the semantic meaning of a dataset. (A minimal illustrative sketch of the likelihood-based idea underlying (a1) appears after this record.) |
Free Research Field | Machine learning |
Academic Significance and Societal Importance of the Research Achievements | While deep learning can accomplish a wide variety of tasks given nothing but data, its reliability and interpretability have been called into question, and when it fails, there has long been thought to be no remedy other than collecting more data. This research showed that, particularly in anomaly detection, the flexibility of deep learning can actually work against it, and proposed analysis methods to resolve this problem. It also showed that structuring deep generative models makes it possible to incorporate expert knowledge and analysis results. These achievements go a long way toward resolving fundamental problems of deep learning and contribute to realizing a future society in which people and AI technologies can coexist with confidence. |
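As a rough illustration of the likelihood-based idea referenced in (a1) of the outline, the sketch below trains a small variational autoencoder on normal data and scores each input by its negative ELBO (an upper bound on negative log-likelihood), flagging high-scoring inputs as anomalous. This is a minimal sketch under assumed settings, not the models or algorithms proposed in the project; it shows the baseline generative-model scoring whose failure mode (inappropriate generalization to anomalous data) the project analyzes. The `VAE` class, the `fit` and `anomaly_scores` helpers, and all architecture sizes are hypothetical.

```python
# Minimal sketch (assumed, illustrative only) of likelihood-based anomaly
# detection with a deep generative model: train a small VAE on normal data,
# then treat the per-sample negative ELBO as an anomaly score.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def neg_elbo(self, x):
        # Per-sample negative ELBO = reconstruction term + KL(q(z|x) || p(z)).
        # Inputs are assumed to be flattened and scaled to [0, 1].
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = F.binary_cross_entropy_with_logits(
            self.dec(z), x, reduction="none").sum(dim=1)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1)
        return recon + kl


def fit(model, loader, epochs=10, lr=1e-3):
    # Train on "normal" data only; labels from the loader are ignored.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            loss = model.neg_elbo(x.flatten(1)).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()


def anomaly_scores(model, x):
    # Higher score = lower model likelihood = more anomalous under this baseline.
    with torch.no_grad():
        return model.neg_elbo(x.flatten(1))
```

In practice one would pick a threshold from scores on held-out normal data, for example a high quantile. The point of the project is precisely that this naive score can misbehave, since a flexible generative model may assign high likelihood to anomalous inputs, motivating the proposed analyses and structured models.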