2023 Fiscal Year Final Research Report
Japanese language processing system by human-like model based on brain and cognitive science
Project/Area Number | 21K18115 |
Research Category | Grant-in-Aid for Challenging Research (Pioneering) |
Allocation Type | Multi-year Fund |
Review Section | Medium-sized Section 2: Literature, linguistics, and related fields |
Research Institution | Shizuoka University |
Principal Investigator |
Co-Investigator (Kenkyū-buntansha) |
酒井 邦嘉 (SAKAI Kuniyoshi), The University of Tokyo, Graduate School of Arts and Sciences, Professor (10251216)
福井 直樹 (FUKUI Naoki), Sophia University, Graduate School of Languages and Linguistics, Professor (60208931)
Project Period (FY) | 2021-07-09 – 2024-03-31 |
Keywords | Natural language processing / Brain science / Cognitive science |
Outline of Final Research Achievements |
The goal of this study was to construct a more human-like language processing model by incorporating insights from neuroscience. We designed fMRI experimental tasks based on insights from theoretical linguistics and conducted experiments focusing on syntax and word order. Because it is unclear how individual pieces of information are processed internally in Transformers and their variants, which are widely used in the AI field, we analyzed how their internal components correspond to linguistic functions. Transformers lack explicit memory, require orders of magnitude more training data than humans (which can lead to redundancy), and have internal representations that are difficult to interpret. We therefore constructed and experimented with a model that forces an explicit, interpretable handling of memory in recurrent networks.
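The report does not give implementation details of the recurrent model with explicit memory. As a purely illustrative sketch of the general idea (and not the project's actual model), the following Python/NumPy fragment shows a recurrent cell whose memory is an explicit, slot-addressed array that can be read out and logged at every step, in contrast to the opaque internal states of a Transformer; all names, dimensions, and update rules here are assumptions.

# Illustrative sketch only: a recurrent cell with an explicit, inspectable
# slot-based memory. Parameters are random; training is omitted.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, MEM_SLOTS, MEM_DIM, VOCAB = 32, 8, 16, 100

W_xh = rng.normal(scale=0.1, size=(VOCAB, HIDDEN))
W_hh = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_hm = rng.normal(scale=0.1, size=(HIDDEN, MEM_DIM))   # write projection
W_mh = rng.normal(scale=0.1, size=(MEM_DIM, HIDDEN))   # read projection
W_key = rng.normal(scale=0.1, size=(HIDDEN, MEM_DIM))  # addressing keys

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(token_id, h, memory):
    """One recurrent step with content-based read/write to explicit memory."""
    x = np.zeros(VOCAB)
    x[token_id] = 1.0
    # Content-based addressing: attention weights over the memory slots.
    key = h @ W_key
    attn = softmax(memory @ key)        # shape (MEM_SLOTS,)
    read = attn @ memory                # weighted read vector
    # Recurrent update uses the input, the previous state, and what was read.
    h_new = np.tanh(x @ W_xh + h @ W_hh + read @ W_mh)
    # Write back into the most strongly addressed slot (a hard, inspectable write).
    slot = int(attn.argmax())
    memory[slot] = h_new @ W_hm
    return h_new, memory, attn

# Run over a toy token sequence; the attention weights and memory contents
# are explicit arrays that can be logged and interpreted at every step.
h = np.zeros(HIDDEN)
memory = np.zeros((MEM_SLOTS, MEM_DIM))
for t, tok in enumerate([5, 17, 42, 17, 5]):
    h, memory, attn = step(tok, h, memory)
    print(f"step {t}: most-used slot = {attn.argmax()}, attn = {np.round(attn, 2)}")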
Free Research Field | Natural language processing |
Academic Significance and Societal Importance of the Research Achievements |
In contrast to the current situation in which artificial models far removed from human mechanisms dominate engineering applications, this project identified points of contact among brain and cognitive science, theoretical linguistics, and natural language processing, and provided a starting point for building new models that integrate the insights of all three fields. Such attempts often appear inferior at any given moment when compared with larger engineering-oriented models on individual tasks. However, the scientific motivation of building "language models closer to humans" is also meaningful from engineering perspectives such as affinity with humans and explainable AI, with long-term breakthroughs in view, so the results can be regarded as having broad societal significance.