
2023 Fiscal Year Final Research Report

Compositional Inference Systems based on Formal Semantics and Natural Language Processing

Research Project

Project/Area Number: 20K19868
Research Category: Grant-in-Aid for Early-Career Scientists
Allocation Type: Multi-year Fund
Review Section: Basic Section 61030: Intelligent informatics-related
Research Institution: The University of Tokyo (2021-2023); Institute of Physical and Chemical Research (2020)

Principal Investigator

Yanaka Hitomi  The University of Tokyo, Graduate School of Information Science and Technology, Associate Professor (10854581)

Project Period (FY): 2020-04-01 – 2024-03-31
Keywords: Natural Language Inference / Systematicity / Compositionality / Deep Learning / Natural Language Processing / Formal Semantics / Semantic Parsing
Outline of Final Research Achievements

Deep learning models have been well studied in natural language processing. However, the extent to which models capture the compositional meaning of sentences is not clear, and their robustness to unseen data is uncertain. In this study, we investigate the extent to which models capture the compositional meaning of sentences, and develop inference systems that consider the compositional meaning of sentences.
Specifically, we constructed English and Japanese benchmarks that test, via the systematicity of inference, whether models capture the compositional meaning of sentences, and used them to characterize the generalization capacity of current deep learning models. Furthermore, we developed inference systems that map sentences into semantic representations based on compositional semantics and perform inference over those representations. Some of the results were accepted at top international conferences and journals, such as ACL and TACL. Our benchmarks and inference systems have been made available for research use.
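The core idea of such a system is that sentence meanings are built compositionally from word meanings, and entailment is then decided over the resulting semantic representations. The following is a minimal illustrative sketch only, not the project's actual pipeline (which involves syntactic parsing and formal semantic representations): here quantifier meanings combine noun and verb predicates into truth conditions, and entailment is checked by enumerating all models over a toy domain. All names (`ENTITIES`, `pred`, `every`, `some`, `entails`) are hypothetical.

```python
from itertools import chain, combinations, product

# Illustrative sketch of compositional inference via model checking.
# NOT the project's actual system; all names here are hypothetical.

ENTITIES = [0, 1]                 # a tiny fixed domain of individuals
PREDICATES = ["dog", "bark"]      # predicate symbols in the toy lexicon

def pred(name):
    """Lexical meaning of a noun/verb: membership in its extension."""
    return lambda model: lambda x: x in model[name]

def every(noun, verb):
    """[[every]](noun)(verb): all noun-entities satisfy the verb."""
    return lambda m: all(verb(m)(x) for x in ENTITIES if noun(m)(x))

def some(noun, verb):
    """[[some]](noun)(verb): at least one noun-entity satisfies the verb."""
    return lambda m: any(verb(m)(x) for x in ENTITIES if noun(m)(x))

def all_models():
    """Enumerate every model: each predicate gets some subset of ENTITIES."""
    exts = list(chain.from_iterable(
        combinations(ENTITIES, r) for r in range(len(ENTITIES) + 1)))
    for combo in product(exts, repeat=len(PREDICATES)):
        yield {p: set(e) for p, e in zip(PREDICATES, combo)}

def entails(premise, hypothesis):
    """Premise entails hypothesis iff every model satisfying the
    premise also satisfies the hypothesis."""
    return all(hypothesis(m) for m in all_models() if premise(m))

# "Some dog barks" entails "Some barker is a dog" (conjunction is symmetric).
print(entails(some(pred("dog"), pred("bark")),
              some(pred("bark"), pred("dog"))))   # True
# "Every dog barks" does not entail "Some dog barks": in a model with no
# dogs, the premise is vacuously true but the hypothesis is false.
print(entails(every(pred("dog"), pred("bark")),
              some(pred("dog"), pred("bark"))))   # False
```

The second example shows why such systems are a useful probe of compositional meaning: the (non-)entailment follows from the logical behavior of the quantifiers themselves, not from surface word overlap, which is exactly the kind of systematic generalization the benchmarks described above test.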

Free Research Field

Natural Language Processing

Academic Significance and Societal Importance of the Research Achievements

The limitations in generalization over sentence meaning that this research identified in deep learning models bear directly on the reliability of large language models and thus have significant societal impact. Moreover, most existing natural language inference benchmarks are in English, and the scarcity of Japanese natural language inference benchmarks is a serious obstacle to the advancement of Japanese language processing technology. The Japanese natural language inference benchmarks produced as part of this research have all been released in a form available for research use, contributing to the foundational analysis of large language models and to the evaluation of Japanese language processing technology.


Published: 2025-01-30  
