
Creating Auxiliary Questions for Explainable Evaluation of Machine Reading Comprehension

Research Project

Project/Area Number 20K23335
Research Category

Grant-in-Aid for Research Activity Start-up

Allocation Type: Multi-year Fund
Review Section: 1001: Information science, computer engineering, and related fields
Research Institution: National Institute of Informatics

Principal Investigator

Sugawara Saku  National Institute of Informatics, Digital Content and Media Sciences Research Division, Assistant Professor (10855894)

Project Period (FY) 2020-09-11 – 2023-03-31
Project Status Completed (Fiscal Year 2022)
Budget Amount
¥2,860,000 (Direct Cost: ¥2,200,000, Indirect Cost: ¥660,000)
Fiscal Year 2021: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2020: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Keywords: Natural Language Processing / Computational Linguistics / Natural Language Understanding / Text Reading Comprehension / Language Understanding / Machine Reading Comprehension / Question Answering
Outline of Research at the Start

Steadily developing systems that achieve language understanding requires precise analysis and evaluation of the language understanding process, but existing tasks have not provided sufficient explainability about what systems that achieve high accuracy are actually good at. This study focuses on machine reading comprehension, a task for evaluating language understanding, and aims to build a framework that enables detailed evaluation by decomposing the process of answering a reading comprehension question and posing the decomposed steps as auxiliary questions.

Outline of Final Research Achievements

Developing natural language understanding systems requires detailed analysis and evaluation of the language understanding process, but existing tasks do not sufficiently explain what capabilities a system actually has. This study focused on reading comprehension questions and constructed a new dataset that enables detailed evaluation by testing understanding of the rationale behind each answer. Using crowdsourcing, we collected rationale texts for the correct and incorrect answer options of existing multiple-choice reading comprehension questions, and from this rationale information we created an auxiliary set of multiple-choice questions that help determine whether a system answers the main question, including its rationale, in a consistent manner.
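The evaluation idea above, requiring a system to answer both the main question and its auxiliary rationale questions correctly before it is credited with consistent understanding, can be sketched as follows. This is a minimal illustration, not the project's actual dataset format or evaluation script; the field names (`main_pred`, `aux_preds`, etc.) are assumptions made for this example.

```python
# Illustrative sketch of a consistency metric over main + auxiliary questions.
# The data layout here is hypothetical; the real dataset format may differ.

def consistency_score(predictions):
    """Credit an item only if the system answers the main
    reading-comprehension question AND every auxiliary rationale
    question for that item correctly."""
    consistent = 0
    for item in predictions:
        main_ok = item["main_pred"] == item["main_gold"]
        aux_ok = all(p == g for p, g in zip(item["aux_preds"], item["aux_golds"]))
        if main_ok and aux_ok:
            consistent += 1
    return consistent / len(predictions)

# Example: two items; the second gets the main question right but
# fails an auxiliary rationale question, so it does not count.
preds = [
    {"main_pred": "B", "main_gold": "B", "aux_preds": ["A", "C"], "aux_golds": ["A", "C"]},
    {"main_pred": "D", "main_gold": "D", "aux_preds": ["B"], "aux_golds": ["A"]},
]
print(consistency_score(preds))  # 0.5
```

Plain per-question accuracy would score the second item's main answer as correct; the joint metric exposes that the rationale was not understood, which is the explainability gap the dataset targets.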

Academic Significance and Societal Importance of the Research Achievements

Building systems that achieve language understanding is one of the central goals of natural language processing. Steady development of such systems requires precise analysis and evaluation of language understanding, and the dataset produced by this study enables detailed evaluation by decomposing the process of answering a reading comprehension question into auxiliary questions. This reveals the limitations of current systems, and the dataset will play an important role in driving future improvements.

Report

(4 results)
  • 2022 Annual Research Report / Final Research Report (PDF)
  • 2021 Research-status Report
  • 2020 Research-status Report

Research Products

(5 results)


Int'l Joint Research (1 result), Journal Article (4 results) (of which Int'l Joint Research: 1, Open Access: 4, Peer Reviewed: 3)

  • [Int'l Joint Research] University College London (UK)

    • Related Report
      2020 Research-status Report
  • [Journal Article] Evaluating the Consistency of Logical Reasoning in Reading Comprehension Questions (2023)

    • Author(s)
      Akira Kawabata, Saku Sugawara
    • Journal Title

      Proceedings of the 29th Annual Meeting of the Association for Natural Language Processing

      Volume: - Pages: 2914-2919

    • Related Report
      2022 Annual Research Report
    • Open Access
  • [Journal Article] Improving the Robustness of QA Models to Challenge Sets with Variational Question-Answer Pair Generation (2021)

    • Author(s)
      Shinoda Kazutoshi、Sugawara Saku、Aizawa Akiko
    • Journal Title

      Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop

      Volume: - Pages: 197-214

    • DOI

      10.18653/v1/2021.acl-srw.21

    • Related Report
      2021 Research-status Report
    • Peer Reviewed / Open Access
  • [Journal Article] Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap (2021)

    • Author(s)
      Kazutoshi Shinoda, Saku Sugawara, Akiko Aizawa
    • Journal Title

      Proceedings of the 3rd Workshop on Machine Reading for Question Answering (MRQA), at the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)

      Volume: - Pages: 63-72

    • DOI

      10.18653/v1/2021.mrqa-1.6

    • Related Report
      2021 Research-status Report
    • Peer Reviewed / Open Access
  • [Journal Article] Benchmarking Machine Reading Comprehension: A Psychological Perspective (2021)

    • Author(s)
      Saku Sugawara, Pontus Stenetorp, Akiko Aizawa
    • Journal Title

      Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

      Volume: 1 Pages: 1592-1612

    • Related Report
      2020 Research-status Report
    • Peer Reviewed / Open Access / Int'l Joint Research

Published: 2020-09-29   Modified: 2024-01-30  
