
FY2021 Annual Research Report

Temporal knowledge supervision for pre-training transfer learning models

Publicly Offered Research

Research Area: Chronogenesis: How the Mind Generates Time
Project/Area Number: 21H00308
Research Institution: Kyoto University

Principal Investigator

Fei Cheng, Kyoto University, Graduate School of Informatics, Program-Specific Assistant Professor (70801570)

Project Period (FY): 2021-04-01 – 2023-03-31
Keywords: temporal reasoning / transfer learning / knowledge pre-training / contrastive learning
Outline of Annual Research Achievements

Reasoning over temporal knowledge relevant to the events mentioned in a text can help us understand when the events begin, how long they last, how frequently they occur, and so on. This year:
We made significant progress on last year's research topic, using event `duration' as supervision for pre-training. Our paper was accepted by the 13th International Conference on Language Resources and Evaluation (LREC 2022).
We explored another research line: using distant supervision from existing human-annotated data for pre-training to improve the task of temporal relation extraction. The experimental results suggest that our de-noising contrastive learning method outperforms state-of-the-art methods (a contrastive-objective sketch follows below). We presented this work at the domestic conference NLP 2022 and have submitted it to an international conference.
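The report does not spell out the training objective, so the following is only a minimal sketch of a supervised contrastive loss over event-pair representations, where pairs sharing a (possibly noisy, distantly supervised) relation label act as positives. The function and variable names are illustrative assumptions, not the exact de-noising method used in the work.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(pair_emb, labels, temperature=0.1):
    """pair_emb: (N, d) event-pair representations; labels: (N,) relation ids."""
    z = F.normalize(pair_emb, dim=1)              # unit vectors -> cosine similarity
    sim = z @ z.t() / temperature                 # (N, N) similarity logits
    not_self = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    # positives: other examples carrying the same (distant) relation label
    pos = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self).float()
    # log-softmax denominator over all other examples (self excluded)
    exp_sim = torch.exp(sim) * not_self.float()
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))
    pos_count = pos.sum(dim=1)
    loss = -(log_prob * pos).sum(dim=1) / pos_count.clamp(min=1)
    return loss[pos_count > 0].mean()             # skip anchors with no positive
```

Pulling together same-relation pairs while pushing apart the rest is what lets noisy distant labels still provide a useful training signal.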

Current Status of Research Progress

2: Research has progressed rather smoothly.

Reason

According to the plan, we started the project with a detailed survey of related task definitions in temporal information extraction, previous approaches, and existing corpora.
We purchased several Graphics Processing Unit (GPU) servers for conducting a series of transfer-learning pre-training experiments.
We managed to leverage event `duration' as pre-training supervision to improve the task of temporal commonsense question answering; our method significantly outperforms other pre-training methods on various evaluation metrics (a sketch of such a duration-supervision objective follows below).
The paper has been accepted by the 13th International Conference on Language Resources and Evaluation (LREC 2022).
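As a rough illustration of duration supervision as an intermediate pre-training task, the sketch below classifies a marked event mention into coarse duration buckets with a standard sequence classifier. The bucket set, the [EVENT] marker tokens, and the base model are illustrative assumptions, not the reported setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed coarse duration buckets serving as classification labels.
DURATION_BUCKETS = ["seconds", "minutes", "hours", "days", "weeks", "months", "years"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(DURATION_BUCKETS))

# Mark the event span so the encoder knows which event the label refers to.
tokenizer.add_special_tokens({"additional_special_tokens": ["[EVENT]", "[/EVENT]"]})
model.resize_token_embeddings(len(tokenizer))

# One distantly labelled training example.
text = "She [EVENT] boiled [/EVENT] the water before making tea."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([1]))   # 1 -> "minutes"
outputs.loss.backward()                               # plain cross-entropy loss
```

After this intermediate step, the encoder weights would be fine-tuned on the downstream temporal commonsense QA task in the usual way.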

Strategy for Future Research Activity

Although transfer learning models (ELMo, GPT, BERT, etc.) have had a remarkable impact on the NLP community, temporal knowledge can hardly be learned explicitly from context.
Last year, we managed to leverage existing human-annotated event `duration' knowledge as pre-training supervision to improve the temporal commonsense question answering task.
This year, we plan to go further and explore the feasibility of extracting various kinds of temporal supervision (including duration, date, frequency, etc.) from large-scale raw corpora for pre-training transfer learning models; a pattern-based mining sketch follows below. We expect this research to benefit a broad range of downstream tasks.
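How such supervision would be mined from raw text is not specified in the report; the sketch below uses simple surface patterns as one plausible starting point. The regexes, signal types, and function name are illustrative assumptions, and a real pipeline would need far broader patterns plus noise filtering.

```python
import re

# Toy surface patterns for two kinds of temporal supervision signals.
DURATION = re.compile(
    r"\bfor\s+(?:about\s+)?(\d+|a|an|several)\s+"
    r"(second|minute|hour|day|week|month|year)s?\b", re.I)
FREQUENCY = re.compile(r"\b(every|once a|twice a)\s+(day|week|month|year)\b", re.I)

def mine_temporal_signals(sentence):
    """Return (signal_type, matched_span) pairs found in one sentence."""
    signals = [("duration", m.group(0)) for m in DURATION.finditer(sentence)]
    signals += [("frequency", m.group(0)) for m in FREQUENCY.finditer(sentence)]
    return signals

print(mine_temporal_signals("He jogged for about 30 minutes every day."))
# [('duration', 'for about 30 minutes'), ('frequency', 'every day')]
```

Spans mined this way could then serve as distant labels for the kind of intermediate pre-training objectives sketched above.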

  • Research Products

    (3 results)

All 2022

All Presentations (3 results) (including 2 at international conferences)

  • [Presentation] Improving Event Duration Question Answering by Leveraging Existing Temporal Information Extraction Data (2022)

    • Author(s)/Presenter(s)
      Felix Giovanni Virgo, Fei Cheng, Sadao Kurohashi
    • Conference name
      Proceedings of the 13th International Conference on Language Resources and Evaluation (LREC 2022), Marseille, France, (2022.6).
    • International conference
  • [Presentation] Attention is All you Need for Robust Temporal Reasoning (2022)

    • Author(s)/Presenter(s)
      Lis Kanashiro Pereira, Kevin Duh, Fei Cheng, Masayuki Asahara, Ichiro Kobayashi
    • Conference name
      Proceedings of the 13th International Conference on Language Resources and Evaluation (LREC 2022), Marseille, France, (2022.6).
    • International conference
  • [Presentation] Improving Medical Relation Extraction with Distantly Supervised Pre-training (2022)

    • Author(s)/Presenter(s)
      Zhen Wan, Fei Cheng, Zhuoyuan Mao, Qianying Liu, Haiyue Song, Sadao Kurohashi
    • Conference name
      The 28th Annual Meeting of the Association for Natural Language Processing (NLP 2022), Hamamatsu, (2022.3.14).


Published: 2022-12-28
