
Temporal knowledge supervision for pre-training transfer learning models

Publicly Offered Research

Project Area Chronogenesis: how the mind generates time
Project/Area Number 21H00308
Research Category

Grant-in-Aid for Scientific Research on Innovative Areas (Research in a proposed research area)

Allocation Type Single-year Grants
Review Section Complex systems
Research Institution Kyoto University

Principal Investigator

CHENG Fei  Kyoto University, Graduate School of Informatics, Program-Specific Assistant Professor (70801570)

Project Period (FY) 2021-04-01 – 2023-03-31
Project Status Completed (Fiscal Year 2022)
Budget Amount
¥5,200,000 (Direct Cost: ¥4,000,000, Indirect Cost: ¥1,200,000)
Fiscal Year 2022: ¥2,600,000 (Direct Cost: ¥2,000,000, Indirect Cost: ¥600,000)
Fiscal Year 2021: ¥2,600,000 (Direct Cost: ¥2,000,000, Indirect Cost: ¥600,000)
Keywords Temporal Reasoning / Commonsense Reasoning / Large Language Model / Weak Supervision / Temporal Knowledge / Deep Neural Networks / Transfer Learning / Knowledge Pre-training / Contrastive Learning / Natural Language / Neural Networks / Deep Learning
Outline of Research at the Start

We design a series of empirical experiments to investigate the feasibility of exploiting temporal knowledge as supervision during pre-training. We plan to develop sufficient computing environments to accelerate training progress. We believe that temporal-aware representations will play an important role toward a better understanding of time. Several extended topics can then be explored, such as how the human brain recognizes duration and frequency scales in language. A small illustrative sketch of the pre-training idea follows.
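
The following is a minimal sketch, under stated assumptions and not the project's actual pipeline, of how temporal expressions in raw text could be turned into weak supervision for pre-training: temporal spans are located with simple patterns and masked, so that a model must recover duration and frequency cues. The regular expression and the [MASK] placeholder below are illustrative assumptions, not details taken from this report.

    import re

    # Illustrative patterns for durations ("2 hours") and frequencies ("weekly").
    TEMPORAL_PATTERN = re.compile(
        r"\b\d+\s+(?:seconds?|minutes?|hours?|days?|weeks?|months?|years?)\b"
        r"|\b(?:daily|weekly|monthly|yearly|annually)\b",
        re.IGNORECASE,
    )

    MASK_TOKEN = "[MASK]"  # assumed placeholder; the real token depends on the tokenizer used


    def make_temporal_mlm_examples(sentence: str):
        """Yield (masked_sentence, target_span) pairs, one per temporal expression."""
        for match in TEMPORAL_PATTERN.finditer(sentence):
            masked = sentence[:match.start()] + MASK_TOKEN + sentence[match.end():]
            yield masked, match.group(0)


    if __name__ == "__main__":
        text = "The meeting lasted 2 hours and is held weekly."
        for masked, target in make_temporal_mlm_examples(text):
            print(masked, "->", target)

Each (masked sentence, target span) pair can serve as one weakly supervised training example without any human annotation.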

Outline of Annual Research Achievements

Large language models (LLMs) often lack the ability to reason about numerical and temporal knowledge, such as how long an event lasts or how frequently it occurs. Enhancing the reasoning capability of off-the-shelf LLMs with large-scale weak supervision has therefore become a crucial topic. We reduced the reliance on human annotation and proposed a bimodal voting strategy to obtain high-quality semi-supervised temporal knowledge. We re-trained off-the-shelf LLMs on this semi-supervision and observed significant improvements in temporal commonsense reasoning. We also explored a novel approach for identifying semantic relations (including temporal relations) between two events by revealing the labels of the most similar training examples, sketched below. Several papers were accepted at top AI conferences (EMNLP, EACL) and at domestic conferences.
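
As an illustration of the nearest-neighbor idea described above, the sketch below shows how a relation label for a test event pair could be inferred by revealing the labels of its most similar training examples in an embedding space. This is a minimal, assumption-laden example, not the authors' implementation: the embeddings, labels, and the choice of cosine similarity with majority voting are toy placeholders.

    from collections import Counter

    import numpy as np


    def knn_relation_label(query_vec, train_vecs, train_labels, k=3):
        """Return the majority relation label among the k most similar
        training examples under cosine similarity."""
        q = query_vec / np.linalg.norm(query_vec)
        t = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)
        sims = t @ q                      # cosine similarity to every training pair
        top_k = np.argsort(-sims)[:k]     # indices of the k nearest neighbors
        votes = Counter(train_labels[i] for i in top_k)
        return votes.most_common(1)[0][0]


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        train_vecs = rng.normal(size=(100, 8))                       # toy event-pair embeddings
        train_labels = (["BEFORE", "AFTER", "OVERLAP"] * 34)[:100]   # toy relation labels
        query = train_vecs[0] + 0.01 * rng.normal(size=8)            # near the first training pair
        # Prints the majority label among the 5 nearest neighbors of the query.
        print(knn_relation_label(query, train_vecs, train_labels, k=5))

In practice the embeddings would come from an encoder fine-tuned on the relation data, and the retrieved labels can either be used directly or combined with the model's own prediction.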

Research Progress Status

Because FY2022 (Reiwa 4) is the final year of the project, this section is not filled in.

Strategy for Future Research Activity

Because FY2022 (Reiwa 4) is the final year of the project, this section is not filled in.

Report

(2 results)
  • 2022 Annual Research Report
  • 2021 Annual Research Report
  • Research Products

    (13 results)


Int'l Joint Research (1 result), Presentation (12 results, of which Int'l Joint Research: 5 results)

  • [Int'l Joint Research] Zhejiang University (China)

    • Related Report
      2022 Annual Research Report
  • [Presentation] Relation Extraction with Weighted Contrastive Pre-training on Distant Supervision (2023)

    • Author(s)
      Zhen Wan, Fei Cheng, Qianying Liu, Zhuoyuan Mao, Haiyue Song, Sadao Kurohashi
    • Organizer
      Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023). Findings Volume
    • Related Report
      2022 Annual Research Report
    • Int'l Joint Research
  • [Presentation] A Reasoning Framework Using Multiple External Tools Based on Large Language Models (2023)

    • Author(s)
Tatsuro Inaba, Hirokazu Kiyomaru, Fei Cheng, Sadao Kurohashi
    • Organizer
The 29th Annual Meeting of the Association for Natural Language Processing (Outstanding Paper Award, 11 of 579 submissions)
    • Related Report
      2022 Annual Research Report
  • [Presentation] Analysis of Data Affinity in Multi-task Learning for Temporal Relation Tasks (2023)

    • Author(s)
Mayuko Kimura, Lis Kanashiro Pereira, Masayuki Asahara, Fei Cheng, Ayako Ochi, Ichiro Kobayashi
    • Organizer
The 29th Annual Meeting of the Association for Natural Language Processing
    • Related Report
      2022 Annual Research Report
  • [Presentation] Verification of Multi-task Learning for Building Language Models that Understand Japanese Temporal Commonsense (2023)

    • Author(s)
Hikari Funabiki, Lis Kanashiro Pereira, Mayuko Kimura, Masayuki Asahara, Fei Cheng, Ayako Ochi, Ichiro Kobayashi
    • Organizer
The 29th Annual Meeting of the Association for Natural Language Processing
    • Related Report
      2022 Annual Research Report
  • [Presentation] Building a Time-Aware General-Purpose Language Model Using Multi-task Learning (2023)

    • Author(s)
Hikari Funabiki, Lis Kanashiro Pereira, Mayuko Kimura, Masayuki Asahara, Fei Cheng, Ayako Ochi, Ichiro Kobayashi
    • Organizer
The 37th Annual Conference of the Japanese Society for Artificial Intelligence (2023)
    • Related Report
      2022 Annual Research Report
  • [Presentation] ComSearch: Equation Searching with Combinatorial Strategy for Solving Math Word Problems with Weak Supervision (2023)

    • Author(s)
      Qianying Liu, Wenyu Guan, Jianhao Shen, Fei Cheng, Sadao Kurohashi
    • Organizer
      Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023)
    • Related Report
      2022 Annual Research Report
  • [Presentation] Improving Event Duration Question Answering by Leveraging Existing Temporal Information Extraction Data (2022)

    • Author(s)
      Felix Giovanni Virgo, Fei Cheng, Sadao Kurohashi
    • Organizer
      Proceedings of the 13th International Conference on Language Resources and Evaluation (LREC 2022)
    • Related Report
      2022 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Rescue Implicit and Long-tail Cases: Nearest Neighbor Relation Extraction (2022)

    • Author(s)
      Zhen Wan, Qianying Liu, Zhuoyuan Mao, Fei Cheng, Sadao Kurohashi, Jiwei Li
    • Organizer
      Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022)
    • Related Report
      2022 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Efforts Toward Building Language Models for Temporal Commonsense Understanding (2022)

    • Author(s)
Mayuko Kimura, Lis Kanashiro Pereira, Masayuki Asahara, Fei Cheng, Ayako Ochi, Ichiro Kobayashi
    • Organizer
The 36th Annual Conference of the Japanese Society for Artificial Intelligence (2022)
    • Related Report
      2022 Annual Research Report
  • [Presentation] Improving Event Duration Question Answering by Leveraging Existing Temporal Information Extraction Data (2022)

    • Author(s)
      Felix Giovanni Virgo, Fei Cheng, Sadao Kurohashi
    • Organizer
      Proceedings of the 13th International Conference on Language Resources and Evaluation (LREC 2022), Marseille, France, (2022.6).
    • Related Report
      2021 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Attention is All you Need for Robust Temporal Reasoning (2022)

    • Author(s)
      Lis Kanashiro Pereira, Kevin Duh, Fei Cheng, Masayuki Asahara, Ichiro Kobayashi
    • Organizer
      Proceedings of the 13th International Conference on Language Resources and Evaluation (LREC 2022), Marseille, France, (2022.6).
    • Related Report
      2021 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Improving Medical Relation Extraction with Distantly Supervised Pre-training (2022)

    • Author(s)
      Zhen Wan, Fei Cheng, Zhuoyuan Mao, Qianying Liu, Haiyue Song, Sadao Kurohashi
    • Organizer
The 28th Annual Meeting of the Association for Natural Language Processing, Hamamatsu, March 14, 2022
    • Related Report
      2021 Annual Research Report


Published: 2021-04-28   Modified: 2023-12-25  
