Is "2019" Earlier than "2022"? Unsupervised Temporal Representation Learning via Contrastive Learning.

Research Project

Project/Area Number 23K16946
Research Category

Grant-in-Aid for Early-Career Scientists

Allocation Type Multi-year Fund
Review Section Basic Section 61030: Intelligent informatics-related
Research Institution Kyoto University

Principal Investigator

CHENG Fei (程 飛), Kyoto University, Graduate School of Informatics, Program-Specific Assistant Professor (70801570)

Project Period (FY) 2023-04-01 – 2025-03-31
Project Status Granted (Fiscal Year 2023)
Budget Amount
¥4,680,000 (Direct Cost: ¥3,600,000, Indirect Cost: ¥1,080,000)
Fiscal Year 2024: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Fiscal Year 2023: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Keywords Temporal reasoning / Large language model / Common sense reasoning / Numerical reasoning / Contrastive learning / Natural language processing / Large language models / Commonsense reasoning / Temporal commonsense reasoning
Outline of Research at the Start

Our method leverages contrastive learning to explicitly encourage two time expressions with a small numeric distance to have similar embeddings, while pushing apart the embeddings of two expressions that are numerically distant. We then empirically evaluate our models on several temporally aware downstream tasks (e.g., TempEval-3: temporal information extraction; Clinical TempEval: timeline generation) to verify the improvement gained from possessing temporal information.
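As an illustration only, here is a minimal sketch, under my own assumptions, of a margin-based contrastive loss that pulls together time expressions with a small numeric gap and pushes apart distant ones; the encoder, the closeness threshold, and the margin value are placeholders, not the project's actual implementation.

```python
# Sketch: margin-based contrastive loss over pairs of time-expression embeddings.
# Assumptions: embeddings come from some sentence/temporal encoder (not shown);
# close_threshold and margin are illustrative values.
import torch
import torch.nn.functional as F

def temporal_contrastive_loss(emb_a, emb_b, value_a, value_b,
                              close_threshold=1.0, margin=0.5):
    """emb_a, emb_b: (batch, dim) embeddings of paired time expressions.
    value_a, value_b: (batch,) numeric values parsed from the expressions (e.g., years)."""
    # Cosine distance between the paired embeddings.
    dist = 1.0 - F.cosine_similarity(emb_a, emb_b)
    # A pair counts as positive when its numeric gap is small.
    is_close = (torch.abs(value_a - value_b) <= close_threshold).float()
    # Positives are pulled together; negatives are pushed beyond the margin.
    loss = is_close * dist.pow(2) + (1.0 - is_close) * F.relu(margin - dist).pow(2)
    return loss.mean()

# Example: ("2019", "2020") is a positive pair; ("2019", "2022") is a negative one.
emb_a, emb_b = torch.randn(2, 768), torch.randn(2, 768)  # stand-ins for encoder outputs
loss = temporal_contrastive_loss(emb_a, emb_b,
                                 torch.tensor([2019.0, 2019.0]),
                                 torch.tensor([2020.0, 2022.0]))
```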

Outline of Annual Research Achievements

My research aims to improve the temporal reasoning capabilities (commonsense + numerical reasoning) of large language models (LLMs). I achieved two main things this year: (1) Inspired by Stanford University's Alpaca model, I proposed a method to distill Japanese knowledge from the powerful GPT-4 to improve open LLMs' capability for Japanese commonsense reasoning. (2) I developed a QA benchmark that can assess LLMs' reasoning capabilities across eight dimensions: common sense, mathematical operations, writing, etc. Our benchmark leverages the state-of-the-art GPT-4 as a judge to assess LLMs' outputs.
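To make the GPT-4-as-judge idea concrete, the following is a hedged sketch; the prompt wording, the 1-10 scale, and the openai client usage are my assumptions rather than the benchmark's actual code (the real implementation is linked under Remarks below).

```python
# Sketch: GPT-4 as a judge that rates a model's answer to a benchmark question.
# Assumptions: prompt text and 1-10 scale are illustrative; requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = (
    "You are an impartial judge. Rate the assistant's answer to the question "
    "on a scale of 1 to 10 and reply with the number only.\n\n"
    "Question: {question}\nAnswer: {answer}"
)

def judge(question: str, answer: str) -> int:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        temperature=0,
    )
    return int(response.choices[0].message.content.strip())
```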

Current Status of Research Progress

2: Research has progressed on the whole more than it was originally planned.

Reason

According to our plan, we spent the first year preparing the design of the model and conducting some preliminary experiments. Our idea is to leverage contrastive learning to train LLMs on corpora containing rich temporal and numerical expressions, so as to obtain higher-quality temporal representations. Actual progress was quite smooth: following the latest LLM research worldwide this year, we distilled a small amount of high-quality instruction data (related to commonsense and numerical reasoning) from the powerful GPT-4. We then trained open LLMs, including OpenCalm, LLaMA 1, and LLaMA 2, on this data. All three models achieved improvements in reasoning performance, including commonsense reasoning and numerical reasoning. The paper was accepted at the international conference LREC-COLING 2024.
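For illustration, the sketch below shows one plausible way to turn Alpaca-style distilled instruction data (JSON records with "instruction" and "output" fields, an assumed format) into supervised fine-tuning examples for an open LLM; the file path, prompt template, and model name are assumptions, not the project's training script.

```python
# Sketch: preparing distilled instruction data for supervised fine-tuning.
# Assumptions: Alpaca-style JSON records, an English prompt template, and
# LLaMA-2 7B as the base model are all illustrative choices.
import json
from transformers import AutoTokenizer

TEMPLATE = ("Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n"
            "### Instruction:\n{instruction}\n\n### Response:\n{output}")

def load_sft_examples(path: str, model_name: str = "meta-llama/Llama-2-7b-hf"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    examples = []
    with open(path, encoding="utf-8") as f:
        for record in json.load(f):
            text = TEMPLATE.format(**record)
            examples.append(tokenizer(text, truncation=True, max_length=2048))
    return examples  # feed these to a standard causal-LM trainer
```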

Strategy for Future Research Activity

In the latest experiments, we found that a small amount of data distilled from GPT-4 can significantly improve LLMs' capabilities in commonsense and numerical reasoning. This leads us to consider whether we could prompt GPT-4 to intentionally create pairs of high-quality and low-quality temporal text directly. These pairs could serve as positive and negative examples for our contrastive learning, optimizing the representations of the latest open Japanese LLMs such as LLM-jp 13B, Swallow, etc. This approach would allow us to avoid handling large amounts of raw text and extracting low-relevance contrastive-learning targets from raw corpora. Our goal is to directly optimize the temporal representations and also to incorporate temporal reasoning into our QA benchmark.
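A hedged sketch of this direction follows: GPT-4 is prompted to produce a temporally correct sentence together with a minimally edited, temporally incorrect counterpart, which could later serve as a positive/negative pair for contrastive training. The prompt wording and output schema are my assumptions, not a finalized design.

```python
# Sketch: asking GPT-4 to generate one positive/negative temporal pair.
# Assumptions: prompt text and output schema are illustrative; requires OPENAI_API_KEY.
import json
from openai import OpenAI

client = OpenAI()

PAIR_PROMPT = (
    "Write one Japanese sentence whose temporal ordering of events is correct, "
    "then a minimally edited version whose temporal ordering is wrong. "
    'Reply as JSON with keys "positive" and "negative".'
)

def generate_temporal_pair() -> dict:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PAIR_PROMPT}],
        temperature=0.7,
    )
    return json.loads(response.choices[0].message.content)
```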

Report

(1 result)
  • 2023 Research-status Report
  • Research Products

    (4 results)

All: Int'l Joint Research (1 result), Journal Article (2 results) (of which Int'l Joint Research: 1 result, Peer Reviewed: 2 results, Open Access: 1 result), Remarks (1 result)

  • [Int'l Joint Research] Peking University/Xiaomi AI Lab (China)

    • Related Report
      2023 Research-status Report
  • [Journal Article] Rapidly Developing High-quality Instruction Data and Evaluation Benchmark for Large Language Models with Minimal Human Effort: A Case Study on Japanese (2024)

    • Author(s)
      Yikun Sun, Zhen Wan, Nobuhiro Ueda, Sakiko Yahata, Fei Cheng, Chenhui Chu, Sadao Kurohashi
    • Journal Title

      Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

      Volume: v1 Pages: 0-0

    • Related Report
      2023 Research-status Report
    • Peer Reviewed
  • [Journal Article] ComSearch: Equation Searching with Combinatorial Strategy for Solving Math Word Problems with Weak Supervision (2023)

    • Author(s)
      Qianying Liu, Wenyu Guan, Jianhao Shen, Fei Cheng, Sadao Kurohashi
    • Journal Title

      Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023)

      Volume: v1 Pages: 2549-2562

    • DOI

      10.18653/v1/2023.eacl-main.186

    • Related Report
      2023 Research-status Report
    • Peer Reviewed / Open Access / Int'l Joint Research
  • [Remarks] QA Benchmark for evaluating Japanese LLMs

    • URL

      https://github.com/ku-nlp/ja-vicuna-qa-benchmark

    • Related Report
      2023 Research-status Report

Published: 2023-04-13   Modified: 2024-12-25  
