
Is "2019" Earlier than "2022"? Unsupervised Temporal Representation Learning via Contrastive Learning.

Research Project

Project/Area Number: 23K16946
Research Category: Grant-in-Aid for Early-Career Scientists
Allocation Type: Multi-year Fund
Review Section: Basic Section 61030: Intelligent informatics-related
Research Institution: Kyoto University

Principal Investigator

Fei Cheng, Kyoto University, Graduate School of Informatics, Program-Specific Assistant Professor (70801570)

Project Period (FY): 2023-04-01 – 2025-03-31
Project Status: Granted (FY2023)
Budget Amount *Note
Total: 4,680 thousand yen (Direct Cost: 3,600 thousand yen; Indirect Cost: 1,080 thousand yen)
FY2024: 2,340 thousand yen (Direct Cost: 1,800 thousand yen; Indirect Cost: 540 thousand yen)
FY2023: 2,340 thousand yen (Direct Cost: 1,800 thousand yen; Indirect Cost: 540 thousand yen)
Keywords: Temporal reasoning / Large language model / Common sense reasoning / Numerical reasoning / Contrastive learning / Natural language processing / Temporal common sense reasoning
Outline of Research at the Start

Our method leverages contrastive learning to explicitly encourage two time expressions with a close numeric distance to have similar embeddings, while pushing apart the embeddings of two expressions with a large distance. We then empirically evaluate our models on several temporally aware downstream tasks (e.g., TempEval-3: temporal information extraction; Clinical TempEval: timeline generation) to verify the improvement gained from richer temporal representations.
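
As an illustration only (a sketch of mine, not the project's released code), this objective maps naturally onto a distance-aware contrastive loss in PyTorch; the function name, batch layout, and the 5-year margin below are all assumptions:

    # Sketch of a distance-aware contrastive loss over time-expression
    # embeddings; names and the 5-year margin are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def temporal_contrastive_loss(emb_a, emb_b, years_a, years_b, margin=5.0):
        # emb_a, emb_b: [batch, dim] embeddings of paired time expressions
        # years_a, years_b: [batch] numeric year values (e.g., 2019, 2022)
        sim = F.cosine_similarity(emb_a, emb_b)                # [batch]
        close = ((years_a - years_b).abs() <= margin).float()  # 1 if near pair
        # Pull numerically close pairs together; push distant pairs apart.
        loss = close * (1.0 - sim) + (1.0 - close) * F.relu(sim)
        return loss.mean()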

Outline of Annual Research Achievements

My research aims to improve the temporal reasoning capabilities (common sense + numerical reasoning) of large language models (LLMs). I achieved two main things this year: (1) Inspired by Stanford University's Alpaca model, I proposed a method to distill Japanese knowledge from the powerful GPT-4 to improve open LLMs' capability for Japanese common sense reasoning. (2) I developed a QA benchmark that can assess LLMs' reasoning capabilities across eight dimensions: common sense, mathematical operations, writing, etc. The benchmark leverages the state-of-the-art GPT-4 as a judge to assess LLMs' outputs.
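
A minimal sketch of the GPT-4-as-judge pattern such a benchmark relies on, assuming the OpenAI Python client; the prompt wording and the 1-10 scale are my illustrative choices, not the benchmark's actual template:

    # Sketch of LLM-as-judge scoring; prompt wording is illustrative.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def judge_answer(question: str, answer: str) -> str:
        prompt = (
            "Rate the following answer on a 1-10 scale and briefly "
            "justify the score.\n"
            f"Question: {question}\nAnswer: {answer}"
        )
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return resp.choices[0].message.content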

Current Status of Research Progress

2: Research has progressed generally smoothly

Reason

According to our plan, we spent the first year designing the model and conducting preliminary experiments. Our idea is to leverage contrastive learning to train LLMs on corpora containing rich temporal and numerical expressions, in order to obtain better temporal representations. Actual progress was quite smooth: following the latest worldwide LLM research this year, we distilled a small amount of high-quality instruction data (related to common sense and numerical reasoning) from the powerful GPT-4. We then trained open LLMs, including OpenCALM, LLaMA 1, and LLaMA 2, on this data. All three models achieved improvements in reasoning performance, including common sense reasoning and numerical reasoning. The paper was accepted at the international conference LREC-COLING 2024.
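
For concreteness, a minimal sketch of the fine-tuning step using Hugging Face transformers; the model name, the single example pair, and the learning rate are assumptions, and a real run would batch the data and mask instruction tokens from the loss:

    # Sketch of supervised fine-tuning on distilled instruction data.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "meta-llama/Llama-2-7b-hf"  # illustrative model choice
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

    pairs = [("Is 2019 earlier than 2022?", "Yes, 2019 precedes 2022.")]
    for instruction, response in pairs:
        ids = tok(instruction + "\n" + response, return_tensors="pt").input_ids
        loss = model(input_ids=ids, labels=ids).loss  # standard causal-LM loss
        loss.backward()
        opt.step()
        opt.zero_grad()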

Strategy for Future Research Activity

In the latest experiments, we found that a small amount of data distilled from GPT-4 could significantly improve LLMs' common sense and numerical reasoning. This leads us to consider whether we could prompt GPT-4 to intentionally create pairs of high-quality and low-quality temporal text directly. These pairs could serve as positive and negative examples for our contrastive learning, optimizing the representations of the latest open Japanese LLMs such as LLM-jp 13B, Swallow, etc. This approach would let us avoid processing large amounts of raw text and extracting low-relevance contrastive targets from raw corpora. Our goal is to directly optimize the temporal representations and also incorporate temporal reasoning into our QA benchmark.
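
A sketch of what driving GPT-4 to create such pairs could look like; the prompt wording and the JSON schema are my assumptions, not the project's actual design:

    # Sketch: ask GPT-4 for one positive/negative temporal pair.
    import json
    from openai import OpenAI

    client = OpenAI()

    def make_temporal_pair(topic: str) -> dict:
        prompt = (
            f"Write two short sentences about {topic}: one with a correct "
            "temporal order (positive) and one with a wrong temporal order "
            "(negative). Reply as JSON with keys 'positive' and 'negative'."
        )
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
        )
        return json.loads(resp.choices[0].message.content)

The returned pair would then serve directly as the positive and negative examples in the contrastive loss sketched earlier.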

Report

(1 result)
  • 2023 Annual Research Report
  • Research Products

    (4 results)


All: Int'l Joint Research (1 result), Journal Articles (2 results) (of which int'l co-authored: 1, peer-reviewed: 2, open access: 1), Remarks (1 result)

  • [Int'l Joint Research] Peking University / Xiaomi AI Lab (China)

    • Related Report
      2023 Annual Research Report
  • [Journal Article] Rapidly Developing High-quality Instruction Data and Evaluation Benchmark for Large Language Models with Minimal Human Effort: A Case Study on Japanese (2024)

    • Author(s)
      Yikun Sun, Zhen Wan, Nobuhiro Ueda, Sakiko Yahata, Fei Cheng, Chenhui Chu, Sadao Kurohashi
    • Journal Title

      Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

      Volume: v1, Pages: 0-0

    • Related Report
      2023 Annual Research Report
    • Peer Reviewed
  • [Journal Article] ComSearch: Equation Searching with Combinatorial Strategy for Solving Math Word Problems with Weak Supervision (2023)

    • Author(s)
      Qianying Liu, Wenyu Guan, Jianhao Shen, Fei Cheng, Sadao Kurohashi
    • Journal Title

      Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023)

      Volume: v1, Pages: 2549-2562

    • DOI

      10.18653/v1/2023.eacl-main.186

    • Related Report
      2023 Annual Research Report
    • Peer Reviewed / Open Access / Int'l Joint Research
  • [Remarks] QA Benchmark for evaluating Japanese LLMs

    • URL

      https://github.com/ku-nlp/ja-vicuna-qa-benchmark

    • Related Report
      2023 Annual Research Report


Published: 2023-04-13   Last Modified: 2024-12-25
