Is "2019" Earlier than "2022"? Unsupervised Temporal Representation Learning via Contrastive Learning.
Project/Area Number | 23K16946
Research Category | Grant-in-Aid for Early-Career Scientists
Allocation Type | Multi-year Fund
Review Section | Basic Section 61030: Intelligent informatics-related
Research Institution | Kyoto University
Principal Investigator | CHENG Fei, Kyoto University, Graduate School of Informatics, Program-Specific Assistant Professor (70801570)
Project Period (FY) | 2023-04-01 – 2025-03-31
Project Status | Granted (Fiscal Year 2023)
Budget Amount | ¥4,680,000 (Direct Cost: ¥3,600,000, Indirect Cost: ¥1,080,000)
Fiscal Year 2024: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Fiscal Year 2023: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Keywords | Temporal reasoning / Large language model / Commonsense reasoning / Numerical reasoning / Contrastive learning / Natural language processing / Temporal commonsense reasoning
Outline of Research at the Start
Our method leverages contrastive learning to explicitly encourage two time expressions that are numerically close to have similar embeddings, while pushing apart the embeddings of two expressions that are numerically distant. We then empirically evaluate our models on several temporally aware downstream tasks (e.g., TempEval-3: temporal information extraction; Clinical TempEval: timeline generation) to verify that the learned representations better capture temporal information.
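To make the idea concrete, here is a minimal PyTorch sketch of a distance-aware contrastive loss over pairs of time-expression embeddings; the function name, the closeness threshold, and the margin are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch (illustrative only): pull together embeddings of numerically close
# time expressions, push apart embeddings of numerically distant ones.
import torch
import torch.nn.functional as F

def time_contrastive_loss(emb_a, emb_b, value_a, value_b, close_threshold=1.0, margin=0.5):
    # emb_a, emb_b: (batch, dim) embeddings of paired time expressions
    # value_a, value_b: (batch,) numeric values parsed from the expressions (e.g., 2019., 2022.)
    cos_sim = F.cosine_similarity(emb_a, emb_b)                       # (batch,) pairwise similarity
    is_close = ((value_a - value_b).abs() <= close_threshold).float() # 1 if numerically close, else 0
    # Close pairs: drive similarity toward 1; distant pairs: penalize similarity above the margin.
    pos_loss = is_close * (1.0 - cos_sim)
    neg_loss = (1.0 - is_close) * F.relu(cos_sim - margin)
    return (pos_loss + neg_loss).mean()
```

Under a one-year threshold, for example, the pair ("2019", "2020") would act as a positive pair while ("2019", "2022") would act as a negative one.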
Outline of Annual Research Achievements
My research aims to improve the temporal reasoning capabilities (commonsense and numerical reasoning) of large language models (LLMs). I achieved two main things this year: (1) Inspired by Stanford University's Alpaca model, I proposed a method to distill Japanese knowledge from the powerful GPT-4 to improve open LLMs' capability for Japanese commonsense reasoning. (2) I developed a QA benchmark that can assess LLMs' reasoning capabilities across eight dimensions, including commonsense, mathematical operations, and writing. The benchmark leverages the state-of-the-art GPT-4 as a judge to assess LLMs' outputs.
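As a rough illustration of the judging step, the snippet below scores a model answer with GPT-4 via the OpenAI Python SDK; the prompt wording, the 1-10 scale, and the model name are placeholder assumptions and not the benchmark's actual rubric.

```python
# Illustrative GPT-4-as-judge scoring (prompt, scale, and model name are placeholders).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

JUDGE_PROMPT = (
    "Rate the assistant's answer to the question on a 1-10 scale for correctness "
    "and reasoning quality. Reply with a single integer.\n\n"
    "Question: {question}\nAnswer: {answer}"
)

def judge_answer(question: str, answer: str) -> int:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
    )
    return int(response.choices[0].message.content.strip())
```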
Current Status of Research Progress
2: Research has progressed on the whole more than it was originally planned.
Reason
According to our plan, we spent the first year designing the model and conducting preliminary experiments. Our idea is to leverage contrastive learning to train LLMs on corpora containing rich temporal and numerical expressions, in order to obtain higher-quality temporal representations. The actual progress was quite smooth: following the latest LLM research worldwide this year, we distilled a small amount of high-quality instruction data (related to commonsense and numerical reasoning) from the powerful GPT-4. We then trained open LLMs, including OpenCalm, LLaMA 1, and LLaMA 2, on this data. All three models achieved improvements in reasoning performance, including commonsense and numerical reasoning. The paper was accepted at the international conference LREC-COLING 2024.
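To illustrate the distillation step, the sketch below formats GPT-4-distilled (instruction, output) pairs into Alpaca-style training text before fine-tuning an open LLM; the template wording and field names are assumptions, not the exact format used in the paper.

```python
# Illustrative Alpaca-style formatting of GPT-4-distilled instruction data
# (template wording and field names are assumptions).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{output}"
)

def format_example(example: dict) -> str:
    # One distilled record becomes one training string for supervised fine-tuning.
    return ALPACA_TEMPLATE.format(**example)

record = {
    "instruction": "「2019年」と「2022年」では、どちらが先ですか？",
    "output": "2019年の方が先です。",
}
print(format_example(record))
```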
Strategy for Future Research Activity
In the latest experiment, we found that a small amount of data distilled from GPT-4 could significantly improve LLMs' commonsense and numerical reasoning capabilities. This leads us to consider whether we could prompt GPT-4 to intentionally create pairs of high-quality and low-quality temporal text directly. These pairs could serve as positive and negative examples for contrastive learning to optimize the representations of the latest open Japanese LLMs, such as LLM-jp 13B and Swallow. This approach would allow us to avoid processing large amounts of raw text and extracting low-relevance contrastive learning targets from raw corpora. Our goal is to directly optimize the temporal representations and also incorporate temporal reasoning into our QA benchmark.
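The sketch below illustrates how such GPT-4-generated pairs could feed an InfoNCE-style objective, treating the high-quality rewrite as the positive and the low-quality ones as negatives; the function signature and temperature are assumptions for illustration.

```python
# Illustrative InfoNCE-style loss over GPT-4-generated positive/negative temporal texts
# (signature and temperature are assumptions, not the project's implementation).
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, negatives, temperature=0.05):
    # anchor, positive: (dim,) embeddings; negatives: (num_neg, dim) embeddings
    pos_sim = F.cosine_similarity(anchor, positive, dim=0).unsqueeze(0)    # (1,)
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1)   # (num_neg,)
    logits = torch.cat([pos_sim, neg_sim]) / temperature
    # The positive sits at index 0; cross-entropy pushes it above all negatives.
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(logits.unsqueeze(0), target)
```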
Report (1 result)
Research Products (4 results)