Research Area | Chronogenesis: How the Mind Generates Time
Project/Area Number | 21H00308
Research Category | Grant-in-Aid for Scientific Research on Innovative Areas (Research in a proposed research area)
Allocation Type | Single-year Grants
Review Section | Complex Areas
Research Institution | Kyoto University
Principal Investigator | Cheng Fei (程 飛), Kyoto University, Graduate School of Informatics, Program-Specific Assistant Professor (70801570)
Project Period (FY) | 2021-04-01 – 2023-03-31
Project Status | Completed (FY2022)
Budget Amount *Note | ¥5,200,000 (Direct Cost: ¥4,000,000, Indirect Cost: ¥1,200,000)
FY2022: ¥2,600,000 (Direct Cost: ¥2,000,000, Indirect Cost: ¥600,000)
FY2021: ¥2,600,000 (Direct Cost: ¥2,000,000, Indirect Cost: ¥600,000)
Keywords | Temporal Reasoning / Commonsense Reasoning / Large Language Model / Weak Supervision / Temporal Knowledge / Deep Neural Networks / Transfer Learning / Knowledge Pre-training / Contrastive Learning / Natural Language / Neural Networks / Deep Learning
Outline of Research at the Start |
We design a series of empirical experiments to investigate the feasibility of exploiting temporal knowledge as supervision during pretraining. We plan to develop sufficient computing environments to accelerate training progress. We believe that temporal-aware representations will play an important role in achieving a better understanding of time. Several extended topics can then be explored, such as how the human brain recognizes duration and frequency scales in language.
|
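The idea of using temporal knowledge as pretraining supervision could be sketched as follows. This is a minimal illustration, not the project's actual pipeline: it builds masked-prediction instances in which the masked span is a temporal expression, so that predicting it supplies temporal supervision. The regex, the `[MASK]` convention, and the function name are all assumptions.

```python
import re

# Hypothetical pattern for explicit temporal expressions, optionally
# preceded by a numeric quantity (e.g. "3 days", "week").
TEMPORAL = re.compile(r"\b(\d+\s+)?(second|minute|hour|day|week|month|year)s?\b")

def make_temporal_instances(text: str):
    """Yield (masked_text, answer) pairs, one per temporal mention in `text`."""
    instances = []
    for m in TEMPORAL.finditer(text):
        # Replace the matched temporal span with a mask token; the model's
        # training target is the original span.
        masked = text[:m.start()] + "[MASK]" + text[m.end():]
        instances.append((masked, m.group(0)))
    return instances

print(make_temporal_instances("The trip took three days and one week of planning."))
```

Each sentence with a temporal mention thus yields one cloze-style training instance per mention, without any human annotation.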
Outline of Annual Research Achievements |
Large language models (LLMs) often lack the ability to reason about numerical and temporal knowledge, such as how long an event lasts or how frequently it occurs. Enhancing the reasoning capability of off-the-shelf LLMs with large-scale weak supervision has therefore become a crucial topic. We relieved the reliance on human annotation by proposing a bimodal voting strategy to obtain high-quality semi-supervised temporal knowledge. We re-trained off-the-shelf LLMs on this weak supervision and observed significant improvements in temporal commonsense reasoning. We also explored a novel approach for identifying semantic relations (including temporal relations) between two events by revealing the labels of the most similar training examples. Several of our papers were accepted at top AI conferences (EMNLP, EACL) and at domestic conferences.
|
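As a rough illustration of the bimodal voting idea described above: two independent weak labelers annotate each example, and only examples on which both agree are kept as high-precision weak supervision. The labelers, duration classes, and toy lexicon below are invented for this sketch and are not the project's actual components.

```python
DURATION_CLASSES = ["seconds", "minutes", "hours", "days", "years"]

def pattern_labeler(sentence: str):
    """Weak labeler 1: surface patterns over explicit duration mentions."""
    for cls in DURATION_CLASSES:
        if cls.rstrip("s") in sentence.lower():
            return cls
    return None

def lexicon_labeler(event: str):
    """Weak labeler 2: a tiny event-to-duration lexicon (stand-in for a model)."""
    lexicon = {"blink": "seconds", "meeting": "hours", "vacation": "days"}
    return lexicon.get(event)

def bimodal_vote(examples):
    """Keep only examples where both labelers agree (high-precision subset)."""
    kept = []
    for event, sentence in examples:
        a, b = pattern_labeler(sentence), lexicon_labeler(event)
        if a is not None and a == b:
            kept.append((event, sentence, a))
    return kept

examples = [
    ("meeting", "The meeting lasted two hours."),
    ("blink", "He finished it in a blink."),   # no explicit duration: dropped
    ("vacation", "Their vacation spanned ten days."),
]
print(bimodal_vote(examples))
```

Requiring agreement between two noisy sources trades recall for precision, which is the usual rationale when the harvested labels are to be used for re-training.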
Current Status of Research Progress (paragraph) |
Not entered, as FY2022 (Reiwa 4) was the final year.
|
Strategy for Future Research Activity |
Not entered, as FY2022 (Reiwa 4) was the final year.
|