Publicly Offered Research
Grant-in-Aid for Scientific Research on Innovative Areas (Research in a proposed research area)
We design a series of empirical experiments to investigate the feasibility of exploiting temporal knowledge as supervision during pretraining. We plan to develop sufficient computing environments to accelerate training progress. We believe temporal-aware representations will play an important role in better understanding of time. Several extended topics can be explored, such as how the human brain recognizes duration and frequency scales in language.
Large language models (LLMs) often lack the ability to reason about numerical and temporal knowledge, such as how long an event lasts or how frequently it occurs. Enhancing the reasoning capability of off-the-shelf LLMs with large-scale weak supervision has therefore become a crucial topic. We reduced the reliance on human annotation by proposing a bimodal voting strategy that obtains high-quality semi-supervised temporal knowledge. We re-trained off-the-shelf LLMs on this semi-supervision and observed significant improvement in temporal commonsense reasoning. We also explored a novel approach to identifying semantic relations (including temporal relations) between two events by revealing the labels of the most similar training examples, as sketched below. Several papers from this work were accepted at top AI conferences (EMNLP, EACL) and at domestic conferences.
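To make the retrieval-based idea concrete, the following Python sketch shows one minimal way such an approach could work: embed an unlabeled event pair, find the most similar labeled training pair, and reuse ("reveal") its relation label. This is an illustrative assumption, not the project's exact system; the model choice, the toy training data, and the helper names are all hypothetical.

# Minimal 1-NN sketch of relation labeling by revealing the label of the
# most similar training example. Model, data, and names are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical labeled training examples: (event1, event2) -> temporal relation
TRAIN = [
    (("He boiled the water", "He poured the tea"), "BEFORE"),
    (("She read a book", "She listened to music"), "SIMULTANEOUS"),
]

model = SentenceTransformer("all-MiniLM-L6-v2")

def encode_pair(e1: str, e2: str) -> np.ndarray:
    """Encode an event pair as a single normalized vector."""
    return model.encode(f"{e1} [SEP] {e2}", normalize_embeddings=True)

train_vecs = np.stack([encode_pair(e1, e2) for (e1, e2), _ in TRAIN])

def predict_relation(e1: str, e2: str) -> str:
    """Reveal the label of the most similar training example (1-NN)."""
    query = encode_pair(e1, e2)
    sims = train_vecs @ query  # cosine similarity (vectors are normalized)
    return TRAIN[int(np.argmax(sims))][1]

print(predict_relation("He woke up", "He ate breakfast"))  # e.g. "BEFORE"

In practice the training pool would contain many labeled examples and a k > 1 neighborhood vote, but the single-nearest-neighbor form above is enough to show the label-revealing mechanism.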
As FY2022 (Reiwa 4) was the final year of the project, this section is left blank.
Research Products: Int'l Joint Research (1 result); Presentations (12 results, of which Int'l Joint Research: 5 results)