Outline of Annual Research Achievements
Large language models (LLMs) often lack the ability to reason about numerical and temporal knowledge, such as how long an event lasts or how frequently it occurs. Enhancing the reasoning capability of off-the-shelf LLMs with large-scale weak supervision has therefore become a crucial topic. We reduced the reliance on human annotation and proposed a bimodal voting strategy to obtain high-quality, semi-supervised temporal knowledge. We re-trained off-the-shelf LLMs on this semi-supervised data and observed significant improvements in temporal commonsense reasoning. We also explored a novel approach for identifying semantic relations (including temporal relations) between two events by revealing the labels of the most similar training examples. Several of our papers were accepted at top AI conferences (EMNLP, EACL) and at domestic conferences.
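
The report does not spell out the bimodal voting strategy itself; the following is only a minimal sketch of one common form of agreement-based weak supervision, assuming two independent weak labelers whose votes must agree before an instance is kept. The labeler functions and the idea of filtering by agreement are illustrative assumptions, not the components of the published system.

```python
# Minimal sketch of agreement-based voting for weakly supervised temporal
# knowledge. Assumption: "bimodal voting" is approximated here as keeping an
# instance only when two independent weak labelers agree; the actual strategy
# in the published work may differ.
from typing import Callable, Iterable, Optional

def vote(event: str,
         labeler_a: Callable[[str], str],
         labeler_b: Callable[[str], str]) -> Optional[str]:
    """Return a label only when both weak labelers agree, otherwise None."""
    a, b = labeler_a(event), labeler_b(event)
    return a if a == b else None

def build_silver_data(events: Iterable[str],
                      labeler_a: Callable[[str], str],
                      labeler_b: Callable[[str], str]) -> list[tuple[str, str]]:
    """Collect (event, label) pairs on which the two weak sources agree.

    The resulting silver data could then be used to re-train an
    off-the-shelf LLM, as described in the summary above.
    """
    return [(ev, label) for ev in events
            if (label := vote(ev, labeler_a, labeler_b)) is not None]
```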
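The nearest-neighbor idea mentioned above (predicting the relation between two events from the labels of the most similar training examples) can be illustrated with the sketch below. The encoder, the pair-encoding scheme, the toy training pairs, and the choice of k are all illustrative assumptions, not the published setup.

```python
# Minimal sketch of retrieval-based relation identification: embed each event
# pair, retrieve the most similar training pairs, and predict the majority of
# their labels. Encoder and data below are assumptions for illustration.
from collections import Counter

import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

# Hypothetical labelled training pairs: (event 1, event 2, relation label).
train_pairs = [
    ("he finished dinner", "he washed the dishes", "BEFORE"),
    ("she boarded the train", "the train departed", "BEFORE"),
    ("the alarm rang", "he woke up", "BEFORE"),
    ("she read a book", "she drank tea", "SIMULTANEOUS"),
]

def encode_pair(e1: str, e2: str) -> np.ndarray:
    """Encode an event pair as a single vector (simple joint encoding)."""
    return encoder.encode(f"{e1} [SEP] {e2}")

train_vecs = np.stack([encode_pair(e1, e2) for e1, e2, _ in train_pairs])
train_labels = [label for _, _, label in train_pairs]

def predict_relation(e1: str, e2: str, k: int = 3) -> str:
    """Reveal the labels of the k most similar training pairs and take the majority."""
    q = encode_pair(e1, e2)
    sims = train_vecs @ q / (
        np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(q) + 1e-9
    )
    top_k = np.argsort(-sims)[:k]
    votes = Counter(train_labels[i] for i in top_k)
    return votes.most_common(1)[0][0]

print(predict_relation("he cooked dinner", "he ate dinner"))
```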