Publicly Offered Research
Grant-in-Aid for Scientific Research on Innovative Areas (Research in a proposed research area)
Temporal relation classification (TRC) is the core task in Temporal Information Extraction (TIE); it aims to identify the temporal order (before, after, simultaneous, during, etc.) between two entities (events, time expressions, or the Document Creation Time, DCT). TRC has great potential for practical applications, such as extracting event timelines from news articles. We aim to explore deep neural networks for the TRC task from three different aspects: 1) neural network structure, 2) task setting, and 3) multilingual corpora.
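To make the task input/output concrete, the following minimal sketch represents one TRC instance; the sentence, entity pair, and label are invented examples, not taken from any corpus mentioned in the report.

```python
# A TRC instance: given two annotated entities in a text, predict their
# temporal order. The sentence and label below are invented examples.
sentence = "The company filed for bankruptcy after the market crashed."
pair = ("filed", "crashed")   # an event-event (E-E) entity pair
gold_label = "AFTER"          # "filed" happens after "crashed"

# Entity pairs come in the three TLINK types named in the report:
#   E-E: event-event, E-T: event-time expression, E-D: event-DCT
instance = {"pair": pair, "type": "E-E", "label": gold_label}
```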
According to our research plan, we developed neural network models to improve state-of-the-art temporal relation classification (TRC) performance. We explored three research directions:

1) Neural network classification models can be improved in several respects: deeper network layers, attention mechanisms, and the integration of additional features have proved effective in many tasks. We first introduced recent deep transfer learning models (i.e., BERT) into the TRC task. The empirical results showed significant improvements over existing shallow neural models.

2) Existing TRC research treats E-E, E-T, and E-D as three separate TLINK classification targets. We experimented with training all three TLINK types in a multi-task learning scenario, which effectively exploits all of the data and significantly improves over the baseline.

3) Multilinguality is an important topic in NLP. Several high-quality Japanese TRC corpora have been constructed in recent years, such as BCCWJ-Timebank. We performed additional experiments to evaluate the feasibility of our models on the Japanese BCCWJ-Timebank; our model outperforms the existing SOTA models by a large margin.

In summary, we developed TRC models with SOTA performance. The paper was accepted by a top international conference for natural language processing: EMNLP (Findings volume). We also explored 'end-to-end information extraction' and 'temporal question answering' tasks, which helped us understand the TRC task from different perspectives. These two papers were accepted by EMNLP-Findings and ACL-RepL4NLP.
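The multi-task setup described in direction 2 (a shared encoder with separate classification heads for the E-E, E-T, and E-D TLINK types) can be sketched as follows. This is a toy illustration only: the hidden size, label set, and random-weight "encoder" are assumptions standing in for the BERT-based architecture, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 16
LABELS = ["BEFORE", "AFTER", "SIMULTANEOUS", "VAGUE"]  # illustrative label set


class MultiTaskTRC:
    """Shared representation layer + one softmax head per TLINK type.

    All three TLINK types (E-E, E-T, E-D) share W_shared, so every
    training example updates the common parameters -- the core idea of
    the multi-task setting described above.
    """

    def __init__(self, hidden=HIDDEN, n_labels=len(LABELS)):
        self.W_shared = rng.normal(size=(hidden, hidden))  # shared layer
        self.heads = {  # task-specific classification heads
            t: rng.normal(size=(hidden, n_labels))
            for t in ("E-E", "E-T", "E-D")
        }

    def forward(self, pair_vec, task):
        h = np.tanh(pair_vec @ self.W_shared)  # shared representation
        logits = h @ self.heads[task]          # task-specific head
        e = np.exp(logits - logits.max())
        return e / e.sum()                     # softmax over labels


model = MultiTaskTRC()
pair_vec = rng.normal(size=HIDDEN)  # stand-in encoding of an entity pair
probs = model.forward(pair_vec, "E-T")
pred = LABELS[int(np.argmax(probs))]
```

In the real system the pair encoding would come from BERT rather than a random vector; the sketch only shows how one shared layer can serve three TLINK classification targets at once.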
Since FY2020 (Reiwa 2) was the final year of the project, this section is left blank.
Journal Article: 3 results (of which Int'l Joint Research: 3, Peer Reviewed: 3); Presentation: 2 results; Remarks: 2 results
- Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Findings volume, pp. 1352-1357. DOI: 10.18653/v1/2020.findings-emnlp.121
- Proceedings of the 5th Workshop on Representation Learning for NLP (ACL-RepL4NLP), pp. 55-60. DOI: 10.18653/v1/2020.repl4nlp-1.8
- Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Findings volume, pp. 236-246. DOI: 10.18653/v1/2020.findings-emnlp.23
https://github.com/racerandom/NeuralTime
https://github.com/WindChimeRan/OpenJERE