2021 Fiscal Year Annual Research Report
Neural Machine Translation Based on Bilingual Resources Extracted from Multimodal Data
Project/Area Number | 19K20343 |
Research Institution | Kyoto University |
Principal Investigator | チョ シンキ, Kyoto University, Graduate School of Informatics, Program-Specific Associate Professor (70784891) |
Project Period (FY) | 2019-04-01 – 2022-03-31 |
Keywords | Machine Translation / Multimodal |
Outline of Annual Research Achievements |
In FY 2021, we mainly studied the following topics to improve multimodal neural machine translation (MNMT).
1. MNMT with comparable sentences: we organized a shared task at the 8th Workshop on Asian Translation (WAT 2021) using the dataset we created in FY 2020, and our system achieved the best performance in this shared task.
2. MNMT with semantic image regions and word-region alignment: we finalized this work and published it in two renowned international journals, Neurocomputing and IEEE/ACM TASLP (a generic illustration of attention over image regions is sketched after this record).
3. Video-guided machine translation (VMT): we improved our VMT model with a spatial hierarchical attention network by investigating and analyzing spatial features, studying spatial feature filtering, and conducting experiments on the How2 dataset.
|
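The achievements above rely on attention between target-language text and visual region features. Below is a minimal, illustrative PyTorch sketch of generic cross-attention from text states to image region features, the common mechanism underlying word-region alignment in MNMT. It is not the project's actual implementation; all module names, dimensions, and variables are hypothetical.

import torch
import torch.nn as nn

class TextRegionCrossAttention(nn.Module):
    # Generic cross-attention from text states to image region features
    # (hypothetical module, for illustration only).
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_states, region_feats):
        # text_states: (batch, n_words, d_model) text representations
        # region_feats: (batch, n_regions, d_model) projected region features
        attended, weights = self.attn(text_states, region_feats, region_feats)
        # "weights" has shape (batch, n_words, n_regions) and can be read as
        # a soft word-region alignment.
        return self.norm(text_states + attended), weights

# Toy usage with random tensors standing in for real text and region features.
text = torch.randn(2, 7, 512)       # 7 word positions
regions = torch.randn(2, 36, 512)   # 36 detected image regions
output, alignment = TextRegionCrossAttention()(text, regions)
print(output.shape, alignment.shape)  # (2, 7, 512) and (2, 7, 36)

In word-region alignment approaches, such attention weights are typically what gets supervised or regularized; the sketch shows only the plain unsupervised attention form.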
Research Products (11 results)