Research Project/Area Number |
21K17802
|
Research Institution | Nara Institute of Science and Technology |
Principal Investigator |
|
Project Period (FY) |
2021-04-01 – 2025-03-31
|
Keywords | adversarial training / NLP |
Outline of Research Achievements |
We have so far accomplished most of the research questions set out in the initial proposal. We have shown that applying perturbations to other layers of the network improves current adversarial training methods for natural language processing (NLP): beyond embedding-level perturbations, we explored perturbing other layers of the model, as well as combinations of layers, and compared these variations. Similarly, we have shown that multi-task learning also improves current adversarial training methods for NLP. We have also applied our models to Japanese NLP tasks and achieved similar improvements, showing that our methods are language-agnostic.
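The embedding-level perturbation idea described above can be illustrated with a minimal sketch. This is a hedged, illustrative example only, not the project's actual method: it assumes an FGSM-style, L2-normalized gradient step applied to the input embedding of a toy logistic-regression classifier, with the gradient computed analytically.

```python
import numpy as np

# Illustrative sketch of embedding-level adversarial perturbation.
# A tiny logistic-regression "classifier" stands in for an NLP model,
# and a single vector stands in for one token's embedding.
# All names and values here are hypothetical.

rng = np.random.default_rng(0)
w = rng.normal(size=5)    # hypothetical classifier weights
emb = rng.normal(size=5)  # hypothetical input embedding
y = 1.0                   # gold label

def loss(e):
    # Binary cross-entropy of sigmoid(w . e) against the gold label y.
    p = 1.0 / (1.0 + np.exp(-w @ e))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad_wrt_embedding(e):
    # Analytic gradient of the loss with respect to the input embedding.
    p = 1.0 / (1.0 + np.exp(-w @ e))
    return (p - y) * w

# FGSM-style step: move the embedding in the direction that increases
# the loss, with an L2-normalized step of size epsilon.
eps = 0.5
g = grad_wrt_embedding(emb)
delta = eps * g / (np.linalg.norm(g) + 1e-12)
adv_emb = emb + delta

clean_loss = loss(emb)
adv_loss = loss(adv_emb)  # larger than clean_loss for this smooth loss
```

In adversarial training, the model would then be updated on the loss at `adv_emb` (or a mix of clean and perturbed losses); perturbing other layers follows the same pattern, with `emb` replaced by that layer's hidden representation.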
|
Current Status of Research Progress (Category) |
2: Research has progressed, on the whole, smoothly
Reason
As for the remaining research question, using prior knowledge to guide the algorithm toward generating better perturbations, we have made progress through several experiments, and a draft is currently under submission.
|
Strategy for Future Research Activity |
For this final year, we plan to consolidate all results obtained into a major publication, such as a journal article. In addition, given the rapid progress and release of language models such as ChatGPT, we plan to apply our proposed methods to these models and verify whether they can further improve their performance. Recent research has shown that even models such as ChatGPT are susceptible to adversarial attacks, which can degrade their performance. Based on these results, we plan to prepare and submit another draft to a major conference.
|
Causes of Carryover |
In the 2023 fiscal year, the grant was mainly used to buy books and equipment.
In the 2024 fiscal year, I plan to use the remaining grant to attend major NLP conferences.
|