Research Project/Area Number | 23K11049 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Multi-year Fund |
Application Category | General |
Review Section | Basic Section 60050: Software-related |
Research Institution | The University of Tokyo |
Principal Investigator | 章 甫源, The University of Tokyo, Graduate School of Information Science and Technology, Project Assistant Professor (80965070) |
Co-Investigator | 趙 建軍, Kyushu University, Faculty of Information Science and Electrical Engineering, Professor (20299580) |
Project Period (FY) | 2023-04-01 – 2026-03-31 |
Project Status | Granted (FY2023) |
Budget Amount *Note | 4,680 thousand yen (Direct Cost: 3,600 thousand yen; Indirect Cost: 1,080 thousand yen)
FY2025: 1,430 thousand yen (Direct Cost: 1,100 thousand yen; Indirect Cost: 330 thousand yen)
FY2024: 1,690 thousand yen (Direct Cost: 1,300 thousand yen; Indirect Cost: 390 thousand yen)
FY2023: 1,560 thousand yen (Direct Cost: 1,200 thousand yen; Indirect Cost: 360 thousand yen) |
Keywords | Adversarial Robustness / Deep Neural Networks |
Outline of Research at the Start |
Deep neural networks (DNNs) have achieved tremendous success in various areas. However, they are vulnerable to adversarial examples, which poses severe security risks for DNNs. We propose to develop practical and scalable techniques for quality assurance of the adversarial robustness of DNNs.
|
Outline of Annual Research Achievements |
We have completed one of the research topics outlined in our research proposal: developing property-driven, feedback-directed fuzzing of deep neural networks. We developed DeepRover, a fuzzing-based blackbox adversarial attack for evaluating the adversarial robustness of deep neural networks used for image classification. DeepRover outperforms state-of-the-art blackbox attacks in effectiveness, subtlety, and query efficiency: it finds adversarial examples at a higher success rate with fewer queries, and the adversarial examples it generates are subtler than those produced by other approaches.
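The blackbox setting described above can be illustrated with a minimal sketch. This is a generic random-search attack against a toy classifier, not DeepRover's actual algorithm; `blackbox_attack`, `toy_predict`, and all parameter values are assumptions invented for the example. The attacker only queries the model's predicted label and searches for a perturbation within an L-infinity ball that flips it:

```python
import random

def blackbox_attack(predict, x, true_label, eps=0.3, queries=500, seed=0):
    """Random-search blackbox attack: sample perturbations of x inside an
    L-inf ball of radius eps, querying only predict(), until the label flips."""
    rng = random.Random(seed)
    for _ in range(queries):
        cand = [xi + rng.uniform(-eps, eps) for xi in x]
        if predict(cand) != true_label:
            return cand  # adversarial example found
    return None  # no adversarial example within the query budget

# Toy "model": a linear threshold classifier over 2-D inputs.
def toy_predict(v):
    return 1 if v[0] + v[1] > 1.0 else 0

x = [0.55, 0.55]  # classified as 1 (sum = 1.1)
adv = blackbox_attack(toy_predict, x, true_label=1)
```

Real attacks in this family replace the uniform sampling with feedback-guided search (e.g., keeping perturbations that reduce the model's confidence), which is what makes them query-efficient.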
|
Current Status of Research Progress (Category) |
2: Research is progressing generally as planned
Reason |
One research topic in our proposal is to develop property-driven, fuzzing-based blackbox attacks for deep neural networks, with the aim of better evaluating their adversarial robustness. We have successfully developed such an attack, DeepRover, meeting the corresponding research objectives of our project. DeepRover outperforms state-of-the-art approaches and provides a valuable method for estimating and understanding the robustness boundary of deep neural networks.
|
Strategy for Future Research Activity |
Our technique DeepRover has demonstrated the susceptibility of deep neural networks to adversarial perturbations. More broadly, deep neural networks lack robustness against various types of common corruptions and perturbations, including natural noise, weather-related perturbations, and image blurring. In the next step of our project, our focus will shift toward developing effective repair techniques for deep neural networks, with the goal of enhancing their robustness against such common corruptions and perturbations.
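The kind of corruption-robustness measurement this next phase targets can be sketched as follows. The corruption here is Gaussian pixel noise; the classifier, dataset, and function names (`gaussian_noise`, `corruption_accuracy`, `toy_predict`) are invented for illustration and do not describe the project's planned repair techniques:

```python
import random

def gaussian_noise(image, sigma=0.1, seed=0):
    """Common-corruption transform: add Gaussian noise, clamp pixels to [0, 1]."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in image]

def corruption_accuracy(predict, dataset, corrupt):
    """Fraction of (image, label) pairs still classified correctly
    after the corruption is applied."""
    correct = sum(1 for img, y in dataset if predict(corrupt(img)) == y)
    return correct / len(dataset)

# Toy classifier: bright images (mean pixel > 0.5) are class 1.
def toy_predict(img):
    return 1 if sum(img) / len(img) > 0.5 else 0

dataset = [([0.9] * 16, 1), ([0.1] * 16, 0), ([0.52] * 16, 1)]
clean_acc = corruption_accuracy(toy_predict, dataset, lambda im: im)
noisy_acc = corruption_accuracy(toy_predict, dataset,
                                lambda im: gaussian_noise(im, sigma=0.2))
```

A repair technique would then aim to shrink the gap between `clean_acc` and `noisy_acc`, e.g., by retraining or patching the network so its decisions are stable under such corruptions.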
|