2023 Fiscal Year Research-status Report
Property-Driven Quality Assurance of Adversarial Robustness of Deep Neural Networks
Project/Area Number | 23K11049 |
Research Institution | The University of Tokyo |
Principal Investigator | 章 甫源, The University of Tokyo, Graduate School of Information Science and Technology, Project Assistant Professor (80965070) |
Co-Investigator (Kenkyū-buntansha) | 趙 建軍, Kyushu University, Faculty of Information Science and Electrical Engineering, Professor (20299580) [Withdrawn] |
Project Period (FY) | 2023-04-01 – 2026-03-31 |
Keywords | Adversarial Robustness / Deep Neural Networks |
Outline of Annual Research Achievements |
We have completed one of the research topics outlined in our research proposal: property-driven, feedback-directed fuzzing of deep neural networks. Specifically, we developed DeepRover, a fuzzing-based black-box adversarial attack for evaluating the adversarial robustness of deep neural networks used for image classification. DeepRover outperforms state-of-the-art black-box attacks on three fronts: it finds adversarial examples more effectively, it requires fewer model queries, and the adversarial examples it generates are subtler, i.e., closer to the original inputs.
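To illustrate the general scheme of a feedback-directed, fuzzing-based black-box attack, a minimal sketch follows. The function names, patch-based mutation, and parameters are illustrative assumptions, not the actual DeepRover implementation; `model_query` stands for an assumed black-box interface returning class probabilities.

    import numpy as np

    def fuzz_attack(x, label, model_query, eps=8/255, patch=4,
                    max_queries=1000, seed=0):
        """Search for an adversarial example in the L-infinity ball of radius eps.

        x           -- clean image as a float array in [0, 1], shape (H, W, C)
        label       -- true class index
        model_query -- assumed black-box interface: image -> probability vector
        """
        rng = np.random.default_rng(seed)
        best, best_score = x.copy(), model_query(x)[label]
        for _ in range(max_queries):
            # Mutation step: re-randomize a small patch, a fuzzing-style local change.
            h = rng.integers(0, x.shape[0] - patch + 1)
            w = rng.integers(0, x.shape[1] - patch + 1)
            cand = best.copy()
            cand[h:h+patch, w:w+patch] += rng.uniform(
                -eps, eps, cand[h:h+patch, w:w+patch].shape)
            # Project back into the eps-ball around x and the valid pixel range.
            cand = np.clip(np.clip(cand, x - eps, x + eps), 0.0, 1.0)
            probs = model_query(cand)
            if probs.argmax() != label:
                return cand                      # adversarial example found
            if probs[label] < best_score:        # feedback: keep mutants that lower
                best, best_score = cand, probs[label]  # true-class confidence
        return None                              # no adversarial example in budget

The feedback signal in this sketch is the model's confidence in the true class: mutants that lower it are kept as seeds, so the search is progressively guided toward the decision boundary within a bounded query budget.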
|
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
One research topic in our research proposal is the development of property-driven, fuzzing-based black-box attacks on deep neural networks, with the aim of evaluating their adversarial robustness more accurately. We have successfully developed such an attack, DeepRover, thereby meeting this objective of our project. DeepRover outperforms state-of-the-art approaches and provides a valuable method for estimating and understanding the robustness boundary of deep neural networks.
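For reference, the robustness boundary at an input x can be formalized as the minimal perturbation radius at which the classifier's decision changes; the notation below is standard and assumed here rather than taken from the report:

    \[
      r(x) \;=\; \min_{\delta} \ \|\delta\|_{\infty}
      \quad \text{s.t.} \quad f(x + \delta) \neq f(x)
    \]

Any adversarial example with \(\|\delta\|_{\infty} \le \varepsilon\) witnesses \(r(x) \le \varepsilon\), so stronger and subtler attacks yield tighter upper estimates of this boundary.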
|
Strategy for Future Research Activity |
Our technique, DeepRover, has demonstrated the susceptibility of deep neural networks to adversarial perturbations. More broadly, deep neural networks lack robustness against many common corruptions and perturbations, including natural noise, weather-related effects, and image blurring. In the next phase of the project, our focus will shift to developing effective repair techniques for deep neural networks, with the goal of enhancing their robustness against such common corruptions and perturbations.
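As context for this direction, one widely used baseline for improving robustness to such corruptions is fine-tuning on corrupted copies of the training data. The sketch below shows this generic augmentation baseline only; the corruption functions and parameters are illustrative assumptions, not the repair technique this project will develop.

    import numpy as np

    def gaussian_noise(x, sigma=0.08, rng=None):
        """Additive natural-noise corruption; x is a float image in [0, 1]."""
        rng = rng or np.random.default_rng()
        return np.clip(x + rng.normal(0.0, sigma, x.shape), 0.0, 1.0)

    def box_blur(x, k=3):
        """Mean blur over a k x k window, a simple stand-in for image blurring."""
        pad = k // 2
        xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
        out = np.empty_like(x)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                out[i, j] = xp[i:i+k, j:j+k].mean(axis=(0, 1))
        return out

    def augment_batch(batch, rng=None):
        """Corrupt each training image at random, mixing in clean images as well."""
        rng = rng or np.random.default_rng()
        corruptions = [gaussian_noise, box_blur, lambda img: img]
        return np.stack([corruptions[rng.integers(len(corruptions))](img)
                         for img in batch])

Fine-tuning on such batches is a common way to improve accuracy under the corresponding corruptions; it serves only as a point of comparison for the repair techniques to be developed in this project.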
|