Project/Area Number | 23K11049 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Multi-year Fund |
Section | General |
Review Section | Basic Section 60050: Software-related |
Research Institution | The University of Tokyo |
Principal Investigator | 章 甫源, The University of Tokyo, Graduate School of Information Science and Technology, Project Assistant Professor (80965070) |
Co-Investigator (Kenkyū-buntansha) | 趙 建軍, Kyushu University, Faculty of Information Science and Electrical Engineering, Professor (20299580) |
Project Period (FY) | 2023-04-01 – 2026-03-31 |
Project Status | Granted (Fiscal Year 2023) |
Budget Amount | ¥4,680,000 (Direct Cost: ¥3,600,000, Indirect Cost: ¥1,080,000) |
Fiscal Year 2025: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2024: ¥1,690,000 (Direct Cost: ¥1,300,000, Indirect Cost: ¥390,000)
Fiscal Year 2023: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
|
Keywords | Adversarial Robustness / Deep Neural Networks |
Outline of Research at the Start |
Deep neural networks (DNNs) have achieved tremendous success in various areas. However, they are vulnerable to adversarial examples, which poses severe security risks. We propose to develop practical and scalable techniques for quality assurance of the adversarial robustness of DNNs.
|
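To make the threat concrete, the following toy sketch shows an adversarial example on a hypothetical linear classifier using the fast gradient sign method (FGSM). The model, weights, and numbers are all made up for illustration; they are not part of this project.

```python
import numpy as np

# Toy binary classifier: class 1 if w.x + b > 0, else class 0.
w = np.array([1.0, -2.0, 3.0])   # hypothetical weights
b = 0.1

def predict(x):
    """Return the predicted class of input x."""
    return int(w @ x + b > 0)

x = np.array([0.5, 0.2, 0.1])    # clean input; score = 0.5, class 1

# FGSM step: for a linear model, the gradient of the score w.r.t. x is
# exactly w, so moving each coordinate against sign(w) lowers the score.
eps = 0.3
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the class flips: 1 -> 0
```

Even though each pixel moves by at most 0.3, the prediction flips, which is the phenomenon the proposed quality-assurance techniques target.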
Outline of Annual Research Achievements |
We have completed one of the research topics outlined in our research proposal: property-driven, feedback-directed fuzzing of deep neural networks. We developed DeepRover, a fuzzing-based blackbox adversarial attack for evaluating the adversarial robustness of deep neural networks used for image classification. DeepRover outperforms state-of-the-art blackbox attacks in effectiveness, subtlety, and query efficiency: it finds adversarial examples with fewer queries and a higher success rate, and the examples it generates are subtler (smaller perturbations) than those of competing approaches.
|
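The general idea of a query-based blackbox attack can be sketched as follows. This is only a generic illustration in the spirit of fuzzing, not DeepRover's actual algorithm: the loop mutates the input with small random perturbations, queries the model for probabilities only (no gradients), and keeps mutations that lower the confidence in the true class. The "model" below is a hypothetical softmax-over-linear stand-in for a real image classifier.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 8))               # toy "blackbox" classifier weights

def query(x):
    """Blackbox oracle: returns class probabilities only."""
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

def blackbox_attack(x, true_label, eps=0.5, steps=200):
    """Random-mutation search for an adversarial example in the eps-ball."""
    x_adv = x.copy()
    best = query(x_adv)[true_label]
    for _ in range(steps):
        cand = x_adv + rng.normal(scale=0.05, size=x.shape)
        cand = np.clip(cand, x - eps, x + eps)   # stay within the eps-ball
        p = query(cand)[true_label]
        if p < best:                              # keep helpful mutations
            x_adv, best = cand, p
        if query(x_adv).argmax() != true_label:   # misclassified: stop early
            break
    return x_adv

x = rng.normal(size=8)
label = int(query(x).argmax())
x_adv = blackbox_attack(x, label)
```

By construction the returned input stays within the eps-ball of the original and never increases the true-class confidence; a feedback-directed fuzzer like the one described above improves on this naive loop by guiding the mutations.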
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
One topic in our research proposal is to develop property-driven, fuzzing-based blackbox attacks for deep neural networks, with the aim of better evaluating their adversarial robustness. We have successfully developed such an attack, DeepRover, meeting this objective of our project. DeepRover outperforms state-of-the-art approaches and provides a valuable method for estimating and understanding the robustness boundary of deep neural networks.
|
Strategy for Future Research Activity |
Our technique DeepRover has demonstrated the susceptibility of deep neural networks to adversarial perturbations. More broadly, deep neural networks lack robustness against many common corruptions and perturbations, including natural noise, weather-related effects, and image blurring. In the next phase of the project, our focus will shift to developing effective repair techniques for deep neural networks, with the goal of enhancing their robustness against such common corruptions and perturbations.
|
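The corruption types mentioned above can be illustrated with two generic corruption models applied to a toy image; the parameters are hypothetical and are not the project's actual benchmark. The robustness question is how much a classifier's accuracy drops on such corrupted inputs.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.uniform(size=(8, 8))          # toy grayscale image in [0, 1]

def gaussian_noise(x, sigma=0.1):
    """Additive Gaussian noise, clipped back to the valid pixel range."""
    return np.clip(x + rng.normal(scale=sigma, size=x.shape), 0.0, 1.0)

def box_blur(x):
    """3x3 mean filter, a crude stand-in for image blurring."""
    padded = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

noisy = gaussian_noise(img)
blurred = box_blur(img)
print(float(np.abs(noisy - img).mean()), float(np.abs(blurred - img).mean()))
```

A repair technique in the sense described above would then adjust the network (e.g. by fine-tuning or weight modification) so that its predictions on `noisy` and `blurred` match those on `img`.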