
Property-Driven Quality Assurance of Adversarial Robustness of Deep Neural Networks

Research Project

Project/Area Number 23K11049
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Multi-year Fund
Section General
Review Section Basic Section 60050: Software-related
Research Institution The University of Tokyo

Principal Investigator

Fuyuan Zhang, The University of Tokyo, Graduate School of Information Science and Technology, Project Assistant Professor (80965070)

Co-Investigator (Kenkyū-buntansha) Jianjun Zhao, Kyushu University, Faculty of Information Science and Electrical Engineering, Professor (20299580)
Project Period (FY) 2023-04-01 – 2026-03-31
Project Status Granted (Fiscal Year 2023)
Budget Amount
¥4,680,000 (Direct Cost: ¥3,600,000, Indirect Cost: ¥1,080,000)
Fiscal Year 2025: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2024: ¥1,690,000 (Direct Cost: ¥1,300,000, Indirect Cost: ¥390,000)
Fiscal Year 2023: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
Keywords Adversarial Robustness / Deep Neural Networks
Outline of Research at the Start

Deep neural networks (DNNs) have achieved tremendous success in various areas. However, they are vulnerable to adversarial examples, which has posed severe security risks for DNNs. We propose to develop practical and scalable techniques for quality assurance of adversarial robustness of DNNs.

Outline of Annual Research Achievements

We have completed one of the research topics outlined in our research proposal: developing property-driven, feedback-directed fuzzing of deep neural networks. We developed DeepRover, a fuzzing-based blackbox adversarial attack for evaluating the adversarial robustness of deep neural networks used for image classification. DeepRover outperforms state-of-the-art blackbox attacks in effectiveness, subtlety, and query-efficiency: it finds adversarial examples with fewer queries than existing approaches, and the adversarial examples it generates are subtler than those produced by other approaches.
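To illustrate the general idea of a query-based blackbox attack (this is a minimal random-search sketch, not the actual DeepRover algorithm; the `toy_predict` model and all parameter values are hypothetical):

```python
import numpy as np

def random_search_attack(predict, x, label, eps=0.1, queries=500, seed=0):
    """Minimal blackbox attack sketch: randomly perturb the input within an
    L-infinity ball of radius eps, keeping candidates that lower the model's
    confidence in the true label, until the prediction flips or the query
    budget is exhausted. Only the model's outputs are queried (blackbox)."""
    rng = np.random.default_rng(seed)
    best = x.copy()
    best_score = predict(best)[label]
    for _ in range(queries):
        cand = best + rng.uniform(-eps / 4, eps / 4, size=x.shape)
        cand = np.clip(cand, x - eps, x + eps)  # stay inside the eps-ball
        cand = np.clip(cand, 0.0, 1.0)          # stay in valid pixel range
        score = predict(cand)[label]
        if score < best_score:                  # greedy: keep improving moves
            best, best_score = cand, score
        if np.argmax(predict(best)) != label:   # misclassification found
            break
    return best

# Hypothetical stand-in "model": a 2-class linear classifier with softmax.
def toy_predict(x):
    W = np.array([[1.0, -1.0], [-1.0, 1.0]])
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

x0 = np.array([0.8, 0.2])  # toy "image" classified as class 0
adv = random_search_attack(toy_predict, x0, label=0, eps=0.5)
```

A real attack of this family would operate on image tensors and use a far more informed search strategy; the sketch only shows the query-and-accept loop common to such attacks.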

Current Status of Research Progress

2: Research has progressed on the whole more than it was originally planned.

Reason

One research topic in our research proposal is to develop property-driven fuzzing-based blackbox attacks for deep neural networks. The purpose of this topic is to develop techniques that can better evaluate the adversarial robustness of deep neural networks. We have successfully developed such an attack, DeepRover, meeting the research objectives of our project. DeepRover outperforms state-of-the-art approaches and provides a valuable method for estimating and understanding the robustness boundary of deep neural networks.

Strategy for Future Research Activity

Our technique DeepRover has demonstrated the susceptibility of deep neural networks to adversarial perturbations. More broadly, deep neural networks lack robustness against various common corruptions and perturbations, including natural noise, weather-related perturbations, and image blurring. In the next phase of our project, our focus will shift toward developing effective repair techniques for deep neural networks, with the goal of enhancing their robustness against such common corruptions and perturbations.
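Robustness against a common corruption can be quantified by measuring accuracy on corrupted inputs. A minimal sketch for Gaussian noise follows (the `toy_predict` model and the data are hypothetical placeholders for illustration):

```python
import numpy as np

def corruption_accuracy(predict, xs, ys, sigma=0.1, seed=0):
    """Estimate accuracy under Gaussian-noise corruption: corrupt each input
    with noise of standard deviation sigma, then check whether the model's
    prediction still matches the reference label."""
    rng = np.random.default_rng(seed)
    correct = 0
    for x, y in zip(xs, ys):
        noisy = np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)
        correct += int(np.argmax(predict(noisy)) == y)
    return correct / len(xs)

# Hypothetical stand-in model and data (for illustration only).
def toy_predict(x):
    W = np.array([[1.0, -1.0], [-1.0, 1.0]])
    return W @ x

xs = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]
ys = [0, 1]
clean_acc = corruption_accuracy(toy_predict, xs, ys, sigma=0.0)
noisy_acc = corruption_accuracy(toy_predict, xs, ys, sigma=0.3)
```

A repair technique would aim to shrink the gap between the clean and corrupted accuracy across corruption types and severities.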

Report

(1 result)
  • 2023 Research-status Report
  • Research Products

    (3 results)

All 2023

All Presentation (3 results) (of which Int'l Joint Research: 3 results)

  • [Presentation] DeepRover: A Query-Efficient Blackbox Attack for Deep Neural Networks (2023)

    • Author(s)
      Fuyuan Zhang, Xinwen Hu, Lei Ma, Jianjun Zhao
    • Organizer
      ESEC/FSE 2023: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] QuraTest: Integrating Quantum Specific Features in Quantum Program Testing (2023)

    • Author(s)
      Jiaming Ye, Shangzhou Xia, Fuyuan Zhang, Paolo Arcaini, Lei Ma, Jianjun Zhao, Fuyuki Ishikawa
    • Organizer
      ASE 2023: Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] Generative Model-Based Testing on Decision-Making Policies (2023)

    • Author(s)
      Zhuo Li, Xiongfei Wu, Derui Zhu, Mingfei Cheng, Siyuan Chen, Fuyuan Zhang, Xiaofei Xie, Lei Ma, Jianjun Zhao
    • Organizer
      ASE 2023: Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research


Published: 2023-04-13   Modified: 2024-12-25  
