
Property-Driven Quality Assurance of Adversarial Robustness of Deep Neural Networks

Research Project

Project/Area Number 23K11049
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Multi-year Fund
Application Category General
Review Section Basic Section 60050: Software-related
Research Institution The University of Tokyo

Principal Investigator

Fuyuan Zhang (章 甫源)  The University of Tokyo, Graduate School of Information Science and Technology, Project Assistant Professor (80965070)

Co-Investigator Jianjun Zhao (趙 建軍)  Kyushu University, Faculty of Information Science and Electrical Engineering, Professor (20299580)
Project Period (FY) 2023-04-01 – 2026-03-31
Project Status Granted (FY2023)
Budget Amount *Note
4,680 thousand yen (Direct Cost: 3,600 thousand yen, Indirect Cost: 1,080 thousand yen)
FY2025: 1,430 thousand yen (Direct Cost: 1,100 thousand yen, Indirect Cost: 330 thousand yen)
FY2024: 1,690 thousand yen (Direct Cost: 1,300 thousand yen, Indirect Cost: 390 thousand yen)
FY2023: 1,560 thousand yen (Direct Cost: 1,200 thousand yen, Indirect Cost: 360 thousand yen)
Keywords Adversarial Robustness / Deep Neural Networks
Outline of Research at the Start

Deep neural networks (DNNs) have achieved tremendous success in various areas. However, they are vulnerable to adversarial examples, which poses severe security risks for DNNs. We propose to develop practical and scalable techniques for quality assurance of the adversarial robustness of DNNs.

Outline of Annual Research Achievements

We have completed one of the research topics outlined in our research proposal: developing property-driven, feedback-directed fuzzing of deep neural networks. Specifically, we developed DeepRover, a fuzzing-based blackbox adversarial attack for evaluating the adversarial robustness of deep neural networks used for image classification. DeepRover outperforms state-of-the-art blackbox attacks in effectiveness and query efficiency when finding adversarial examples, and the adversarial examples it generates are more subtle than those produced by other approaches.
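As background, the core loop of a query-based blackbox attack of this kind can be illustrated with a minimal sketch. The model, input, and search strategy below are hypothetical stand-ins chosen for illustration, not DeepRover itself: the attacker may only query the model for labels, and searches for a small L-infinity perturbation that flips the prediction.

```python
import numpy as np

def toy_model(x):
    # Hypothetical blackbox classifier: label 1 if the mean pixel
    # intensity exceeds 0.5, else label 0. Only its output labels
    # are observable, as in the blackbox attack setting.
    return int(x.mean() > 0.5)

def random_search_attack(model, x, eps=0.1, max_queries=500, seed=0):
    """Sample random perturbations inside an L-infinity ball of radius
    eps and query the model until the predicted label flips."""
    rng = np.random.default_rng(seed)
    original_label = model(x)
    for queries in range(1, max_queries + 1):
        delta = rng.uniform(-eps, eps, size=x.shape)
        x_adv = np.clip(x + delta, 0.0, 1.0)   # keep pixels in [0, 1]
        if model(x_adv) != original_label:
            return x_adv, queries              # adversarial example found
    return None, max_queries                   # failed within query budget

# A borderline input (label 1) is easiest to flip with small noise.
x = np.full((8, 8), 0.505)
x_adv, used = random_search_attack(toy_model, x)
```

Query-efficient attacks such as DeepRover replace the blind random sampling above with feedback-directed search, so far fewer model queries are needed per adversarial example, and the resulting perturbations are smaller.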

Current Status of Research Progress (Category)

2: Progressing rather smoothly

Reason

One research topic in our research proposal is to develop property-driven, fuzzing-based blackbox attacks for deep neural networks, with the aim of better evaluating their adversarial robustness. We have successfully developed such an attack, DeepRover, meeting this research objective of our project. DeepRover outperforms state-of-the-art approaches and provides a valuable method for estimating and understanding the robustness boundary of deep neural networks.

Strategy for Future Research Activity

Our technique DeepRover has demonstrated the susceptibility of deep neural networks to adversarial perturbations. More broadly, deep neural networks lack robustness against many common corruptions and perturbations, including natural noise, weather-related perturbations, and image blurring. In the next phase of our project, we will shift our focus to developing effective repair techniques for deep neural networks, with the goal of enhancing their robustness against such common corruptions and perturbations.
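One common baseline in this direction is to fine-tune a network on corruption-augmented data. The sketch below is our own illustration of that idea, not the project's method; the corruption set and helper names are assumptions. It generates training inputs corrupted by natural noise and a crude blur, the two corruption families named above:

```python
import numpy as np

def gaussian_noise(x, rng, sigma=0.05):
    # Natural-noise corruption: additive Gaussian pixel noise, clipped
    # back into the valid intensity range [0, 1].
    return np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)

def box_blur(x, rng=None):
    # Crude stand-in for image blurring: 3x3 mean filter with edge padding.
    p = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def augment_batch(batch, rng):
    """Apply one randomly chosen corruption to each image; training on
    such batches is a standard robustness-enhancing baseline."""
    corruptions = [gaussian_noise, box_blur]
    return np.stack([corruptions[rng.integers(len(corruptions))](img, rng)
                     for img in batch])

rng = np.random.default_rng(0)
batch = rng.random((4, 8, 8))        # a toy batch of 4 grayscale images
augmented = augment_batch(batch, rng)
```

Repair techniques go beyond this baseline, e.g. by locating and adjusting only the weights responsible for the robustness failures rather than retraining on augmented data alone.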

Reports

(1 result)
  • 2023 Annual Research Report
  • Research Products

    (3 results)

All: 2023

All Presentations (3 results) (of which international conferences: 3 results)

  • [Presentation] DeepRover: A Query-Efficient Blackbox Attack for Deep Neural Networks (2023)

    • Author(s)/Presenter(s)
      Fuyuan Zhang, Xinwen Hu, Lei Ma, Jianjun Zhao
    • Conference
      ESEC/FSE 2023: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
    • Related Report
      2023 Annual Research Report
    • International conference
  • [Presentation] QuraTest: Integrating Quantum Specific Features in Quantum Program Testing (2023)

    • Author(s)/Presenter(s)
      Jiaming Ye, Shangzhou Xia, Fuyuan Zhang, Paolo Arcaini, Lei Ma, Jianjun Zhao, Fuyuki Ishikawa
    • Conference
      ASE 2023: Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering
    • Related Report
      2023 Annual Research Report
    • International conference
  • [Presentation] Generative Model-Based Testing on Decision-Making Policies (2023)

    • Author(s)/Presenter(s)
      Zhuo Li, Xiongfei Wu, Derui Zhu, Mingfei Cheng, Siyuan Chen, Fuyuan Zhang, Xiaofei Xie, Lei Ma, Jianjun Zhao
    • Conference
      ASE 2023: Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering
    • Related Report
      2023 Annual Research Report
    • International conference


Published: 2023-04-13   Last Modified: 2024-12-25
