
2021 Fiscal Year Research-status Report

Prevention from Automated Analysis Services with Object-Level Adversarial Examples

Research Project

Project/Area Number 21K18023
Research Institution National Institute of Informatics

Principal Investigator

Le Trung-Nghia  National Institute of Informatics, Information and Society Research Division, Project Researcher (00884404)

Project Period (FY) 2021-04-01 – 2023-03-31
Keywords Adversarial examples / Privacy protection
Outline of Annual Research Achievements

We proposed an adversarial-example-based method for attacking human instance segmentation networks. We developed a novel method to automatically identify attackable regions in the target image, minimizing the effect on image quality. The fashion-guided synthesized adversarial textures are inconspicuous and appear natural to the human eye. The effectiveness of the proposed method is enhanced by robustness training and by jointly attacking multiple components of the target network. This work was published at the Workshop on Media Forensics, CVPR 2021.

We analyzed the class-aware transferability of adversarial examples to show the strong connection between the non-targeted transferability of adversarial examples and same mistakes. Adversarial examples can contain non-robust features that correlate with a certain class to which models can be misled. However, different mistakes occur even between very similar models regardless of the perturbation size, which raises the question of how adversarial examples cause different mistakes. We demonstrated that non-robust features can comprehensively explain the difference between a different mistake and a same mistake by extending the framework of Ilyas et al., who showed that adversarial examples can contain non-robust features that are predictive but human-imperceptible, which can cause a same mistake. In contrast, we showed that when the manipulated non-robust features in an adversarial example are used differently by multiple models, those models may classify the adversarial example differently. This work has been submitted to ACM MM 2022 and is under review.
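As general background on how adversarial examples are generated, the fast gradient sign method (FGSM) perturbs an input along the sign of the loss gradient. The sketch below is a minimal, self-contained illustration on a toy linear classifier; it is not the method proposed in this project, and all variable names and model choices here are illustrative assumptions.

```python
import numpy as np

# Toy linear classifier: logits = W @ x.
# FGSM adds eps * sign(grad_x loss) to the input so that the loss on
# the true label increases, while keeping the perturbation small.

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8))   # 3 classes, 8 input features
x = rng.standard_normal(8)        # clean input
y = 0                             # assumed true label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy_grad_x(W, x, y):
    # Gradient of -log softmax(Wx)[y] w.r.t. x is W^T (softmax(Wx) - onehot(y)).
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

eps = 0.5                         # perturbation budget (L-infinity)
x_adv = x + eps * np.sign(cross_entropy_grad_x(W, x, y))

pred_clean = int(np.argmax(W @ x))
pred_adv = int(np.argmax(W @ x_adv))
print(pred_clean, pred_adv)
```

Whether the model's prediction actually flips depends on the model and on eps; the point of the sketch is that the perturbation stays bounded by eps per feature while pushing the input across the decision boundary.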

Current Status of Research Progress

3: Progress in research has been slightly delayed.

Reason

We published a paper about an adversarial attack on person segmentation at the Workshop on Media Forensics, CVPR 2021. Another paper about adversarial-example analysis has been submitted to ACM MM 2022 and is under review. We are currently developing a new adversarial attack method targeting scene recognition systems.

Strategy for Future Research Activity

We are developing a new adversarial attack method targeting scene recognition systems. We expect the proposed method to be both transferable and imperceptible. We will also develop new attack methods targeting vision-language systems in the next fiscal year.

Causes of Carryover

Because of the COVID-19 pandemic, conferences have been held as virtual meetings. Therefore, I would like to carry the budget over to the next fiscal year. The next fiscal year's budget, together with the carried-over amount, will be mainly used for equipment, experiments, and conference fees.

  • Research Products

    (5 results)


All Journal Article (5 results) (of which Int'l Joint Research: 5 results,  Peer Reviewed: 5 results)

  • [Journal Article] Contextual Guided Segmentation Framework for Semi-supervised Video Instance Segmentation2022

    • Author(s)
      Trung-Nghia Le, Tam V. Nguyen, Minh-Triet Tran
    • Journal Title

      Machine Vision and Applications

      Volume: 33 Pages: 1-19

    • DOI

      10.1007/s00138-022-01278-x

    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Fashion-Guided Adversarial Attack on Person Segmentation2021

    • Author(s)
      Marc Treu, Trung-Nghia Le, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
    • Journal Title

      IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops

      Volume: 1 Pages: 943-952

    • DOI

      10.1109/CVPRW53098.2021.00105

    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] OpenForensics: Large-Scale Challenging Dataset For Multi-Face Forgery Detection And Segmentation In-The-Wild2021

    • Author(s)
      Trung-Nghia Le, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
    • Journal Title

      IEEE/CVF International Conference on Computer Vision

      Volume: 1 Pages: 10117-10127

    • DOI

      10.1109/ICCV48922.2021.00996

    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Effectiveness of Detection-based and Regression-based Approaches for Estimating Mask-Wearing Ratio2021

    • Author(s)
      Khanh-Duy Nguyen, Huy H. Nguyen, Trung-Nghia Le, Junichi Yamagishi, Isao Echizen
    • Journal Title

      IEEE International Conference on Automatic Face and Gesture Recognition Workshops

      Volume: 1 Pages: 1-8

    • DOI

      10.1109/FG52635.2021.9667046

    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Camouflaged Instance Segmentation In-The-Wild: Dataset, Method, and Benchmark Suite2021

    • Author(s)
      Trung-Nghia Le, Yubo Cao, Tan-Cong Nguyen, Minh-Quan Le, Khanh-Duy Nguyen, Thanh-Toan Do, Minh-Triet Tran, Tam V. Nguyen
    • Journal Title

      IEEE Transactions on Image Processing

      Volume: 31 Pages: 287-300

    • DOI

      10.1109/TIP.2021.3130490

    • Peer Reviewed / Int'l Joint Research


Published: 2022-12-28  
