2021 Fiscal Year Research-status Report
Prevention from Automated Analysis Services with Object-Level Adversarial Examples
Project/Area Number | 21K18023 |
Research Institution | National Institute of Informatics |
Principal Investigator | LE Trung-Nghia, National Institute of Informatics, Information and Society Research Division, Project Researcher (00884404) |
Project Period (FY) | 2021-04-01 – 2023-03-31 |
Keywords | Adversarial examples / Privacy protection |
Outline of Annual Research Achievements
We proposed an adversarial-example-based method for attacking human instance segmentation networks. We developed a novel method that automatically identifies attackable regions in the target image to minimize the effect on image quality. The fashion-guided adversarial textures synthesized in these regions are inconspicuous and appear natural to the human eye. The effectiveness of the proposed method is further enhanced by robustness training and by jointly attacking multiple components of the target network. This work was published at the Workshop on Media Forensics, CVPR 2021.
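To illustrate the region-restricted attack idea, the following is a minimal sketch, not the published fashion-guided method: a PGD-style update that confines the perturbation to a binary mask of attackable regions, so the rest of the image keeps its original quality. The names model, image, and region_mask are hypothetical placeholders, and the model is assumed to output per-pixel person logits.

    import torch

    def masked_pgd(model, image, region_mask, eps=8/255, alpha=2/255, steps=40):
        # Hypothetical sketch: suppress person-segmentation scores, but only
        # inside the attackable region given by the binary mask.
        adv = image.clone().detach()
        for _ in range(steps):
            adv.requires_grad_(True)
            scores = model(adv)                    # assumed: per-pixel person logits
            loss = (scores * region_mask).mean()   # confidence within the region
            grad, = torch.autograd.grad(loss, adv)
            with torch.no_grad():
                adv = adv - alpha * grad.sign() * region_mask  # perturb the region only
                adv = image + (adv - image).clamp(-eps, eps)   # respect the L-inf budget
                adv = adv.clamp(0.0, 1.0)
        return adv

Multiplying the update by the mask is what keeps the perturbation local; clamping against the original image keeps it small everywhere.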
We analyzed the class-aware transferability of adversarial examples and showed a strong connection between the non-targeted transferability of adversarial examples and same mistakes, i.e., cases in which multiple models misclassify an adversarial example as the same wrong class. Adversarial examples can contain non-robust features that correlate with a certain class, toward which models can be misled. However, different mistakes also occur between very similar models regardless of the perturbation size, which raises the question of how adversarial examples cause different mistakes. By extending the framework of Ilyas et al., we demonstrated that non-robust features can comprehensively explain the difference between a different mistake and a same mistake. Ilyas et al. showed that adversarial examples can contain non-robust features that are predictive but human-imperceptible, which can cause a same mistake. In contrast, we showed that when the manipulated non-robust features in an adversarial example are used differently by multiple models, those models may classify the adversarial example differently. This work was submitted to ACM MM 2022 and is under review.
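To make the same-mistake versus different-mistake distinction concrete, the following is a minimal bookkeeping sketch under assumed names (model_a, model_b, adv_images, and labels are hypothetical): two models make a same mistake on an adversarial input when both predict the same wrong class, and a different mistake when both are wrong but disagree.

    import torch

    @torch.no_grad()
    def mistake_statistics(model_a, model_b, adv_images, labels):
        # Compare two (possibly very similar) classifiers on the same
        # adversarial inputs; both are assumed to return logits of shape [N, C].
        pred_a = model_a(adv_images).argmax(dim=1)
        pred_b = model_b(adv_images).argmax(dim=1)
        both_wrong = (pred_a != labels) & (pred_b != labels)
        same_mistake = (both_wrong & (pred_a == pred_b)).sum().item()
        different_mistake = (both_wrong & (pred_a != pred_b)).sum().item()
        return same_mistake, different_mistake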
Current Status of Research Progress
3: Progress in research has been slightly delayed.
Reason
We published a paper on adversarial attacks against person segmentation at the Workshop on Media Forensics, CVPR 2021. Another paper, on the class-aware transferability analysis of adversarial examples, was submitted to ACM MM 2022 and is under review. We are currently developing a new adversarial attack method targeting scene recognition systems.
Strategy for Future Research Activity
We are developing a new adversarial attack method targeting scene recognition systems. We expect the resulting perturbations to be both transferable across models and imperceptible to the human eye. In the next fiscal year, we will also develop new attack methods targeting vision-language systems.
Causes of Carryover
Because of the COVID-19 pandemic, conferences have been held as virtual meetings. I would therefore like to carry the budget over to the next fiscal year. The next fiscal year's budget, together with the carried-over amount, will be used mainly for equipment, experiments, and conference fees.
Research Products (5 results)