Prevention from Automated Analysis Services with Object-Level Adversarial Examples
Project/Area Number | 21K18023
Research Category | Grant-in-Aid for Early-Career Scientists
Allocation Type | Multi-year Fund
Review Section | Basic Section 90020: Library and information science, humanistic and social informatics-related
Research Institution | National Institute of Informatics
Principal Investigator | Le Trung Nghia (レ チュンギア), National Institute of Informatics, Information and Society Research Division, Project Researcher (00884404)
Project Period (FY) | 2021-04-01 – 2023-03-31
Project Status | Discontinued (Fiscal Year 2022)
Budget Amount | ¥4,550,000 (Direct Cost: ¥3,500,000, Indirect Cost: ¥1,050,000)
Fiscal Year 2023: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2022: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
Fiscal Year 2021: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
Keywords | Privacy protection / Adversarial examples
Outline of Research at the Start |
Users of social networks need privacy-protection solutions for shared images that remain robust against data transformation and compression. This research investigates digital content protection using object-level adversarial examples against automated crawling and data-analysis services.
Outline of Annual Research Achievements |
We proposed two protection systems based on adversarial examples. The first system protects people from human instance segmentation networks by automatically identifying protectable regions, which minimizes the effect on image quality, and by synthesizing inconspicuous, natural-looking adversarial textures. This system was published at CVPR Workshops 2021. The second system protects location privacy against landmark recognition systems. In particular, we introduced mask-guided multimodal projected gradient descent (MM-PGD) to improve protection against a variety of deep models, and we investigated different strategies for identifying protectable regions so that black-box landmark recognition systems can be defended against with little image manipulation. This work was accepted to WIFS 2022.
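To illustrate the general mechanism behind such mask-guided adversarial perturbations, the following is a minimal sketch of projected gradient descent restricted to a protectable region, assuming PyTorch and a placeholder classifier supplied by the caller; it is not the exact MM-PGD procedure of the WIFS 2022 paper.

```python
import torch
import torch.nn.functional as F

def masked_pgd(model, image, label, mask, eps=8 / 255, alpha=2 / 255, steps=10):
    """Perturb only the masked (protectable) region of `image` so that
    `model`'s prediction of `label` is degraded. Tensors are batched;
    `mask` is 1 inside the protectable region and 0 elsewhere.
    (Illustrative sketch, not the paper's MM-PGD.)"""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad = torch.autograd.grad(loss, adv)[0]
        # Gradient ascent on the loss, applied only inside the protectable region.
        adv = adv.detach() + alpha * grad.sign() * mask
        # Project back into the eps-ball around the original image and the valid pixel range.
        adv = image + torch.clamp(adv - image, -eps, eps)
        adv = torch.clamp(adv, 0.0, 1.0)
    return adv.detach()
```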
We also analyzed the class-aware transferability of adversarial examples, showing a strong connection between non-targeted transferability and the case in which multiple models make the same mistake. By extending the framework of Ilyas et al., we demonstrated that non-robust features can comprehensively explain the difference between a shared mistake and differing mistakes: when the manipulated non-robust features in an adversarial example are used differently by multiple models, those models may classify the adversarial example differently. This work was accepted to WACV 2023.
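As a rough illustration of the "same mistake" notion in this analysis, the sketch below, assuming PyTorch and two pre-trained classifiers provided by the caller, measures how often two models that are both fooled by an adversarial example agree on the same wrong class; the function name and setup are illustrative rather than the WACV 2023 paper's actual evaluation protocol.

```python
import torch

@torch.no_grad()
def same_mistake_rate(model_a, model_b, adv_images, true_labels):
    """Among adversarial examples that fool both models, return the fraction
    on which the two models predict the same (wrong) class."""
    pred_a = model_a(adv_images).argmax(dim=1)
    pred_b = model_b(adv_images).argmax(dim=1)
    both_wrong = (pred_a != true_labels) & (pred_b != true_labels)
    if both_wrong.sum() == 0:
        return 0.0
    same = (pred_a == pred_b) & both_wrong
    return (same.sum().float() / both_wrong.sum().float()).item()
```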
Report (2 results)
Research Products (10 results)