Research Project/Area Number | 21K18023
Research Category | Grant-in-Aid for Early-Career Scientists
Allocation Type | Multi-year Fund
Review Section | Basic Section 90020: Library and information science, and humanistic and social informatics-related
Research Institution | National Institute of Informatics
Principal Investigator | Le Trung-Nghia, National Institute of Informatics, Information and Society Research Division, Project Researcher (00884404)
Project Period (FY) | 2021-04-01 – 2023-03-31
Project Status | Discontinued (FY2022)
Budget Amount *Note |
4,550 thousand yen (Direct Cost: 3,500 thousand yen; Indirect Cost: 1,050 thousand yen)
FY2023: 1,430 thousand yen (Direct Cost: 1,100 thousand yen; Indirect Cost: 330 thousand yen)
FY2022: 1,560 thousand yen (Direct Cost: 1,200 thousand yen; Indirect Cost: 360 thousand yen)
FY2021: 1,560 thousand yen (Direct Cost: 1,200 thousand yen; Indirect Cost: 360 thousand yen)
Keywords | Privacy Protection / Adversarial Examples
Outline of Research at the Start |
Users of social networks need privacy-protection solutions for shared images that remain robust against data transformation and compression. This research investigates digital content protection using object-level adversarial examples against automated crawling and data analysis services.
Outline of Annual Research Achievements |
We proposed two protection systems based on adversarial examples. The first system protects people from human instance segmentation networks by automatically identifying protectable regions, which minimizes the effect on image quality, and by synthesizing inconspicuous, natural-looking adversarial textures. This system was published at CVPR Workshops 2021. The second system protects location privacy against landmark recognition systems. In particular, we introduced mask-guided multimodal projected gradient descent (MM-PGD) to improve protection against a variety of deep models, and we investigated different strategies for identifying protectable regions so that black-box landmark recognition systems can be defended against without much image manipulation (a generic sketch of the region-masked PGD idea is given below). This work was accepted to WIFS 2022.
We also analyzed the class-aware transferability of adversarial examples to show the strong connection between the non-targeted transferability of adversarial examples and models making the same mistakes. By extending the framework of Ilyas et al., we demonstrated that non-robust features can comprehensively explain the difference between a same mistake and a different mistake. In particular, we showed that when the manipulated non-robust features in an adversarial example are used differently by multiple models, those models may classify that example differently (a minimal sketch of this same-mistake/different-mistake bookkeeping also follows). This work was accepted to WACV 2023.
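The protection systems above restrict adversarial perturbations to automatically identified protectable regions. The following is a minimal, generic sketch of that region-masked PGD idea in PyTorch; it is not the paper's actual MM-PGD implementation, and the function name `masked_pgd`, the non-targeted cross-entropy loss, and the hyper-parameter values are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def masked_pgd(model, image, label, mask, eps=8/255, alpha=2/255, steps=40):
    """Non-targeted PGD whose perturbation is confined to a binary region mask.

    image: (1, 3, H, W) tensor in [0, 1]; label: (1,) long tensor with the
    true class; mask: (1, 1, H, W) tensor that is 1 on the protectable region.
    (Sketch only; names and defaults are assumptions, not the paper's code.)
    """
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)  # loss w.r.t. the true class
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Gradient ascent on the loss, applied only inside the protectable region.
            x_adv = x_adv + alpha * grad.sign() * mask
            # Project back into the L_inf ball and the valid pixel range.
            delta = torch.clamp(x_adv - image, -eps, eps)
            x_adv = torch.clamp(image + delta, 0.0, 1.0)
    return x_adv.detach()
```

Because the update and the projection both keep the perturbation zero outside the mask, image quality is only affected inside the identified region.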
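The class-aware transferability analysis distinguishes, for an adversarial example crafted on a source model, whether a second model is not fooled, makes the same mistake (the same wrong class), or makes a different mistake. A minimal sketch of that bookkeeping, assuming PyTorch classifiers and a batch of adversarial inputs (the function name and return format are illustrative assumptions):

```python
import torch

@torch.no_grad()
def class_aware_transfer_stats(source_model, target_model, x_adv, y_true):
    """Count how adversarial examples crafted on `source_model` transfer.

    Returns counts of: target not fooled, target fooled into the SAME wrong
    class as the source model ("same mistake"), and target fooled into a
    DIFFERENT wrong class ("different mistake").
    """
    src_pred = source_model(x_adv).argmax(dim=1)
    tgt_pred = target_model(x_adv).argmax(dim=1)

    src_fooled = src_pred != y_true
    tgt_fooled = tgt_pred != y_true

    return {
        "same_mistake": (src_fooled & tgt_fooled & (src_pred == tgt_pred)).sum().item(),
        "different_mistake": (src_fooled & tgt_fooled & (src_pred != tgt_pred)).sum().item(),
        "target_not_fooled": (src_fooled & ~tgt_fooled).sum().item(),
    }
```

Comparing these counts across model pairs is one simple way to quantify how often non-targeted transferability coincides with the two models making the same mistake.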