2020 Fiscal Year Research-status Report
Facial Privacy and Forensic in The Wild: Explainable End-to-End Networks for Multi-Face Anonymization and Multi-Face Forgery Detection
Project/Area Number |
20K23355
|
Research Institution | National Institute of Informatics |
Principal Investigator |
レ チュンギア (Le Trung-Nghia)  National Institute of Informatics, Information and Society Research Division, Project Researcher (00884404)
|
Project Period (FY) |
2020-09-11 – 2022-03-31
|
Keywords | Face forgery detection / Deepfake generation / Adversarial attack |
Outline of Annual Research Achievements |
We developed a forgery workflow that reduces the cost of synthesizing fake data. Our framework can generate an unlimited number of fake identities using GAN models for non-target face swapping, without repeatedly training a deepfake autoencoder (AE). This framework has great potential for deepfake generation and face anonymization.
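As an illustration only, the following minimal sketch conveys the general idea of non-target face swapping for anonymization; it is not the actual implementation, and `generator`, `face_detector`, and `swapper` are hypothetical stand-ins for off-the-shelf components.

```python
# Minimal sketch (not the released implementation): a pretrained GAN supplies an
# unlimited stream of synthetic donor identities, so no per-identity deepfake
# autoencoder has to be trained. All helpers below are hypothetical placeholders.
import numpy as np

def anonymize_image(image, generator, face_detector, swapper, seed=None):
    """Replace every detected face with a GAN-generated synthetic identity."""
    rng = np.random.default_rng(seed)
    boxes = face_detector(image)                        # detect all faces in the wild
    for box in boxes:
        latent = rng.standard_normal(generator.latent_dim)  # sample a new random identity
        donor_face = generator(latent)                  # synthesize a non-existent person
        image = swapper(image, box, donor_face)         # blend the donor identity into the region
    return image
```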
We created a new large-scale dataset of high-quality images for multi-face forgery detection and segmentation in the wild. It consists of 115K unrestricted images containing 334K human faces. We also presented a benchmark suite to facilitate the evaluation and advancement of these tasks. Our submission to ICCV 2021 is under review.
Our paper on adversarial attacks was accepted to the Workshop on Media Forensics at CVPR 2021. The paper presents an adversarial-example-based method for attacking human instance segmentation networks. We propose a novel method to automatically identify attackable regions in the target image so as to minimize the effect on image quality. The fashion-guided synthesized adversarial textures are inconspicuous and appear natural to the human eye. The effectiveness of the proposed method is enhanced by robustness training and by jointly attacking multiple components of the target network.
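The sketch below illustrates, under our own simplifying assumptions rather than the paper's released code, how a region-restricted attack of this kind can be set up: a binary mask confines the perturbation to attackable regions (e.g., clothing), and the loss jointly targets several heads of the segmentation network. `model`, `attackable_mask`, and the loss terms are placeholders.

```python
# Illustrative sketch of a region-restricted, multi-component adversarial attack.
# The model's output dictionary keys are hypothetical.
import torch

def region_restricted_attack(model, image, attackable_mask, steps=100, lr=0.01):
    """Optimize an adversarial texture confined to the masked (attackable) regions."""
    delta = torch.zeros_like(image, requires_grad=True)       # adversarial perturbation
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = torch.clamp(image + delta * attackable_mask, 0, 1)  # perturb masked area only
        outputs = model(adv)
        # Jointly attack multiple components (e.g., classification and mask heads):
        loss = -(outputs["cls_loss"] + outputs["mask_loss"])       # maximize the network's loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.clamp(image + delta.detach() * attackable_mask, 0, 1)
```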
|
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
We created a new dataset for multi-face forgery detection and segmentation in the wild. This dataset can also be used for the task of face anonymization. This work is under review.
Our work on region-restricted adversarial-example-based attacks was accepted to the Workshop on Media Forensics at CVPR 2021.
|
Strategy for Future Research Activity |
We are going to develop new explainable and interpretable methods based on XAI for multi-face forgery detection and segmentation in the wild in the next fiscal year. Utilizing XAI techniques allows the models to output results that people can understand and explain. We expect this to also help improve robustness against adversarial attacks.
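As one candidate direction only, the minimal sketch below shows how a Grad-CAM-style heat map could make a forgery detector's decision interpretable by highlighting which facial regions drive the "fake" score; it assumes a standard PyTorch CNN classifier and is not necessarily the method we will adopt.

```python
# Minimal Grad-CAM-style sketch for explaining a binary (real/fake) CNN classifier.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=1):
    """Return a coarse localization map for the chosen class (e.g., 'fake')."""
    activations, gradients = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    logits = model(image.unsqueeze(0))           # forward pass on a single (C, H, W) image
    logits[0, class_idx].backward()              # gradient of the "fake" logit
    h1.remove(); h2.remove()

    acts, grads = activations[0], gradients[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)          # channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True)) # weighted activation map
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()             # normalized heat map
```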
In the next fiscal year, we will also develop new methods to improve face anonymization.
|
Causes of Carryover |
Because of the COVID-19 pandemic, conferences have been held as virtual meetings. Therefore, I would like to carry over the budget to the next fiscal year. The next fiscal year's budget and the carried-over amount will be used mainly for dataset annotation and a user study. Part of the budget will also cover conference fees.
|
Research Products
(1 result)