Outline of Research Achievements
Deep Neural Networks (DNNs) are known to possess many vulnerabilities that make their application unsafe in many fields. My past work on the One-Pixel Attack revealed that even very small changes to an input can alter a DNN's classification. Many defenses have been proposed, including Adversarial Training; however, they share the same vulnerabilities. Our recent investigations showed that the problem lies in DNNs' tendency to represent texture rather than shape. Generative Adversarial Networks (GANs), in contrast, learn to encode, decode, and transform images, and are known to build internal models of the input that go beyond texture. Here, I proposed to tackle DNN robustness by combining the internal representations learned by GANs with datasets augmented by adversarial samples, creating DNNs that classify based on features beyond texture. Our new findings open up a new understanding of robustness in neural networks and pioneer a new paradigm with SyncMap. Related to this research, this year's achievements were 2 journal papers, 7 papers in the proceedings of international conferences, and 2 awards.
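To make the proposed approach concrete, the following is a minimal sketch in PyTorch of the general idea of training a classifier on top of features taken from a pretrained GAN while augmenting the data with adversarial samples. All names, the toy encoder, the FGSM-based augmentation, and the hyperparameters are illustrative assumptions, not the exact setup used in the reported work.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GanEncoder(nn.Module):
    """Stand-in for an encoder taken from a pretrained GAN (e.g. the
    convolutional trunk of its discriminator). Kept frozen below."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_perturb(encoder, head, x, y, eps=8 / 255):
    """One-step FGSM: perturb the input in the sign of its gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(head(encoder(x)), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def train_step(encoder, head, optimizer, x, y):
    """Train on a mix of clean and adversarial samples; only the
    classifier head is updated, the GAN features stay fixed."""
    x_adv = fgsm_perturb(encoder, head, x, y)
    optimizer.zero_grad()
    loss = (F.cross_entropy(head(encoder(x)), y)
            + F.cross_entropy(head(encoder(x_adv)), y))
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    encoder = GanEncoder().eval()
    for p in encoder.parameters():
        p.requires_grad_(False)  # reuse GAN-learned representation as-is
    head = nn.Linear(128, 10)    # classifier over GAN features
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
    print(train_step(encoder, head, opt, x, y))

The intent of the design is that the frozen GAN-derived features encode more than texture, while the adversarial augmentation discourages the classifier head from relying on fragile texture cues.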