Outline of Annual Research Achievements
In this academic year, research efforts were focused along two axes. Along the first axis, self-supervised visual representations of the Generic Object ZSL (GOZ) Dataset images were proposed and compared with traditional supervised representations on the GOZ benchmark. The self-supervised representations tend to perform better on the standard zero-shot learning task, but they do not match the supervised representations in the generalized zero-shot learning setting. Supervised representations are closely clustered and perform well on the training classes, whereas self-supervised representations are more scattered; closing this gap on the training classes while retaining the higher accuracy on the unseen test classes has been identified as a promising research question.

The second axis concerns the computational efficiency of Convolutional Neural Network (CNN) training. Training CNNs on ImageNet-scale datasets is computationally very expensive, which hinders the investigation of different training and fine-tuning strategies. To address this, we have focused our efforts on reducing both the amount of computation and the memory footprint of CNN training, in order to enable larger-batch training and hence shorter training times.
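The distinction on the first axis between "closely clustered" and "scattered" representations can be made concrete with a simple statistic. The Python sketch below is an illustration only: the report does not specify how clustering was measured, and the function name is hypothetical. It scores a set of features by the mean cosine similarity between each feature and its class centroid; under such a measure, the supervised features would be expected to score higher on the training classes than the self-supervised ones.

```python
import numpy as np

def mean_intra_class_cosine(features: np.ndarray, labels: np.ndarray) -> float:
    """Average cosine similarity between each feature and its class centroid.

    Higher values indicate more tightly clustered class representations.
    Hypothetical measure; not necessarily the one used in the study.
    """
    # L2-normalise features so dot products become cosine similarities.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = []
    for c in np.unique(labels):
        class_feats = feats[labels == c]
        centroid = class_feats.mean(axis=0)
        centroid /= np.linalg.norm(centroid)
        sims.append(class_feats @ centroid)  # similarity of each member to its centroid
    return float(np.concatenate(sims).mean())

# Hypothetical usage: compare supervised vs. self-supervised embeddings
# of the same training-class images.
# supervised_score = mean_intra_class_cosine(sup_feats, y_train)
# selfsup_score = mean_intra_class_cosine(ssl_feats, y_train)
```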
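On the second axis, the report does not name the specific techniques used to reduce computation and memory, so the sketch below shows just one standard option under that assumption: mixed-precision training with PyTorch's torch.cuda.amp, which lowers both arithmetic cost and per-image activation memory and thereby frees room for a larger batch size on the same GPU.

```python
import torch
import torchvision

# A minimal sketch, assuming a standard PyTorch setup; this is one common
# memory-reduction technique, not necessarily the method used in the study.
model = torchvision.models.resnet50(num_classes=1000).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # rescales the loss so float16 gradients do not underflow

def train_step(images: torch.Tensor, targets: torch.Tensor) -> float:
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():  # run the forward pass in float16 where safe
        loss = criterion(model(images), targets)
    scaler.scale(loss).backward()    # backward pass on the scaled loss
    scaler.step(optimizer)           # unscale gradients, then update the weights
    scaler.update()                  # adapt the loss scale for the next step
    return loss.item()
```

Because the half-precision activations occupy roughly half the memory of their float32 counterparts, the batch size can typically be increased substantially before exhausting GPU memory, which is the lever toward shorter training times described above.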