Summary of Research Achievements
Our goal was to design AI systems that continue to learn and improve throughout their lifetime. This fiscal year, we worked on concluding this line of research and scaling up our algorithm to ImageNet data. We wrote one research paper on this topic, which is currently under submission.
- (Under submission) Improving Continual Learning by Accurate Gradient Reconstructions of the Past, Erik Daxberger, Siddharth Swaroop, Kazuki Osawa, Rio Yokota, Richard E. Turner, José Miguel Hernández-Lobato, Mohammad Emtiyaz Khan
This work builds on our previous results to propose a new improvement in continual learning. It essentially combines two methods to obtain state-of-the-art performance at the Tiny ImageNet level.