2004 Fiscal Year Final Research Report Summary
Theory of Family of Learnings: From a Single Learning to Infinitely Many Learnings
Project/Area Number | 14380158
Research Category | Grant-in-Aid for Scientific Research (B)
Allocation Type | Single-year Grants
Section | General
Research Field | Intelligent informatics
Research Institution | Tokyo Institute of Technology
Principal Investigator | OGAWA Hidemitsu, Tokyo Institute of Technology, Graduate School of Information Science and Engineering, Department of Computer Science, Professor (50016630)
Co-Investigator (Kenkyū-buntansha) | KUMAZAWA Itsuo, Tokyo Institute of Technology, Graduate School of Science and Engineering, Imaging Science and Engineering Laboratory, Professor (70186469)
Project Period (FY) | 2002 – 2004
Keywords | supervised learning / generalization capability / projection learning / partial projection learning / family of projection learning / SL projection learning / active learning / subspace information criterion
Research Abstract |
Most existing research on supervised learning has studied the properties of individual learning methods, such as error back-propagation or projection learning. The essence of the learning problem, however, cannot be elucidated by such individual theories. For example, error back-propagation merely demands memorization of the training examples, yet it can provide a high level of generalization capability. To understand such phenomena, it is important to develop a theory of a family of learnings that deals with infinitely many learnings simultaneously, rather than a theory of individual learnings. The principal investigator of this project introduced the concept of SL projection learning for the case where the training input points are fixed, and constructed a theory of a family of learnings. This theory enabled us to elucidate many open problems, such as why memorization learning can yield high generalization capability.
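As background, the following is a minimal sketch of the projection-learning criterion in the functional-analytic form this line of work uses; the operator names ($X$, $A$, $P$) are illustrative notation, not quoted from the report.

```latex
% True function f lives in a Hilbert space H; m noisy samples are collected by
% a sampling operator X : H -> C^m, and a learning operator A : C^m -> H
% produces the learned function from the sample vector y:
\[
  y = Xf + n, \qquad \hat{f} = Ay .
\]
% Projection learning selects A so that, in the noiseless case, \hat{f} equals
% the orthogonal projection of f onto R(X^*), the best approximation
% recoverable from the samples, while the noise contribution is minimized:
\[
  \min_{A}\; E_{n}\,\lVert A n \rVert^{2}
  \quad \text{subject to} \quad
  A X = P_{\mathcal{R}(X^{*})},
\]
% where P_{R(X^*)} denotes the orthogonal projection onto the range of the
% adjoint X^*. Variants such as SL and T projection learning arise from how
% this constraint is carried across changing training input points (and hence
% changing X).
```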
This theory, however, was not easy to apply when the training input points change, e.g., in incremental learning or active learning. To extend the theory to such cases, we carried out the following research this year. First, we rigorously defined the notion of the "same learning" for different sets of training input points. In previous work, our group had in fact given three different definitions of the family of projection learnings, and chose SL projection learning because it is the most natural one under fixed training input points. Taking a fresh look at the problem from the viewpoint of the "same learning", we showed that T projection learning is more effective than SL projection learning when the training input points change. We also elucidated the structure of the space formed by the T operators. Another important issue is incremental active learning, in which the next optimal input points are determined from the results learned so far; we clarified this problem as well.
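To make the incremental active learning setting concrete, here is a minimal hypothetical sketch in which each new training input point is chosen greedily to reduce the estimated parameter variance of a linear basis-function model. This uses a generic A-optimality criterion, not the T-projection-learning-based criterion developed in the project; all names in the code are illustrative.

```python
import numpy as np

def phi(x):
    """Polynomial basis functions (illustrative model choice)."""
    return np.array([1.0, x, x**2, x**3])

def a_optimal_score(Phi, lam=1e-3):
    """A-optimality: trace of the (regularized) parameter covariance.
    A smaller trace means lower expected variance of the learned weights."""
    d = Phi.shape[1]
    return np.trace(np.linalg.inv(Phi.T @ Phi + lam * np.eye(d)))

def next_input_point(selected_xs, candidate_xs):
    """Greedily pick the candidate input that most reduces the A-optimal
    score once added to the already-selected training inputs."""
    best_x, best_score = None, np.inf
    for x in candidate_xs:
        Phi = np.array([phi(v) for v in list(selected_xs) + [x]])
        score = a_optimal_score(Phi)
        if score < best_score:
            best_x, best_score = x, score
    return best_x

# Incremental loop: each new input point depends on the points chosen so far.
candidates = list(np.linspace(-1.0, 1.0, 41))
selected = []
for _ in range(6):
    x_new = next_input_point(selected, candidates)
    selected.append(x_new)
    candidates.remove(x_new)
print("selected input points:", np.round(selected, 3))
```

The greedy structure, where the design of the next experiment is recomputed after every observation, is what distinguishes incremental active learning from choosing all training input points in advance.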
Research Products (12 results)