Project/Area Number | 08458076 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Intelligent informatics |
Research Institution | TOKYO INSTITUTE OF TECHNOLOGY |
Principal Investigator | OGAWA Hidemitsu, Tokyo Institute of Technology, Graduate School of Information Science and Engineering, Professor (50016630) |
Co-Investigator (Kenkyū-buntansha) |
HIRABAYASHI Akira, Tokyo Institute of Technology, Graduate School of Information Science and Engineering, Research Assistant (50272688)
KUMAZAWA Itsuo, Tokyo Institute of Technology, Graduate School of Information Science and Engineering, Associate Professor (70186469)
|
Project Period (FY) | 1996 – 1997 |
Project Status | Completed (Fiscal Year 1997) |
Budget Amount |
¥8,500,000 (Direct Cost: ¥8,500,000)
Fiscal Year 1997: ¥4,000,000 (Direct Cost: ¥4,000,000)
Fiscal Year 1996: ¥4,500,000 (Direct Cost: ¥4,500,000)
|
Keywords | Neural networks / Learning / Generalization ability / Incremental learning / Active learning / Over-learning / Admissibility / Realization of admissibility / Generalization / Optimally generalizing neural networks / Incremental construction of neural networks / Discrimination of training data / Rejection of training data |
Research Abstract |
The problem of active learning was discussed from the point of view of nonlinear function approximation. The level of generalization ability achievable with a fixed number of training examples depends strongly on the quality of the data used. It is also worth noting that many natural learning systems, humans included, are not simply passive but use at least some form of active learning to examine the problem domain. By active learning we mean any form of learning in which the learning program has some control over the inputs on which it trains. The key problems in active learning are 'optimal data selection' and 'incremental learning'. For the first problem, we gave a method for designing training data that provide the optimal generalization ability with respect to the Wiener learning criterion. For the second problem, we devised computationally efficient means of incrementally updating the learning operator and the learned function when a new training datum becomes available. Note that this method provides the same generalization ability as batch learning with the entire training data. These incremental results were successfully applied to the learning of sensorimotor maps of a 2-degree-of-freedom (DOF) robot arm.
We also proposed a new type of active learning, which is related to two learning criteria. The error-backpropagation algorithm is often used for learning in feedforward neural networks. However, a decrease of the training error does not necessarily imply a decrease of the generalization error and may even lower generalization ability; this phenomenon is called over-learning. To overcome this problem, we introduced the concept of admissibility, which guarantees that one learning criterion can serve as a substitute for another. Furthermore, we clarified that the admissibility between two learning criteria is controlled by the training data used. In this study, we devised a method to realize admissibility by adding and/or deleting training data when admissibility does not hold for a given set of training data.
|
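The claim that incremental learning can reproduce the result of batch learning on the entire training set can be illustrated with a standard recursive least-squares (RLS) update for a linear model. This is a generic sketch of the idea, not the operator-theoretic formulation used in the project; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
X = rng.normal(size=(20, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=20)

# Batch solution computed once from all training data.
w_batch = np.linalg.lstsq(X, y, rcond=None)[0]

# Incremental solution: initialize exactly from the first d examples,
# then fold in each new training datum one at a time.
P = np.linalg.inv(X[:d].T @ X[:d])       # inverse Gram matrix of the initial data
w = P @ X[:d].T @ y[:d]                  # initial least-squares weights
for x_new, y_new in zip(X[d:], y[d:]):
    Px = P @ x_new
    k = Px / (1.0 + x_new @ Px)          # gain vector for the new datum
    w = w + k * (y_new - x_new @ w)      # correct weights by the prediction error
    P = P - np.outer(k, Px)              # Sherman-Morrison update of the inverse Gram

# The incrementally learned weights coincide with the batch solution.
assert np.allclose(w, w_batch)
```

Each update costs O(d^2) rather than re-solving the full system, yet the final weights agree with batch learning on all 20 examples, mirroring the equivalence stated in the abstract.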