2005 Fiscal Year Final Research Report Summary
Analysis of online learning for a moving teacher
Project/Area Number | 15500151 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Sensitivity informatics/Soft computing |
Research Institution | Kobe City College of Technology |
Principal Investigator | MIYOSHI Seiji, Kobe City College of Technology, Department of Electronic Engineering, Associate Professor (10270307) |
Project Period (FY) | 2003 – 2005 |
Keywords | statistical learning theory / online learning / ensemble learning / generalization error / Hebbian learning / perceptron learning / AdaTron learning / statistical mechanics |
Research Abstract |
1. Ensemble learning of K nonlinear perceptrons, which determine their outputs by sign functions, is analyzed within the framework of online learning and statistical mechanics. Hebbian learning, perceptron learning, and AdaTron learning show different characteristics in their affinity for ensemble learning, that is, in "maintaining variety among students." The results show that AdaTron learning is superior to the other two rules with respect to this affinity.
2. Ensemble learning in which the teacher is a committee machine and the students are simple perceptrons is analyzed based on online learning theory and statistical mechanics.
3. Ensemble learning in which the teacher is a nonmonotonic perceptron and the students are simple perceptrons is analyzed based on online learning theory and statistical mechanics.
4. The generalization performance of a new student supervised by a moving teacher has been analyzed. A model composed of a fixed true teacher, a moving teacher, and a student, all of which are linear perceptrons with noise, has been treated analytically using statistical mechanics. It has been proven that the generalization error of the student can be smaller than that of the moving teacher, even though the student uses only examples from the moving teacher (a minimal simulation sketch of this setting is given after this summary).
5. The generalization performance of a student in a model composed of linear perceptrons (a true teacher, ensemble teachers, and the student) has been analyzed. Calculating the generalization error of the student analytically using statistical mechanics within the framework of online learning, it is proven that when the learning rate η < 1, the larger the number K and the variety of the ensemble teachers are, the smaller the generalization error is. On the other hand, when η > 1, these properties are completely reversed.
|
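The moving-teacher model of item 4 can be made concrete with a small numerical simulation. The sketch below is only an illustration under stated assumptions, not the report's analytical treatment: it assumes Gaussian inputs with component variance 1/N, independent additive Gaussian output noise on the true and moving teachers, squared-error (gradient-descent) online updates, and illustrative values for N, the learning rates, and the noise amplitudes. The student J is trained only on the moving teacher B's noisy outputs, while B itself tracks the fixed true teacher A; both are then compared against A.

import numpy as np

# Minimal sketch of the moving-teacher model (assumptions as described above).
rng = np.random.default_rng(0)
N = 1000                      # input dimension
steps = 20 * N                # number of online examples (rescaled time t = steps / N)
eta_B, eta_J = 0.3, 0.3       # learning rates of moving teacher and student (illustrative)
sig_A, sig_B = 0.1, 0.1       # output-noise amplitudes of true and moving teacher (illustrative)

A = rng.standard_normal(N)    # fixed true teacher (linear perceptron)
B = rng.standard_normal(N)    # moving teacher, updated from A's noisy outputs
J = np.zeros(N)               # student, updated from B's noisy outputs only

def gen_error(w):
    # Generalization error against the true teacher A for inputs x ~ N(0, I/N):
    # eps_g = E[((A - w) . x)^2] / 2 = ||A - w||^2 / (2N)
    return np.dot(A - w, A - w) / (2.0 * N)

for _ in range(steps):
    x = rng.standard_normal(N) / np.sqrt(N)        # input with ||x||^2 ~ 1
    yA = A @ x + sig_A * rng.standard_normal()     # noisy output of the true teacher
    yB = B @ x + sig_B * rng.standard_normal()     # noisy output of the moving teacher
    B += eta_B * (yA - B @ x) * x                  # moving teacher tracks the true teacher
    J += eta_J * (yB - J @ x) * x                  # student tracks the moving teacher only

print("eps_g of moving teacher:", gen_error(B))
print("eps_g of student:       ", gen_error(J))

Whether the student's generalization error falls below the moving teacher's depends on the learning rates and noise levels; the sketch only fixes a concrete instance of the model and is not a reproduction of the report's analytical results.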
Research Products (24 results)